
Trader mode: Actionable analysis for identifying opportunities and edge
Meta is developing a new frontier image and video-focused AI model codenamed “Mango”. You can read more about that here: https://finance.yahoo.com/news/meta-bets-mango-avocado-ai-224956071.html This market will resolve to "Yes" if Meta makes a new frontier AI model for image and video generation, or any model confirmed by Meta to be the model codenamed “Mango” during development, available to the general public by the listed date, 11:59 PM ET. Otherwise, this market will resolve to "No." A fro…
Prediction markets are pricing in high confidence that Meta will release its "Mango" AI model by June 30, 2026. On Polymarket, the "Yes" share is trading at 89 cents, implying an 89% probability. This suggests the market views a public release as very likely, though not absolutely guaranteed. The market currently has thin liquidity, with minimal trading volume, indicating this is an early-stage assessment based on available reporting rather than active speculative debate.
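To make the quoted odds concrete, the sketch below (illustrative only, not part of the market's own tooling) shows how a binary share price translates into an implied probability and into the payoff of a hypothetical position. The $0.89 price is taken from the paragraph above; the $1.00-per-share payout convention and the 100-share position size are assumptions for the example, and fees and slippage are ignored.

```python
# Minimal, illustrative sketch: reading a binary prediction-market price.
# Assumptions (not from the page above): each "Yes" share pays $1.00 if the
# market resolves Yes and $0.00 otherwise; fees and slippage are ignored.

def implied_probability(share_price: float) -> float:
    """On a $0-$1 binary contract, the price is read directly as the implied probability."""
    return share_price

def profit_if_yes(share_price: float, shares: int) -> float:
    """Payoff of holding `shares` Yes contracts if the market resolves Yes."""
    return shares * (1.0 - share_price)

price = 0.89   # "Yes" price quoted in the analysis above, in dollars
shares = 100   # hypothetical position size (an arbitrary example)

print(f"Implied probability: {implied_probability(price):.0%}")    # 89%
print(f"Cost of position:    ${price * shares:.2f}")               # $89.00
print(f"Profit if Yes:       ${profit_if_yes(price, shares):.2f}") # $11.00
print(f"Loss if No:          ${price * shares:.2f}")               # $89.00
```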
The primary factor is Meta's established, aggressive public roadmap in generative AI. Following the releases of the Llama language models and the Emu video generator, Meta has demonstrated a consistent strategy of developing and releasing frontier AI capabilities. The "Mango" project, as reported, aligns directly with this pattern, focusing on the competitive and high-demand domain of advanced image and video generation. A second factor is the reported timeline: initial leaks suggest development is already underway, making a release within the next 18 months technically plausible for a well-resourced team. The high probability reflects a belief that Meta will not cede this strategic ground to competitors like OpenAI and Google.
The most significant risk to the current high-confidence odds is development delay, which is common in complex AI projects. Technical hurdles in achieving desired quality or safety standards could push a release beyond the June 2026 deadline. A second catalyst is official communication from Meta. A confirmed announcement or demonstration of the "Mango" model would likely drive the probability toward 95% or higher. Conversely, any official statement downplaying the project or shifting corporate priorities could cause a sharp drop. Key dates to watch are Meta's future earnings calls and developer conferences, where such projects are often formally unveiled.
AI-generated analysis based on market data. Not financial advice.
This prediction market topic concerns the potential public release of Meta's frontier AI model codenamed 'Mango,' which is reportedly under development with a focus on advanced image and video generation. The market resolves based on whether Meta makes this specific model, or any model it confirms is the 'Mango' model from development, available to the general public by a specified deadline. The topic sits at the intersection of corporate AI strategy, technological competition, and the commercialization of generative AI. Recent reporting from Yahoo Finance in June 2024 revealed that Meta is actively developing 'Mango' alongside another model codenamed 'Avocado,' signaling a significant push to compete with leading image and video AI models from companies like OpenAI, Midjourney, and Google DeepMind. Interest in this market stems from Meta's substantial investments in AI research, its history of open-sourcing models like Llama, and the high-stakes race to dominate the next generation of multimodal AI capable of creating sophisticated visual content. The outcome will indicate whether Meta's internal research has matured into a publicly accessible product that could reshape the creative tools landscape and challenge existing market leaders.
Meta's journey in AI has evolved from academic research to a core product pillar. The company established its Fundamental AI Research (FAIR) lab in 2013, focusing initially on areas like computer vision and natural language processing. A significant historical precedent is the 2022 release of Make-A-Video, a research model for text-to-video generation. While not a public product, it demonstrated Meta's early capabilities in this domain. The more relevant precedent is Meta's strategy with large language models (LLMs). In July 2023, Meta open-sourced Llama 2, and in April 2024, it released the more powerful Llama 3. These moves established a pattern of developing frontier models internally and then releasing them publicly, sometimes in open-source form, to build ecosystem influence and catch up to competitors. The development of 'Mango' follows this playbook but applies it to the multimodal space of image and video, which has been dominated by other players. The historical arc shows Meta transitioning from a social media company to an AI research powerhouse, using open releases to attract developers and offset the market lead of rivals like OpenAI, which has kept its most advanced image and video models like DALL-E 3 and Sora under a more controlled, non-open-source release.
The public release of a frontier model like 'Mango' matters because it could democratize access to high-end generative video tools, potentially lowering costs and spurring innovation in content creation, marketing, and entertainment. If released as open-source, it would allow researchers and startups to build upon a state-of-the-art foundation, accelerating the entire field's progress and potentially leading to unforeseen applications. Conversely, it raises significant societal questions about the proliferation of deepfakes and synthetic media, challenging existing frameworks for content authentication and digital trust. For the tech industry, a successful 'Mango' release would solidify Meta's position as a leading AI entity, affecting stock valuations, talent recruitment, and the strategic direction of competitors. It would also test the prevailing regulatory environment, as policymakers in the US, EU, and elsewhere grapple with how to oversee powerful generative AI models. The outcome influences not just Meta's business but the balance of power in the global AI landscape.
As of June 2024, based on the initial reporting, Meta's 'Mango' model is in active development. The company has not made any official public statement confirming the codename or detailing the model's specifications or release timeline. The project exists within Meta's broader AI research division, likely competing for internal resources with other projects like 'Avocado.' The competitive landscape is intensifying, with OpenAI having previewed Sora and companies like Runway and Pika Labs advancing their own video generation tools. Meta's recent focus has been on the rollout of Llama 3 and its AI assistant across its apps, but industry observers are watching for signals, such as research paper publications or developer conference announcements, that would indicate 'Mango' is progressing toward a public release.
Meta Mango is the internal codename for a frontier artificial intelligence model under development at Meta Platforms, Inc. focused primarily on generating images and video from text prompts. It is reported to be part of Meta's next-generation AI efforts aimed at competing with similar advanced models from other tech companies.
Meta has not announced an official release date for the Mango model. Its public availability is currently speculative and the subject of prediction markets. The release timeline will depend on internal development progress, safety evaluations, and competitive strategy.
A direct comparison is not yet possible, as Mango is unreleased. Based on reporting, both Mango and OpenAI's Sora are frontier AI models for video generation. The key competitive differences will be revealed in factors like output video length, fidelity, coherence, and whether Meta releases Mango as an open-source model, unlike OpenAI's currently closed approach with Sora.
Meta has not stated if Mango will be open source. However, the company has a recent track record of open-sourcing its large language models (Llama 2, Llama 3), making it a plausible, but not guaranteed, strategy for Mango to build developer adoption and ecosystem influence.
According to reports, both Mango and Avocado are codenames for next-generation AI models at Meta. While Mango is focused on image and video generation, details on Avocado's specific focus are less clear. They likely represent different research vectors or model architectures within Meta's broader AI portfolio.
Educational content is AI-generated and sourced from Wikipedia. It should not be considered financial advice.
2 markets tracked

| Market | Platform | Price |
|---|---|---|
| — | Poly | 88% |
| — | Poly | 51% |


Add this market to your website
<iframe src="https://predictpedia.com/embed/QnouSH" width="400" height="160" frameborder="0" style="border-radius: 8px; max-width: 100%;" title="Meta &quot;Mango&quot; model released by...?"></iframe>