Meta Platforms - statistics & facts

The rise and rise of Facebook led to the creation of one of the most influential companies in the world: Meta Platforms Inc. In the ever-evolving landscape of social media, Meta shapes the way billions of people connect and interact online. From humble beginnings to audiences of billions, Mark Zuckerberg’s Meta remains king of the social media jungle.

Clear market dominance

Meta Platforms owns social media giants Facebook, Instagram, WhatsApp, and Messenger, among others, all of which are household names that continue to see user growth. As of the fourth quarter of 2024, the company reported a staggering 3.35 billion daily active users across its core products. This huge user base translates into significant profit, with Meta’s average revenue per user reaching 49.63 U.S. dollars in 2024, up from 44.60 U.S. dollars in the previous year. Facebook is by far the most popular social network in the world, with just over three billion users, followed by YouTube, Instagram, WhatsApp, and TikTok, respectively. Meta’s Threads, which was released in July 2023, broke records by generating 150 million mobile app downloads in just six days. For context, the mobile games Pokémon GO and Call of Duty: Mobile took 33 and 106 days, respectively, to reach that number of downloads.

Let the money talk

In 2024, Meta’s annual revenue reached 164.5 billion U.S. dollars, a significant increase from 134.9 billion U.S. dollars in 2023. Around 98.6 percent of Meta’s revenue is generated by its Family of Apps, with the remainder generated by Reality Labs. Moreover, most of Meta’s revenue is produced through advertising, with over 160 billion U.S. dollars generated via ads in 2024. The company incurred advertising expenses of over 2.06 billion U.S. dollars in its latest financial year. As of October 2024, Meta Platforms’ market value stood at 1.4 trillion U.S. dollars, ranking in third place after Alphabet and Amazon.
Meta’s lead role in social media marketing

Social media platforms continue to play a crucial role in brand promotion and customer engagement. A 2024 survey revealed that Facebook was the most important social media platform for marketers worldwide, with 44 percent of respondents highlighting its significance. Additionally, for business-to-consumer (B2C) marketers, Facebook was the most widely used platform, with 91 percent reporting its use for marketing purposes. Meta’s networks fall short when it comes to business-to-business (B2B) marketers, who favor LinkedIn.

It's not all plain sailing

Meta’s success has not been without problems. The company has struggled to comply with online privacy regulations, which has often resulted in large fines. Among the most recent was the 91 million euro fine issued in 2024 by the Irish Data Protection Commission (DPC). The fine was issued after the Irish data privacy watchdog found that user passwords had been stored in plaintext on Meta's internal systems rather than protected with cryptographic hashing or encryption. Between 2022 and 2024, the company faced numerous penalties from the DPC and the EU. Additionally, Meta’s networks are no stranger to problematic content, with millions of pieces of harmful material of various types being removed every quarter. Regardless of the issues that people may face on Meta’s well-known social media platforms, users continue to flock to these networks. After 21 years, Mark Zuckerberg’s company is still ahead of the competition.

Source: Statista. Published by Daniel Slotta, Dec 17, 2025

Meta reports mixed financial results amid spree of AI hiring and spending
Tech company brings in record quarterly revenue but major tax bill dampens earnings per share
The Guardian, Johana Bhuiyan, Wed 29 Oct 2025 17.57 EDT

Meta reported mixed financial results for the third quarter of 2025.
The company brought in record quarterly revenue but reported a major tax bill that dampened earnings per share, the company announced on Wednesday. The financial results come as Meta ends a multibillion-dollar hiring spree focused on artificial intelligence talent. The tech giant earned $51.24bn in quarterly revenue, beating Wall Street expectations and the company’s own projections for third-quarter sales. However, it reported earnings per share (EPS) of $1.05, far below Wall Street expectations of $6.70 in EPS. The major drop was due to a one-time non-cash income tax charge of $15.93bn. The EPS would have been $7.25 without this one-time charge, the company said.

The report, and the scheduled investor call, give investors another opportunity to find out whether the company’s lavish spending on AI infrastructure is justified. The company projected full-year total expenses would be between $116bn and $118bn, upping the lower end of the range from $114bn. The company also expects 2025 capital expenditures to be between $70bn and $72bn, up from a previously projected range of $66bn to $72bn. Meta said its fourth-quarter revenue would likely fall somewhere between $56bn and $59bn.

“We had a strong quarter for our business and our community,” said Mark Zuckerberg, Meta’s founder and CEO. “Meta Superintelligence Labs is off to a great start and we continue to lead the industry in AI glasses. If we deliver even a fraction of the opportunity ahead, then the next few years will be the most exciting period in our history.”

Jesse Cohen, senior analyst at Investing.com, said the latest report reveals “the growing tension between the company’s massive AI infrastructure investments and investor expectations for near-term returns”. Spending is not expected to slow down any time soon, however. On the earnings call, Susan Li, the company’s chief financial officer, said Meta will need to “invest aggressively” in 2026 to meet the company’s computational needs.
Earlier this month, the company announced a new joint venture with Blue Owl Capital that would help the firms build and finance the new $27bn Hyperion data center campus in Louisiana, the biggest data center Meta is involved in developing. “We also anticipate total expenses will grow at a significantly faster percentage rate in 2026 than 2025, with growth driven primarily by infrastructure costs, including incremental cloud expenses and depreciation,” Li said. “Employee compensation costs will be the second largest contributor to growth, as we recognize a full year of compensation for employees hired throughout 2025, particularly AI talent, and add technical talent in priority areas.”

When asked how the company is balancing releasing products that will show near-term returns on investment with these larger research-focused projects, Zuckerberg said that Meta AI is a “massive latent opportunity” and pointed to the company’s ability to bring its new products to billions of users. “The research is going to enable technological capabilities to exist and then those capabilities can get built into all kinds of different products,” Zuckerberg said.

It’s the first financial update since Meta said it planned to lay off 600 staffers from its AI unit – the same unit the company went on a spending and hiring spree to restructure and fill with the top AI talent from other companies. The company said the layoffs were an effort to reduce the bloat within the company’s “superintelligence” unit and brought the number of employees there down to just under 3,000. Zuckerberg said the investment into Meta’s Superintelligence Labs helped the company build what he described as “the highest talent density lab in the industry at this point”.

The company’s stock has been on a steady rise over the past six months. Its previous two earnings reports have beaten Wall Street expectations. The wider US stock market likewise reached record highs that week.
Meta also launched its new Ray-Ban Display glasses last month, which feature a screen embedded in the lenses, and analysts were eager to hear sales figures. But the unit responsible for these glasses as well as Meta’s virtual reality headsets posted a massive $4.4bn loss. Zuckerberg said the company’s collaborations with Ray-Ban and Oakley on these AI glasses were going well and that these investments will likely be very profitable. Meta’s original camera glasses, simply dubbed Meta Ray-Bans, proved to be a popular gadget.

Both types of glasses have already prompted privacy concerns. While Meta has designed the glasses not to work if a light that notifies people that the glasses are recording is covered, a $60 modification can disable the light, 404 Media reported. “I suspect these glasses, in particular, will predominantly appeal to early ‘tech-curious’ adopters, and that scheduled demos will far outpace sales,” said Mike Proulx, Forrester VP research director.

On the advertising side, Meta lost its accreditation from the Media Rating Council, a non-profit that sets industry-wide standards for brand safety, after the company decided to pull out of the organization’s annual audits. The accreditation signals to advertisers that the content their ads may appear next to on the platform would not be harmful to their brand. Meta received the accreditation just four months before it was stripped. Analysts were optimistic that the loss of accreditation would not ultimately hurt Meta’s ability to attract advertisers. “While this may raise eyebrows among advertisers, it won’t deter them from investing in Meta due to its sheer audience reach and brand reliance,” Proulx said.
“Brands will overlook potential brand-safety risks as long as their Meta media investments continue to perform.”

From Llamas to Avocados: Meta’s shifting AI strategy is causing internal confusion
CNBC, Jonathan Vanian
Published Tue, Dec 9 2025 7:00 AM EST; updated Tue, Dec 9 2025 8:33 PM EST

KEY POINTS
• Meta is pursuing a new frontier AI model, codenamed Avocado, that could be proprietary instead of open source, CNBC has learned.
• The company is trying to keep pace with artificial intelligence rivals OpenAI and Google after spending $14.3 billion to bring in the founder of Scale AI and a handful of top researchers and engineers.
• “In many ways, Meta has been the opposite of Alphabet, where it entered the year as an AI winner and now faces more questions around investment levels and ROI,” analysts at KeyBanc Capital Markets wrote in a note to clients late last month.

Meta CEO Mark Zuckerberg was so optimistic last year about his company’s Llama family of artificial intelligence models that he predicted they would become the “most advanced in the industry” and “bring the benefits of AI to everyone.” But after including a whole section on Llama in his opening remarks during Meta’s earnings call in January of this year, he mentioned the brand name only once on the latest call in October. The company’s obsession with its open-source large language model has given way to a very different approach to AI, one focused on a multibillion-dollar hiring spree to bring in top industry talent that could help Meta take on the likes of OpenAI, Google and Anthropic. As 2025 comes to a close, Meta’s strategy remains scattershot, according to insiders and industry experts, feeding the perception that the company has fallen further behind its top AI rivals, whose models are rapidly gaining adoption in the consumer and enterprise markets. Meta is pursuing a new Llama successor and frontier AI model, codenamed Avocado, CNBC has learned.
People with knowledge of the matter said many within the company were expecting the model to be released before the end of this year. The plan is now for that to happen in the first quarter of 2026, a person familiar with the company’s plans told CNBC. The model is undergoing various rounds of training-related performance testing intended to ensure the system is well received when it eventually debuts, said the people, who asked not to be named because they weren’t authorized to speak on the matter. “Our model training efforts are going according to plan and have had no meaningful timing changes,” a Meta spokesperson said in a statement.

With its stock underperforming the broader tech sector this year and badly trailing Google parent Alphabet, Wall Street is looking for a sense of direction and a path to a return on investment after Meta spent $14.3 billion in June to hire Scale AI founder Alexandr Wang and a handful of his top engineers and researchers. Four months after that announcement, which included Meta purchasing a big stake in Scale, the social media company raised its 2025 guidance for capital expenditures to between $70 billion and $72 billion from a prior range of $66 billion to $72 billion. “In many ways, Meta has been the opposite of Alphabet, where it entered the year as an AI winner and now faces more questions around investment levels and ROI,” analysts at KeyBanc Capital Markets wrote in a November note to clients. The firm recommends buying both stocks.

At the heart of Meta’s challenge is the sustained dominance of its core business: digital advertising. Even with annual sales in excess of $160 billion, Meta’s ad targeting business, driven by massive improvements in AI and the popularity of Instagram, is growing revenue north of 20% a year. Investors have lauded the company for using AI to bolster the strength of its cash cow and to make the organization more efficient and less bloated.
But Zuckerberg has much grander ambitions, and the new guard he’s brought in to push the future vision of AI has no background in online ads. The 41-year-old founder, with a net worth of more than $230 billion, has suggested that if Meta doesn’t take big swings, it risks becoming an afterthought in a world that’s poised to be defined by AI.

Until recently, Meta’s unique position in AI was the open-source nature of its Llama models. Unlike other AI models, Meta’s technology was made freely available so third-party researchers and others could access the tools and ultimately improve them. “Today, several tech companies are developing leading closed models,” Zuckerberg wrote in a blog post in July 2024. “But open source is quickly closing the gap.”

He’s since started changing his tune. Zuckerberg hinted over the summer that Meta was considering shaking up its approach to open source after the April release of Llama 4, which failed to captivate developers. Zuckerberg said in July that “we’ll need to be rigorous about mitigating these risks and careful about what we choose to open source.” Avocado, when it’s eventually made available, could be a proprietary model, according to people familiar with the matter. That means outside developers wouldn’t be able to freely download its so-called weights and related software components.

Some at Meta were upset that the R1 model released by Chinese AI lab DeepSeek earlier this year incorporated pieces of Llama’s architecture, the people said, further underscoring the risks of open source and hammering home the idea that the company should overhaul its strategy. The company’s high-priced AI hires and leaders of the recently launched Meta Superintelligence Labs, or MSL, have also questioned the open-source AI strategy and favored creating a more powerful proprietary AI model, CNBC reported in July.
A Meta spokesperson said at the time that the company’s “position on open source AI is unchanged.” The Llama 4 flub was a significant catalyst in Zuckerberg’s pivot, the people said, and also led to a major internal shake-up. Chris Cox, Meta’s chief product officer and a 20-year company veteran who was hired as its 13th software engineer, no longer oversees the AI division, formally known as the GenAI unit, after the botched release, the people said.

Zuckerberg went on a spending spree to retool Meta’s AI leadership. He landed on Wang, then Scale AI’s 28-year-old CEO, who was named Meta’s new chief AI officer and, in August, became the head of an elite unit called TBD Lab. Avocado is being developed inside TBD, people familiar with the matter said. OpenAI CEO Sam Altman said in June that Meta was trying to lure talent from his company with gigantic pay packages, including sky-high $100 million signing bonuses, which Meta said at the time was a misrepresentation. Along with Wang came other tech bigwigs, including former GitHub CEO Nat Friedman, who heads the product and applied research arm of MSL, and Shengjia Zhao, a ChatGPT co-creator. They’ve brought with them modern methods that have become the standard for frontier AI development in Silicon Valley, and have upended the traditional software development process inside Meta, the people said.

Meta’s AI culture shift

Wang is now under pressure to deliver a top-tier AI model that helps the company regain momentum against rivals like OpenAI, Anthropic and Google, the people said. That pressure has only increased as competitors stepped up their game. Google’s Gemini 3, unveiled last month, has drawn solid reviews from users and analysts. OpenAI recently announced new updates to its GPT-5 AI model, while Anthropic debuted its Claude Opus 4.5 model in November shortly after releasing two other major models.
Analysts previously told CNBC that there’s no clear leading AI model, because some perform better on certain tasks like conversations or coding. But the one constant is that all of the major model creators have to spend a lot of money on AI to maintain any competitive edge, they said. A hefty dose of that spending lines the pockets of Nvidia, the leading developer of AI graphics processing units. Nvidia CEO Jensen Huang laid out the state of play during his company’s earnings call in November, after the chipmaker reported 62% year-over-year revenue growth. He highlighted a number of model developers as big customers, including Elon Musk’s xAI. “We run OpenAI. We run Anthropic. We run xAI because of our deep partnership with Elon and xAI,” Huang said. “We run Gemini. We run Thinking Machines. Let’s see, what else do we run? We run them all.” At no point did Huang reference Llama, although elsewhere on the call he said Meta’s Gem, “a foundation model for ad recommendations trained on large-scale GPU clusters,” drove an improvement in ad conversions at Meta in the second quarter. Wang isn’t the only Meta exec feeling the heat. Friedman has also been tasked with producing a breakout AI product, the people said. He was responsible for Meta’s September launch of Vibes, a feed of AI-generated short videos, which is widely viewed as inferior to OpenAI’s Sora 2, they said. Former employees and creators told CNBC that the product was rushed to market and lacked key features, like the ability to generate realistic lip-synched audio. Although Vibes has attracted more interest to the company’s stand-alone Meta AI app, it trails the Sora app as measured by downloads, according to data provided to CNBC by Appfigures. Pressure is being felt across Meta’s AI organizations, where 70-hour workweeks have become the norm, the people said, while teams have also been hit with layoffs and restructurings throughout the year. 
In October, Meta cut 600 jobs in MSL to reduce layers and operate more quickly. Those layoffs impacted employees in areas like the Fundamental Artificial Intelligence Research unit, or FAIR, and played a key role in chief AI scientist Yann LeCun’s decision to leave the company to launch a startup, according to people with knowledge of the matter. LeCun declined to comment.

Zuckerberg’s high-stakes decision to turn to outsiders like Wang and Friedman to lead the company’s AI efforts represented a major change for a company that’s historically promoted long-tenured workers to top posts, the people said. In Wang and Friedman, Zuckerberg has handed the controls to experts in infrastructure and related systems, rather than consumer apps. The new leaders have also brought a different management style, one that’s unfamiliar inside Meta. In particular, insiders told CNBC that Wang and Friedman are more cloistered in their communications, a contrast to a historically open approach of sharing work and chatting on the company’s Workplace internal social network. Members of Wang’s TBD Lab, who work near Zuckerberg’s office, don’t use Workplace, people familiar with the matter said, adding that they’re not even on the network and that the group operates like a separate startup.

However, Zuckerberg isn’t giving the new AI leadership team complete autonomy. Aparna Ramani, an engineering vice president who has been with Meta for nearly a decade, has been put in charge of overseeing the distribution of computing resources for MSL, the people said. And in October, Vishal Shah was moved from leading the company’s metaverse initiatives within Reality Labs, where he’d been for four years, to a new role as vice president of AI Products, working with Friedman. Shah is considered a loyal lieutenant who has helped act as a bridge between the company’s traditional social apps like Instagram and newer projects like Reality Labs, the people said.
Meta confirmed last week that it plans to cut resources for its virtual reality and related metaverse initiatives, shifting its attention to its popular AI-infused glasses developed with EssilorLuxottica.

‘Demo, don’t memo’

One of the biggest points of tension between the old and the new is in the realm of software development, people familiar with the matter said. In creating products, Meta has traditionally sought input from numerous groups responsible for areas like frontend user interface, design, algorithmic feeds and privacy, the people said. The multistep process was intended to ensure some level of uniformity among the company’s apps that attract billions of users each day. But the many internal tools built over the years to help coders create software and features weren’t developed to accommodate foundation models. Meta’s new AI leaders, notably Friedman, view them as bottlenecks slowing down what should be a rapid-fire development process, the people said. Friedman has called for MSL to use newer tools that have been calibrated to incorporate multiple AI models and various kinds of coding automation software often called AI agents, the people said.

“They have this mantra now saying ‘Demo, don’t memo,’” Lovable CEO Anton Osika said in October at the Masters of Scale Summit in San Francisco, about Meta’s new development process. Osika said Meta employees have been using Lovable’s tools to more quickly build internal apps, specifically referencing the company’s finance teams, which have turned to Lovable to create software for tracking head count and planning. While Meta continues retooling its app development methods and pushes toward releasing Avocado, the company has been experimenting with other AI models on its products. Vibes, for instance, relied on AI models from Black Forest Labs and Midjourney, a startup that counts Friedman as an advisor.
Meta is also altering its approach to infrastructure, and is increasingly turning to third-party cloud computing services like CoreWeave and Oracle for developing and testing AI features as it builds out its own massive data centers, the people said. The social media giant announced in October that it signed a joint venture agreement with Blue Owl Capital as part of a $27 billion deal to help fund and develop the gargantuan Hyperion data center in Richland Parish, Louisiana. The company said at the time that the partnership provides “the speed and flexibility” Meta needs to build the data center and support its “long-term AI ambitions.”

Despite the company’s challenges in 2025, Zuckerberg’s message to employees and investors is that he’s more committed than ever to winning. At the top of the company’s earnings call in October, Zuckerberg said MSL is “off to a strong start.” “I think that we’ve already built the lab with the highest talent density in the industry,” Zuckerberg said. “We’re heads down developing our next generation of models and products and I’m looking forward to sharing more on that front over the coming months.”

Why Meta could struggle to defend itself against 41 states (and D.C.) suing over Facebook, Instagram’s alleged harm to kids
Northeastern Global News, by Cody Mello-Klein, January 5, 2026

In what could be a landmark moment in the world of tech, attorneys general in 41 states and Washington, D.C., are suing Meta for knowingly endangering children and getting them addicted to Facebook and Instagram, despite statements to the contrary. Colorado and California are leading the charge with a joint lawsuit that includes 33 other states. They allege that Meta “harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens,” according to the lawsuit. The District of Columbia and eight other states have filed separate lawsuits against the company.
The lawsuits from dozens of attorneys general claiming that Meta violated consumer protection laws evoke the kind of landmark legal actions taken against Big Tobacco and Big Pharma, says Hilary Robinson, associate professor of law and sociology at Northeastern University. If successful, these cases could be transformative for how tech companies are held accountable for consumer protection. “It seems to me that the time is right for harnessing state power,” Robinson says. “This is just another attempt to figure out how to use it in an effective way that doesn’t destroy the benefits for people who use these things but reins in these really negative externalities that have had really serious consequences for individual families.”

It’s a big “if,” but Robinson says the attorneys general are “likely to succeed if they’re able to find in discovery the kinds of things that in the opioid lawsuit they were able to find, like clear knowledge of harm.” Meta has already provided a lot of that evidence itself in some ways, Robinson says. In 2021, a massive internal leak led to an investigative series from the Wall Street Journal, called “The Facebook Files.” Documents obtained by the Journal laid out how Facebook was both aware of the negative impact its platform could have on people, including teenage girls, and actively looking at ways of attracting young people. Then, there is an infamous 2014 study published by Facebook in the Proceedings of the National Academy of Sciences. The study, conducted on 700,000 users without their permission, showed evidence of mass-scale “emotional contagion” through the platform. Emotional contagion is the spontaneous spread of emotions between people. “Meta is going to have a hard time defending against this one because they published that study,” Robinson says.
“It’s clear that they looked at their user population as a group whose behavior they are interested in and they can influence.” The negative effect social media platforms like Facebook and Instagram have on children and adolescents has been extensively documented, including by Rachel Rodgers, an associate professor of applied psychology at Northeastern. Rodgers’ research focuses on how Instagram can create body image concerns and risks for disordered eating in teens. “This [occurs] through different mechanisms, one of which is the fact that there are a lot of pictures on these platforms that present unrealistic images of people,” Rodgers says. “This leads to appearance comparisons and the idea that this is achievable, that this is possible and this is the way you should look.” Pursuing a consumer protection case is a novel, and potentially impactful, legal approach in the tech world, where companies often argue they are information service companies, not consumer product providers. “There are all sorts of ways that we regulate what products are in circulation in the market economy and what can go into them,” Robinson says, so if this approach is successful, it could set a precedent for further regulation in the tech sector. But even if the cases are successful, the traditional strategies for how to hold a company like Meta accountable, like financial penalties, “might not go quite far enough,” Robinson says. “The expectation is if you really harm the bottom line or the profit of a business, then you change their practices going forward,” Robinson says. “That said, a lot of these businesses are beyond the scope of the regular economy at this point.” “I’m much more in favor of thinking about how law and technology are both engineering sciences, and how the law can intervene in the design of these kinds of technologies,” she adds. Robinson points to laws adopted in Utah that limit how children can use social media. 
The law targets specific features of social media, notably push notifications, that can keep young people hooked. “That law requires that the default be put to no notifications, so when you download the app, the notifications are off and that if they’re turned on, they have to be held until later,” Robinson says. “That’s a forward-facing intervention into how the app itself operates.”

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App Was Really Like.
When a Meta security expert told Mark Zuckerberg that Instagram’s approach to protecting teens wasn’t working, the CEO didn’t reply. Now the former insider is set to tell Congress about the predatory behavior.
The Wall Street Journal, Nov. 2, 2023 9:00 pm ET, By Jeff Horwitz

In the fall of 2021 a consultant named Arturo Bejar sent Meta Platforms Chief Executive Mark Zuckerberg an unusual note. “I wanted to bring to your attention what I believe is a critical gap in how we as a company approach harm, and how the people we serve experience it,” he began. Though Meta regularly issued public reports suggesting that it was largely on top of safety issues on its platforms, he wrote, the company was deluding itself. The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.

For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules. “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants.
“She said if the only thing that happens is they get blocked, why wouldn’t they?” For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences. The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.

Arturo Bejar left Facebook in 2015 for personal reasons and returned four years later as a consultant on user-safety issues.

“I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working. But Bejar declared himself optimistic that Meta was up to the task: “I know that everyone in m-team deeply cares about the people we serve,” he wrote, using Meta’s internal shorthand for Zuckerberg and his top deputies.

Two years later, the problems Bejar identified remain unresolved, and new blind spots have emerged. The company launched a sizable child-safety task force in June, following revelations that Instagram was cultivating connections among large-scale networks of pedophilic users, an issue the company says it’s working to address. This account is based on internal Meta documents reviewed by The Wall Street Journal, as well as interviews with Bejar and current and former employees who worked with him during his stint at the company as a consultant. Meta owns Facebook and Instagram. Asked for comment for this article, Meta disputed Bejar’s assertion that it paid too little attention to user experience and failed to sufficiently act on the findings of its Well-Being Team.
During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. For a consultant, Bejar had unusually deep roots at the company. He had first been hired as a Facebook engineering director in 2009. Responsible for protecting the platform’s users, he’d initially viewed the task as traditional security work, building tools to detect hacking attempts, fight fraud rings and remove banned content. Monitoring the posts of what was then Facebook’s 300 million-odd users wasn’t as simple as enforcing rules. There was too much interaction on Facebook to police it all, and what upset users was often subjective. Bejar loved the work, only leaving Facebook in 2015 because he was getting divorced and wanted to spend more time with his children. Having joined the company long before its initial public offering, he had the resources to spend the next few years on hobbies—including restoring vintage cars with his 14-year-old daughter, who documented her new pastime on Instagram. That’s when the trouble began. A girl restoring old cars drew plenty of good attention on the platform—and some real creeps, such as the guy who told her that the only reason people watched her videos was “because you’ve got tits.”

Zuckerberg changed Facebook’s name to Meta Platforms in October 2021, days before Bejar’s two-year consulting gig ended. PHOTO: CONSTANZA HEVIA H. FOR THE WALL STREET JOURNAL

“Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting the comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. Bejar began peppering his former colleagues at Facebook with questions about what they were doing to address such misbehavior. The company responded by offering him a two-year consulting gig. That was how Bejar ended up back on Meta’s campus in the fall of 2019, working with Instagram’s Well-Being Team. Though not high in the chain of command, he had unusual access to top executives—people remembered him and his work. From the beginning, there was a hurdle facing any effort to address widespread problems experienced by Instagram users: Meta’s own statistics suggested that big problems didn’t exist. During the four years Bejar had spent away from the company, Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content— things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material. According to the company’s own metrics, the approach was tremendously effective. Within a few years, the company boasted that 99% of the terrorism content that it took down had been removed without a user having reported it. While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. 
The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed. As a data scientist warned Guy Rosen, Facebook’s head of integrity at the time, Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision. “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded on Facebook’s internal communication platform. Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence. Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced. Yet Meta’s publicly released prevalence numbers were invariably tiny. According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. “There’s a grading-your-own-homework problem,” said Zvika Krieger, a former director of responsible innovation at Meta who worked with the Well-Being Team. 
“Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.” Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it. Modeled on a recurring survey of Facebook users, the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.” A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should. “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.” While “bad experiences” were a problem for users across Meta’s platforms, they seemed particularly common among teens on Instagram. Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity. More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. The initial figures had been even higher, but were revised down following a reassessment. Stone, the spokesman, said the survey was conducted among Instagram users worldwide and did not specify a precise definition for unwanted advances. The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued. 
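The mismatch between the two measurements is easy to make concrete with a little arithmetic. A minimal sketch follows; the function names and the survey sample numbers are illustrative assumptions chosen only to reproduce the roughly 100x gap described above, not Meta's internal figures—the sole figure taken from the article is the bullying prevalence of 8 violating views per 10,000.

```python
def prevalence_pct(violating_views: int, total_views: int) -> float:
    """Prevalence as the article describes it: the percentage of
    content views worldwide that explicitly violate a platform rule."""
    return 100.0 * violating_views / total_views

def survey_rate_pct(reporting: int, surveyed: int) -> float:
    """Percentage of surveyed users who recall a bad experience
    (e.g., witnessing bullying) in the past seven days."""
    return 100.0 * reporting / surveyed

# From the article: rule violations for bullying in 8 of 10,000 views.
meta_rate = prevalence_pct(8, 10_000)      # 0.08%

# Illustrative (hypothetical) survey numbers producing the ~100x gap.
user_rate = survey_rate_pct(800, 10_000)   # 8.0%

print(f"prevalence {meta_rate:.2f}% vs. survey {user_rate:.1f}% "
      f"-> roughly {user_rate / meta_rate:.0f}x gap")
```

The point of the comparison is the one Bejar made: a view-weighted prevalence metric and a per-user experience survey measure different things, so a platform can report a near-zero prevalence while a large share of users still report bad experiences.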
And if the company was going to address issues such as unwanted sexual advances, it would have to begin letting users “express these experiences to us in the product.” Other teams at Instagram had already worked on proposals to address the sorts of problems that BEEF highlighted. To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw. It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical. And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them. One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.” But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. Meta disputed that the company had rejected the Well-Being Team’s approach. “It’s absurd to suggest we only started user perception surveys in 2019 or that there’s some sort of conflict between that work and prevalence metrics,” Meta’s Stone said, adding that the company found value in each of the approaches. “We take actions based on both and work on both continues to this day.” Stone pointed to research indicating that teens face similar harassment and abuse offline. In an email to Bejar, Meta COO Sheryl Sandberg said she recognized that the misogyny his daughter faced was withering. 
PHOTO: JACQUES DEMARTHON/AGENCE FRANCE-PRESSE/GETTY IMAGES

With the clock running down on his two-year consulting gig at Meta, Bejar turned to his old connections. He took the BEEF data straight to the top. After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.” With just weeks left at the company, Bejar emailed Zuckerberg, Chief Operating Officer Sheryl Sandberg, Chief Product Officer Chris Cox and Instagram head Adam Mosseri, blending the findings from BEEF with highly personal examples of how the company was letting down users like his own daughter. “Policy enforcement is analogous to the police,” he wrote in the Oct. 5, 2021, email—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.

Instagram head Adam Mosseri, Bejar said, acknowledged the problem Bejar described. PHOTO: TOM WILLIAMS/ZUMA PRESS

The timing of Bejar’s note was unfortunate. He sent it the same day as the first congressional hearing featuring Frances Haugen, a former Facebook employee who alleged that the company was covering up internally understood ways that its products could harm the health of users and undermine public discourse. Her allegations and internal documents she took from Meta formed the basis of the Journal’s Facebook Files series. Zuckerberg had offered a public rebuttal, declaring that “the claims don’t make any sense” and that both Haugen and the Journal had mischaracterized the company’s research into how Instagram could under some circumstances corrode the self-esteem of teenage girls.
In response to Bejar’s email, Sandberg sent a note to Bejar only, not the other executives. As he recalls it, she said Bejar’s work demonstrated his commitment to both the company and its users. On a personal level, the author of the hit feminist book “Lean In” wrote, she recognized that the misogyny his daughter faced was withering. Mosseri wrote back on behalf of the group, inviting Bejar to come discuss his findings further. Bejar says he never heard back from Zuckerberg. In his remaining few weeks, Bejar worked on two final projects: drafting a version of the Well-Being Team’s work for wider distribution inside Meta and preparing for a half-hour meeting with Mosseri. As Bejar recalls it, the Mosseri talk went well. Though there would always be things to improve, Bejar recalled Mosseri saying, the Instagram chief acknowledged the problem Bejar described, and said he was enthusiastic about creating a way for users to report unwelcome contacts rather than simply blocking them. “Adam got it,” Bejar said. But Bejar’s efforts to share the Well-Being Team’s data and conclusions beyond the company’s executive ranks hit a snag. After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication. Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem. After weeks of haggling with Meta’s communications and legal staff, Bejar secured permission to internally post a sanitized version of what he’d sent Zuckerberg and his lieutenants. The price was that he omit all of the Well-Being Team’s survey data. “I had to write about it as a hypothetical,” Bejar said.
Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.

Bejar emailed his user-safety findings to Zuckerberg and other executives the same day former Facebook employee Frances Haugen testified on Capitol Hill, alleging that the company was covering up risks it knew about. PHOTO: AL DRAGO/BLOOMBERG NEWS

Posting the watered-down Well-Being research was Bejar’s final act at the company. He left at the end of October 2021, just days after Zuckerberg renamed the company Meta Platforms. Bejar left dispirited, but chose not to go public with his concerns—his Well-Being Team colleagues were still trying to push ahead, and the last thing they needed was to deal with the fallout from another whistleblower, he told the Journal at the time. The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.” If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. He’s scheduled to testify in front of a Senate subcommittee on Tuesday.

Instagram makes teen accounts private as pressure mounts to protect children
September 22, 2024 2:55 AM, By Associated Press

FILE - Students use their cellphones as they leave the Ramon C. Cortines School of Visual and Performing Arts High School in Los Angeles, California, for the day, Aug. 13, 2024.
Instagram is making teen accounts private by default as it tries to make the platform safer for children amid a growing backlash against how social media affects young people's lives. Starting September 24, in the U.S., U.K., Canada and Australia, anyone under 18 who signs up for Instagram will be placed into a restrictive teen account. Those under 18 with existing accounts will be migrated over the next 60 days. Teens in the European Union will see their accounts adjusted later this year. Parent company Meta acknowledges that teenagers may lie about their age and says it will require them to verify their ages in more instances, like if they try to create a new account with an adult birthday. The California-based U.S. company also said it is building technology that proactively finds teen accounts that pretend to be grownups and automatically places them into restricted teen accounts. The teen accounts will be private by default. Private messages are restricted so teens can only receive them from people they follow or are already connected to. "Sensitive content" — such as videos of people fighting or those promoting cosmetic procedures — will be limited, Meta said. Teens will also get notifications if they are on Instagram for more than 60 minutes and a "sleep mode" will be enabled that turns off notifications and sends auto-replies to direct messages from 10 p.m. until 7 a.m. While these settings will be turned on for all teens, 16- and 17-year-olds will be able to turn them off. Kids under 16 will need their parents' permission to do so. "The three concerns we're hearing from parents are that their teens are seeing content that they don't want to see or that they're getting contacted by people they don't want to be contacted by or that they're spending too much time on the app," said Naomi Gleit, head of product at Meta. "So teen accounts is really focused on addressing those three concerns." 
Changes follow lawsuits The announcement comes as the company faces lawsuits from dozens of U.S. states that accuse it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms. While Meta didn't give specifics on how the changes might affect its business, the company said the changes may mean that teens will use Instagram less in the short term. Emarketer analyst Jasmine Enberg said the revenue impact of the changes "will likely be minimal." "Even as Meta continues to prioritize teen safety, it's unlikely that it's going to make sweeping changes that would cause a major financial hit," she said, adding that the teen accounts are unlikely to significantly affect how engaged teens are with Instagram "not in the least because there are still plenty of ways to circumvent the rules, and could even make them more motivated to work around the age limits." 'An important first step' New York Attorney General Letitia James said Meta's announcement was "an important first step, but much more needs to be done to ensure our kids are protected from the harms of social media." James' office is working with other New York officials on how to implement a new state law intended to curb children's access to what critics call addictive social media feeds. Others were more critical. Nicole Gil, the co-founder and executive director of the nonprofit Accountable Tech, called Instagram's announcement the "latest attempt to avoid actual independent oversight and regulation and instead continue to self-regulate, jeopardizing the health, safety, and privacy of young people." "Today's PR exercise falls short of the safety by design and accountability that young people and their parents deserve and only meaningful policy action can guarantee," she said. 
"Meta's business model is built on addicting its users and mining their data for profit; no amount of parental and teen controls Meta is proposing will change that." Sen. Marsha Blackburn (R-Tenn.), the co-author of the Kids Online Safety Act that recently passed the Senate, questioned the timing of the announcement "on the eve of a House markup" of the bill. "Just like clockwork, the Kids Online Safety Act moves forward and industry comes out with a new set of selfenforcing guidelines," she said. Parents privy to kids' accounts In the past, Meta's efforts at addressing teen safety and mental health on its platforms have also been met with criticism that the changes don't go far enough. For instance, while kids will get a notification when they've spent 60 minutes on the app, they will be able to bypass it and continue scrolling. That's unless the child's parents turn on "parental supervision" mode, where parents can limit teens' time on Instagram to a specific amount of time, such as 15 minutes. With the latest changes, Meta is giving parents more options to oversee their kids' accounts. Those under 16 will need a parent or guardian's permission to change their settings to less restrictive ones. They can do this by setting up "parental supervision" on their accounts and connecting them to a parent or guardian. Nick Clegg, Meta's president of global affairs, said last week that parents don't use the parental controls the company has introduced in recent years. Meta's Gleit said she thinks the teen accounts will incentivize parents to start using them. "Parents will be able to see, via the family center, who is messaging their teen and hopefully have a conversation with their teen," she said. 
"If there is bullying or harassment happening, parents will have visibility into who their teen's following, who's following their teen, who their teen has messaged in the past seven days and hopefully have some of these conversations and help them navigate these really difficult situations online." U.S. Surgeon General Vivek Murthy said last year that tech companies put too much responsibility on parents when it comes to keeping children safe on social media. "We're asking parents to manage a technology that's rapidly evolving that fundamentally changes how their kids think about themselves, how they build friendships, how they experience the world — and technology, by the way, that prior generations never had to manage," Murthy said in May 2023. How Australia Will (or Won’t) Keep Children Off Social Media Critics say big questions remain not only about how the new law will be enforced, but also about whether the ban will really protect young people. The New York Times Nov. 28, 2024 By Yan Zhuang Australia has passed a law to prevent children under 16 from creating accounts on social media platforms. The bill, which the government calls a “world leading” move to protect young people online, was approved in the Senate on Thursday with support from both of the country’s major parties. The lower house of Parliament had passed it earlier in the week. “This is about protecting young people — not punishing or isolating them,” said Michelle Rowland, Australia’s communications minister. She cited exposure to content about drug abuse, eating disorders and violence as some of the harms children can encounter online. The legislation has broad support among Australians, and some parental groups have been vocal advocates. But it has faced backlash from an unlikely alliance of tech giants, human rights groups and social media experts. 
Critics say there are major unanswered questions about how the law will be enforced, how users’ privacy will be protected and, fundamentally, whether the ban will actually protect children.

What’s in the law?
The law requires social media platforms to take “reasonable steps” to verify the age of users and prohibit those under 16 from opening accounts.

Instagram’s New ‘Teen Accounts’
New privacy settings: Instagram plans to default all new and existing accounts set up by people who have indicated they are under 18 years old to “private mode.” Here’s what to know about the change.
What is ‘Private Mode’? With this change, an account holder must approve new followers before they can see, like or comment on their posts. It would also turn off notifications between 10 p.m. and 7 a.m.
Are there age-specific changes? Account holders who are 16 or 17 will be able to make their accounts public and change other default settings by themselves. But children under 16 will need a parent’s permission to alter the privacy default, sleep mode and other restrictions.
Further protections: The app will also limit sensitive content for minors, such as nudity or discussions about self-harm, and prevent direct messages from people they don’t follow — existing restrictions that the company had previously announced.
Can teens lie? Some teens may try to circumvent the privacy changes by setting up new Instagram accounts with older birth dates. The app said it would require these users to verify their age in various ways, such as by sending in a selfie video that will be analyzed by facial age estimation technology.
What is the parental supervision tool? Parents can use a supervision tool to set daily time limits on their teen’s app use. New features will include a list of people their child has recently messaged as well as content topics their child has elected to see more of.
It does not specify which platforms the ban will cover — that will be decided later — but the government has named TikTok, Facebook, Snapchat, Reddit, Instagram and X as sites it is likely to include. Three broad categories of platforms will be exempt: messaging apps (like WhatsApp and Facebook’s Messenger Kids); gaming platforms; and services that provide educational content, including YouTube. Those 15 and under will also still be able to access platforms that let users see some content without registering for an account, like TikTok, Facebook and Reddit. Ms. Rowland, the communications minister, said the restriction on creating accounts, rather than on content more broadly, would mitigate harms associated with online life — like “persistent notifications and alerts” that could affect young people’s sleep and ability to focus — while limiting the law’s effect on the broader population. And supporters of the ban say that delaying children’s exposure to the many pressures of social media would allow them the time to develop a more “secure identity,” while taking pressure off parents to police their children’s online activity. But digital media experts and some parental groups have said that the patchwork nature of which platforms will and won’t be included in the ban makes it unclear what exactly it is meant to protect children from. A more effective approach would be to address the problem at its root by requiring social media companies to do a better job of moderating and removing harmful content, said Lisa Given, a professor of information sciences at RMIT University in Melbourne. The new law “does not protect children against potential harms on social media,” Professor Given said. “In fact, it could create other problems by excluding young people from helpful and useful information, as well as opening up a number of privacy concerns for all Australians.” How will it be enforced? That’s not yet entirely clear. 
The bill states that social media companies must take reasonable steps to assess users’ ages, but the platforms are left to decide how to do that. Those that don’t comply could be fined up to 49.5 million Australian dollars (about $32 million). In a measure that was added in response to privacy concerns, the law states that providing a government-issued identity document cannot be the only option social media platforms give users for verifying their age. Other methods the government has suggested include so-called age assurance technologies, like using a facial scan to determine a user’s approximate age, or estimating it based on online behavior. Some of those technologies are already being tried. Facebook, for example, is teaching A.I. to estimate users’ ages by looking at things like the birthday messages they receive. The Australian government is conducting its own trial of such tools, and the results will inform how it defines the “reasonable steps” that social media platforms must take. But Daniel Angus, the director of the Digital Media Research Centre at the Queensland University of Technology, said it was unrealistic for the government to base its law, even in part, on that kind of technology, which is often driven by A.I., largely still in development and in no way foolproof. He added that “there are huge, huge privacy concerns around these, huge tracking concerns. All of this allows, in some way, the ability to track users online.” What has the response been? Polls show that the majority of Australians favor the ban. Parental groups have been broadly supportive — although some say the law does not go far enough and should cover more platforms. Some parents who blame social media for their children’s deaths have been particularly vocal campaigners for a ban, such as Kelly O’Brien, who said that her 12-year-old daughter, Charlotte, died by suicide after experiencing bullying on and off social media. 
“Giving our kids these phones, we’re giving them weapons, we’re giving them the world at their fingertips,” Ms. O’Brien told an Australian news outlet. Social media companies have criticized the law. Elon Musk, the owner of X, said on the platform that it “seems like a backdoor way to control access to the internet by all Australians.” Meta, the parent company of Facebook and Instagram, said the proposal “overlooks the practical reality of age assurance technology, as well as the views of a majority of mental health and youth safety organizations in the country.” (LinkedIn argued that it should not fall within the scope of the ban because, in part, it “simply does not have content interesting and appealing to minors.”) Some commentators have described the ban as performative. “The primary use of this legislation — let’s not pretend otherwise — is to make it look like our Parliament is taking a stand,” Annabel Crabb, a top journalist at Australia’s national broadcaster, wrote. Human rights groups have also raised concerns.

Exhibit 1: Meta Platforms Stock Price, 2013–2025 (Source: Yahoo Finance, 2026)
Exhibit 2: CAGR 40%
Exhibit 3: Meta Platforms Net Income, 2008–2024 (CAGR 42%)
Exhibit 4
Exhibit 5
Exhibit 6: Most Popular Social Networks Worldwide in Monthly Active Usage (Source: Statista, February 2025)