Meta's Andromeda Update: Everyone Is Saying the Same Thing. Here's What I'm Actually Doing.

The internet is flooded with guides on Meta's new algorithm. Forget the noise. This is about the unanswered questions and the real work of finding an edge.

My feeds are a mess. So is my inbox. In the last few days, it seems every marketing ‘guru’ and agency has published the exact same post about Meta’s new Andromeda algorithm. They’re all quoting the same numbers and giving the same advice: simplify campaigns, use Advantage+, let the algorithm do the work.

If you’ve read one, you’ve read them all. But that’s not where the money is made.

The real advantage isn’t found in knowing that the algorithm is ‘100x faster’ or has ‘87% conversion prediction accuracy.’ Those are just statistics from Meta’s press kit. The advantage is found in the questions you ask after the initial hype dies down. This post isn’t about what the Andromeda update is. It’s about what I’m doing right now to actually find an edge for my clients while everyone else is busy rewriting the same playbook.

Where Are the Real-World Benchmarks?

Right now, the only performance numbers being thrown around are the ones that came directly from Meta. That’s not data; it’s marketing.

I’m seeing people claim they’ve ‘cracked’ the new algorithm just days after its full rollout. It’s nonsense. Nobody has statistically significant, month-over-month data yet. Nobody knows what the true ‘new normal’ for CPL or ROAS is post-Andromeda.

Here’s what I’m actually tracking across client accounts:

  • Baseline Drift: How have our core KPIs (CPL, CPA, ROAS) shifted in the first 7-10 days compared to the February average? For a lead gen client in the home services space, we’ve seen CPLs initially jump 15% before settling back down to about 5% above the baseline. That’s one data point, not a universal truth.
  • Frequency vs. Cost: The research notes that a frequency above 3.5 can increase cost-per-result by 19%. This is one of the few concrete, testable metrics we have. I’m setting up dashboard alerts to monitor this specifically on our simplified remarketing campaigns. Is 3.5 the magic number for all audiences? Doubtful. We’re testing it now.
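Both checks above are simple enough to automate against a daily export from Ads Manager. Here's a minimal sketch; the field names (`cpl`, `frequency`) and the example numbers are my own illustration, not anything Meta-specific:

```python
# Sketch: flag baseline drift and over-frequency campaigns from daily metrics.
# Field names and thresholds are illustrative assumptions, not Meta's.

def baseline_drift(current_cpl, baseline_cpl):
    """Percent change in CPL versus the pre-rollout baseline."""
    return (current_cpl - baseline_cpl) / baseline_cpl * 100

def frequency_alerts(campaigns, threshold=3.5):
    """Return campaign names whose ad frequency exceeds the threshold."""
    return [c["name"] for c in campaigns if c["frequency"] > threshold]

feb_baseline_cpl = 42.00  # hypothetical February average CPL for this account
rows = [
    {"name": "remarketing-simplified", "cpl": 48.30, "frequency": 3.8},
    {"name": "prospecting-broad",      "cpl": 43.10, "frequency": 2.1},
]

for row in rows:
    drift = baseline_drift(row["cpl"], feb_baseline_cpl)
    print(f'{row["name"]}: CPL drift {drift:+.1f}% vs. February')

print("over frequency cap:", frequency_alerts(rows))
```

Wire that into whatever dashboard you already use and alert on it daily; the point is to catch drift against your own baseline, not someone else's screenshot.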

Don’t anchor your expectations to someone else’s cherry-picked screenshot from a 3-day-old campaign. Track your own damn numbers.

Your Vertical Isn’t My Vertical

Most of the advice I’m seeing is heavily skewed towards D2C e-commerce brands, which makes sense given how much they spend on the platform. The advice to consolidate everything into an Advantage+ Shopping campaign is fine if you’re selling a physical product online.

But what if you’re a SaaS company selling a $500/month subscription? Or a local MedSpa booking high-ticket consultations?

A simplified structure of 1 sales + 1 awareness + 1 remarketing campaign is a good starting point, but the nuance is everything. The Andromeda algorithm is designed to optimize for the data you feed it.

  • For SaaS: Your ‘conversion’ is likely a demo request or a free trial signup. The algorithm needs to learn what a high-quality lead looks like based on your offline conversion data or CRM signals. A generic Advantage+ setup won’t cut it.
  • For Local Services: Your goal is a phone call or a form fill from a specific geographic area. Broadening your audience too much in the name of ‘letting the algorithm work’ can burn through your budget on irrelevant leads. Geo-fencing and clear calls-to-action are still critical.
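For the SaaS case, feeding CRM outcomes back to Meta usually means the Conversions API: you build a server event with hashed user identifiers and POST it to your pixel's events endpoint. A rough sketch of constructing one such event for a CRM-qualified lead; the event name, `action_source` choice, and value fields here are assumptions for illustration, not a full spec:

```python
import hashlib
import time

def hash_email(email):
    """The Conversions API expects identifiers SHA-256 hashed,
    lowercased and trimmed before hashing."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def qualified_lead_event(email, deal_value):
    """Build one server event for a lead your CRM marked as qualified.
    Field choices are illustrative; check Meta's docs for your setup."""
    return {
        "event_name": "Lead",
        "event_time": int(time.time()),
        "action_source": "system_generated",  # synced from CRM, not a web visit
        "user_data": {"em": [hash_email(email)]},
        "custom_data": {"currency": "USD", "value": deal_value},
    }

payload = {"data": [qualified_lead_event(" Jane.Doe@example.com ", 500.0)]}
# POST this payload to https://graph.facebook.com/v21.0/{PIXEL_ID}/events
# with your access token (request omitted here).
```

The payload is the easy part; the real work is deciding which CRM stage counts as "qualified" so the algorithm learns from outcomes you actually want more of.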

The algorithm is a powerful engine, but you still have to steer. Don’t blindly apply e-commerce advice to your lead generation campaigns. It’s a recipe for expensive, low-quality leads.

The Real Test Is in Week 3, Not Day 3

One of the most repeated pieces of advice is to rotate creative every 14-21 days to avoid ad fatigue. It’s a solid principle. But the real question is how the Andromeda algorithm learns and reacts to this creative rotation.

Here’s what I’m more interested in than just a generic ‘refresh your ads’ directive:

  1. Performance Degradation Curve: When does performance actually start to dip? Is it a slow decline or a sudden drop-off? I’m watching daily cost per result (CPR) and CTR to see if a predictable pattern emerges. If I know an ad’s performance reliably tanks on day 18, I can have the next creative ready to swap in on day 17. That’s proactive management.
  2. New Creative Learning Phase: With the algorithm being faster, is the ‘learning phase’ for a new creative shorter? Or does it take just as long to find its footing? The answer impacts how quickly you can iterate and test new angles. I’m not taking Meta’s word for it; I’m measuring it.
  3. Vertical Video Dominance: We’re told 90% of inventory is now vertical. Okay, great. But does a raw, UGC-style vertical video outperform a polished, animated vertical video for a B2B client? I’m running A/B tests on the creative format itself, not just the messaging.
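The degradation curve in point 1 is the easiest of the three to script. A minimal sketch of spotting the fatigue day from a daily cost-per-result series, using a simple rolling-average rule; the 20% trigger is my own arbitrary starting point, not a researched threshold:

```python
def fatigue_day(daily_cpr, window=3, trigger=1.20):
    """Return the first day (1-indexed) where the rolling-average CPR
    exceeds the launch-week average by the trigger factor, else None."""
    baseline = sum(daily_cpr[:7]) / 7  # average CPR over the first week
    for day in range(window, len(daily_cpr) + 1):
        rolling = sum(daily_cpr[day - window:day]) / window
        if rolling > baseline * trigger:
            return day
    return None

# Hypothetical series: CPR stable around $30, then climbing from day 15.
cpr = [30, 29, 31, 30, 32, 30, 29, 31, 30, 32, 31, 30, 33, 34, 38, 41, 45, 48]
print(fatigue_day(cpr))
```

Run this over a few creative cycles and you'll know whether your fatigue point is predictable enough to schedule swaps in advance, which is the whole game.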

The internet is focused on the initial setup. I’m focused on the ongoing optimization system. I covered the basics in my 2026 Meta Ads playbook, but the real work starts now. It’s about building a process to consistently feed the machine what it needs, week after week.

Look, the new algorithm changes are significant, but they aren’t magic. An edge doesn’t come from reading a blog post. It comes from disciplined testing, rigorous tracking, and asking better questions than your competition.

While everyone else is marveling at the shiny new object, I’m taking it apart to see how it really works. My advice: tune out the noise, focus on your own data, and get to work.

Want these insights applied to your campaigns?

Book a free 30-minute strategy call. I'll review your current setup and tell you exactly where to start.
