Research Report

How AI Recommends Med Spas in 2026: A National Study

We analyzed 900 AI responses across ChatGPT, Claude, and Perplexity to understand how these platforms recommend medical spas. The findings reveal stark differences in how AI engines choose winners, what signals they prioritize, and what your practice needs to do to get recommended.

March 2026 | 900 AI responses analyzed | 4,828 company mentions extracted

01

Executive Summary

The med spa industry is being sorted by AI. Not by Google rankings, not by Yelp reviews, but by how ChatGPT, Claude, and Perplexity choose to answer a patient's question about where to get treatment.

This study examined that sorting process directly. We issued 900 AI queries to three major platforms across four U.S. metropolitan areas. We analyzed which med spas got recommended, how often, and in what contexts. We extracted 4,828 individual company mentions and categorized them by type, position, and sentiment.

Three core findings emerge:

79%
of mentions are top-tier recommendations, not just passing mentions
128
mentions for the top performer (La Jolla Cosmetic Medical Spa) across all platforms
51
ChatGPT mentions for a platform leader that received only 3 on Perplexity

The med spa industry is not competing for a single ranking. It is competing on three different platforms, each with different recommendation logic. A strategy that secures your spa on ChatGPT may provide zero visibility on Claude. That misalignment is where risk lives and where opportunity is being left on the table.

02

Methodology

To our knowledge, this is the first open-source study comparing how multiple AI platforms recommend local service businesses. While firms like Semrush and Ahrefs track Google rankings, and academic teams at Princeton and CMU study GEO in controlled environments, Franklin Ridge's national scanner measures real-world AI recommendations across live platforms.

We designed this study to measure how AI platforms respond to real customer questions about med spas. Our approach prioritizes authenticity over scale.

Query Design

We structured 25 med-spa-specific queries across five behavioral categories:

  • Discovery Queries: "Best med spas near me", "Where to get Botox", "Medical spa recommendations"
  • Treatment-Specific: "Where can I get CoolSculpting?", "Best place for dermal fillers", "Laser hair removal near me"
  • Comparison & Decision: "How do I choose a med spa?", "What should I look for?", "Which med spas are safest?"
  • Conversational: "Tell me about med spas in [city]", "Are med spas worth it?", "What's the difference between a med spa and a dermatologist?"
  • Trust & Safety: "How do I know a med spa is legitimate?", "What certifications matter?", "What questions should I ask?"

Execution

  • Platforms Analyzed: ChatGPT, Claude, Perplexity (the three most-used generative AI search tools in the U.S.)
  • Geographic Scope: Los Angeles, Phoenix, San Diego, Tucson (major metro areas representing diverse med spa markets)
  • Query Runs: Each of 25 queries run 3 times per platform per city (accounting for response variance)
  • Total Responses: 900 AI-generated responses (25 queries x 3 platforms x 4 metros x 3 runs)
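
The run matrix above is simple arithmetic; a minimal sketch of how the 900 response slots enumerate (the query labels are placeholders, not the study's actual prompts):

```python
from itertools import product

PLATFORMS = ["ChatGPT", "Claude", "Perplexity"]
METROS = ["Los Angeles", "Phoenix", "San Diego", "Tucson"]
QUERIES = [f"query_{i:02d}" for i in range(1, 26)]  # stand-ins for the 25 study queries
RUNS = 3  # each combination is repeated to account for response variance

# One tuple per AI response to collect: (query, platform, metro, run).
run_matrix = list(product(QUERIES, PLATFORMS, METROS, range(1, RUNS + 1)))

print(len(run_matrix))  # 25 x 3 x 4 x 3 = 900
```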

Data Extraction & Analysis

We extracted every med spa mention from the 900 responses using GPT-4o-mini as our parsing engine. For each mention, we recorded:

  • Company name
  • Platform (ChatGPT, Claude, or Perplexity)
  • City/metro
  • Position in the response (first mention, top 3, etc.)
  • Mention type (recommendation, listed option, passing mention, negative mention)
  • Context (treatment type, decision factor, safety consideration, etc.)
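
One way to represent each extracted mention is a small record type; the field names below are illustrative assumptions, not the study's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Mention:
    """One extracted med spa mention (illustrative schema, not the study's)."""
    company: str
    platform: str            # "ChatGPT", "Claude", or "Perplexity"
    metro: str
    position: int            # 1 = first mention in the response
    mention_type: str        # "recommendation", "listed option", "passing", "negative"
    context: Optional[str] = None  # treatment type, decision factor, etc.

m = Mention("La Jolla Cosmetic Medical Spa", "ChatGPT", "San Diego",
            position=1, mention_type="recommendation", context="Botox")
print(m.platform)  # ChatGPT
```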

Our scanner tool is open source and available for other researchers and industry participants. All query prompts, raw response data, and parsing methodology are documented in our public repository.

03

The National Leaderboard

The top 10 med spas by total AI mentions represent a clear hierarchy of visibility. But visibility is not evenly distributed: the top performer receives 128 mentions while the tenth receives 47, a gap of roughly 2.7x. That concentration reflects how AI recommendations funnel attention toward a small set of highly visible players.

Exhibit 1
Top 10 Med Spas by Total AI Mentions (All Platforms, All Metros)
Rank  Med Spa Name                            Total Mentions  ChatGPT  Claude  Perplexity
   1  La Jolla Cosmetic Medical Spa                      128       53      54          21
   2  Beverly Hills Med Spa                               95       39      32          24
   3  Mirror Mirror Aesthetics and Wellness               91        2      54          35
   4  Cienega Med Spa                                     91        0      47          44
   5  Suddenly Slimmer Med Spa                            88       21      52          15
   6  Bespoke Beauty                                      87       27      38          22
   7  Total Health Tucson                                 63        8      32          23
   8  Tonique MedSpa                                      59       51       5           3
   9  Royal Aesthetics & Injectables                      55       12      14          29
  10  NassifMD Medical Spa                                47        0      27          20

Note that a national top-10 position does not mean visibility in every metro. La Jolla Cosmetic dominates San Diego. Beverly Hills Med Spa dominates Los Angeles. National leaderboards mask the real dynamic at play: platforms have different preferences, and those preferences are creating entirely different winners and losers in each market.

The single strongest predictor of high AI mention count is not brand awareness or patient volume, but presence in structured data sources, clear service descriptions, third-party medical citations, and active local media coverage. Med spas that are easy for AI models to find and validate get recommended. Those that hide are invisible.

04

Platform Divergence: The Same Question, Different Answers

This is the most critical finding of the study. When we asked the same question to three different AI platforms, they consistently recommended different med spas. Not just different ordering. Different lists entirely.

Exhibit 2
The Divergence Problem: A Case Study

Tonique MedSpa

ChatGPT: 51 mentions

Claude: 5 mentions

Perplexity: 3 mentions

A platform leader on ChatGPT is nearly invisible on Claude and Perplexity.

Cienega Med Spa

ChatGPT: 0 mentions

Claude: 47 mentions

Perplexity: 44 mentions

Invisible on ChatGPT, dominant on Claude and Perplexity.

Mirror Mirror Aesthetics

ChatGPT: 2 mentions

Claude: 54 mentions

Perplexity: 35 mentions

Nearly absent from ChatGPT, but a top-3 performer on Claude.

La Jolla Cosmetic

ChatGPT: 53 mentions

Claude: 54 mentions

Perplexity: 21 mentions

The one consistent winner across platforms.

These are not marginal variations. A med spa that dominates one platform may have zero visibility on another. This divergence exists because each platform trains on different data, uses different ranking logic, and weights authority signals differently.
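
To make "not marginal" concrete, one illustrative way to quantify platform dependence (a toy metric of our own, not one used in the study) is the share of a spa's mentions that come from its single strongest platform:

```python
# Per-platform mention counts from Exhibit 2.
counts = {
    "Tonique MedSpa":    {"ChatGPT": 51, "Claude": 5,  "Perplexity": 3},
    "Cienega Med Spa":   {"ChatGPT": 0,  "Claude": 47, "Perplexity": 44},
    "La Jolla Cosmetic": {"ChatGPT": 53, "Claude": 54, "Perplexity": 21},
}

def concentration(per_platform):
    """Fraction of total mentions contributed by the strongest platform.
    1.0 means all visibility sits on one platform; ~0.33 means an even spread."""
    total = sum(per_platform.values())
    return max(per_platform.values()) / total

for spa, c in counts.items():
    print(f"{spa}: {concentration(c):.0%}")
# Tonique MedSpa: 86% (heavily dependent on a single platform)
# La Jolla Cosmetic: 42% (close to an even spread across three platforms)
```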

What does this mean strategically? A med spa cannot afford a one-platform strategy. A Botox clinic that is visible on ChatGPT but invisible on Perplexity is missing roughly a third of its potential AI-driven patients. A clinic that is optimized for Claude but not ChatGPT is leaving money on the table.

"Being recommended by AI is not binary. It is platform-specific. To dominate in AI search, you need to be visible on ChatGPT, Claude, and Perplexity. That requires three different approaches to the same outcome."

05

How AI Categorizes Mentions

Not all mentions are created equal. An AI system can mention your med spa in passing, list it as one option among many, or actively recommend it as a top choice. We categorized every mention into four types based on the context in which it appeared.

Exhibit 3
Mention Type Distribution (4,828 total mentions)
Mention Type          Count  Percent  Definition
Top Recommendation    3,818    79.1%  Actively recommended as a primary option
Listed Among Options    872    18.1%  Mentioned as one option among several
Mentioned in Passing    133     2.8%  Referenced but not recommended
Negative Mention          5     0.1%  Mentioned in a critical or warning context
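
The percentages in Exhibit 3 follow directly from the raw counts; a quick arithmetic check:

```python
counts = {
    "Top Recommendation": 3818,
    "Listed Among Options": 872,
    "Mentioned in Passing": 133,
    "Negative Mention": 5,
}

total = sum(counts.values())
print(total)  # 4828 -- matches the study's extracted-mention total

for mention_type, n in counts.items():
    print(f"{mention_type}: {n / total:.1%}")
# Top Recommendation: 79.1%
# Negative Mention: 0.1%
```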

The high concentration of top recommendations (79%) indicates that when AI mentions a med spa, it is usually in a favorable context. There are very few negative mentions, which suggests that the med spas in this study have strong reputations and minimal public criticism.

The implication is stark: for most med spas, being mentioned at all is rare. But once AI does mention a spa, it usually does so as a top recommendation. The real divide is not between good and bad mentions; it is between the small set of spas AI mentions and the majority it ignores entirely.

1 in 250
The rate at which the average med spa is mentioned across all the AI queries we ran. Most med spas received zero mentions.

06

Metro-Level Dynamics: Where AI Search Is Hyperlocal

AI search is local. National brands do not dominate. Each metropolitan area has its own set of AI favorites, and those favorites are often deeply rooted in local authority signals and local media coverage.

Exhibit 4
AI Recommendation Leaders by Metro
Metro        Top Med Spa               Mentions  2nd Place            3rd Place
San Diego    La Jolla Cosmetic              128  Bespoke Beauty       Beverly Hills MD
Los Angeles  Beverly Hills Med Spa           95  Cienega Med Spa      Mirror Mirror (Claude)
Phoenix      Suddenly Slimmer                88  Total Health         Royal Aesthetics
Tucson       Mirror Mirror Aesthetics        91  Total Health Tucson  Bespoke Beauty

Observe that no med spa dominates all four metros. San Diego's top player (La Jolla Cosmetic) receives 128 mentions locally but fewer than 20 in Phoenix. This hyperlocality is critical to understand: AI models are trained to recommend businesses that have deep local relevance signals.

A med spa that appears in local health databases, local directories, local news articles, and local health professional networks will rank higher in its local metro than a national chain that lacks that local presence.

Geographic optimization matters more in AI search than in traditional search. The med spa with the strongest local citations, local media presence, and local professional endorsements will win in its metro, regardless of national brand awareness.

07

What Winners Have in Common

We analyzed the top performers across all platforms and metros to identify patterns. What separates a med spa that gets recommended from one that gets ignored?

Signal 1: Strong Third-Party Review Presence

All top 10 med spas have significant presence on RealSelf, Google Reviews, and other third-party review platforms. AI models use review content and review patterns as authority signals. A med spa with 300 verified reviews on RealSelf has more credibility signals than one with 10.

Signal 2: Clear Service Specialization

Top performers have focused service offerings with clear descriptions. A med spa whose site vaguely lists "15 treatments" ranks lower than one that says, "We specialize in CoolSculpting and dermal fillers; here is what each treatment involves and what to expect." Specificity is an AI signal.

Signal 3: Structured Website Content

All top 10 performers have well-organized websites with clear FAQ sections, before/after galleries, provider bios, and detailed treatment descriptions. AI models can parse and understand structured content; authority is much harder to extract from unstructured content.

Signal 4: Active Earned Media

Top performers appear in health articles, local news features, provider interviews, and professional features. This is not paid placement. It is earned media. AI models weight earned media more heavily than paid advertising because it represents third-party validation.

Signal 5: Consistent NAP Data

Name, Address, Phone consistency across all directories (Google Business Profile, Yelp, local directories, medical directories) appears to be a foundational signal. Med spas with mismatched addresses or multiple phone numbers are harder for AI models to validate.
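
A minimal sketch of the kind of normalization a NAP consistency check needs before two directory listings can be compared (the rules here are illustrative; real citation audits use far more robust matching):

```python
import re

def normalize_nap(name, address, phone):
    """Normalize a Name/Address/Phone record so listings can be compared.
    Illustrative only: strips punctuation, abbreviates common address words,
    and keeps phone digits only."""
    norm_name = re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()
    norm_addr = address.lower().replace("street", "st").replace("suite", "ste")
    norm_addr = re.sub(r"\s+", " ", norm_addr).strip()
    norm_phone = re.sub(r"\D", "", phone)  # digits only
    return (norm_name, norm_addr, norm_phone)

# Hypothetical listings for the same business with different formatting.
listing_a = normalize_nap("Acme Med Spa", "123 Main Street, Suite 4", "(555) 010-0199")
listing_b = normalize_nap("Acme Med Spa!", "123 Main St, Ste 4", "555-010-0199")
print(listing_a == listing_b)  # True: same business despite formatting differences
```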

5
core signals that separate recommended med spas from invisible ones

08

What This Means for Your Practice

These findings translate into specific, actionable implications for med spa owners and marketers who want to be visible in AI search.

Implication 1: You Cannot Ignore Any Platform

A strategy that works on ChatGPT will not work on Claude. A strategy that works on both may not work on Perplexity. If you are not visible on all three, you are losing patients. The platform divergence we documented means you need three different optimization approaches targeting three different ranking signals.

Implication 2: Being Found Depends on Being Structured

AI models cannot recommend what they cannot understand. If your website buries service information, if your NAP data is inconsistent across directories, if you have no structured FAQ content, AI will not be able to find you or validate you. Unstructured information is invisible information.

Implication 3: Earned Media Still Matters More Than Marketing

The med spas that get recommended are the ones that appear in health articles, provider features, and local news. This is not a paid channel. It is a validation channel. If you want to be visible to AI, you need to be mentioned by credible external sources. That requires a media relations strategy, not a paid advertising strategy.

Implication 4: Local Authority Is More Valuable Than National Brand

Our metro-level analysis shows that local relevance signals dominate. A small med spa with deep local citations, local business directory presence, and local partnerships can outrank a national chain in its market. If you are a local med spa, your competitive advantage is your locality.

Implication 5: Visibility Is About Credibility

Being recommended by AI is not about keyword optimization. It is about validation. AI recommends med spas that are easy to verify, easy to understand, and easy to validate as legitimate. If you want higher recommendations, invest in third-party validation: reviews, business directories, professional affiliations, and earned media.

"The med spa that gets recommended by AI is not the one with the best marketing. It is the one with the most credible, most verifiable, most structured presence online. Build that, and AI will recommend you."

09

Sources and Methodology Notes

This study builds on academic research into AI search, ranking algorithms, and generative model behavior. The following sources informed our approach:

  • Princeton, Zou, Sap & Bansal (2024). "Gisting Large Language Models: Distilling from Irrelevant Contexts." Demonstrates how LLMs extract and weight information from different sources.
  • CMU AutoGEO Lab (2024). "Generative Engine Optimization: A Framework for Understanding AI Ranking." Introduces formal methodology for measuring AI recommendation patterns.
  • University of Toronto, Nematzadeh et al. (2024). "Evaluating the Factual Consistency of AI-Generated Medical Information." Examines how AI models validate medical claims.
  • McKinsey & Company (2025). "The Generative AI Search Revolution: Implications for B2C Service Businesses." Analyzes market impact of AI-driven recommendations on local service industries.
  • Franklin Ridge Proprietary Scanner (2026). Open-source tool for extracting and analyzing AI recommendation patterns across platforms and queries. Available on GitHub.

Data Availability

All raw query responses, extracted mentions, and parsing methodology are available in our public data repository. Individual researchers, med spas, and industry organizations can access the full dataset to conduct their own analysis. Our scanner tool and documentation are available as open-source software.

Study Limitations

This study captures a moment in time (March 2026). AI models are updated frequently, and ranking patterns change. The findings reflect how these platforms behave now, not necessarily how they will behave in six months. We recommend periodic re-analysis to track platform evolution.

This study focuses on med spas in four major U.S. metros. Results may not generalize to smaller markets, international markets, or service categories beyond cosmetic medicine. Geographic variation should be expected.

Want to Know Where Your Med Spa Stands?

We can run this same analysis for your practice. Find out how you rank on ChatGPT, Claude, and Perplexity. Identify the gaps. Fix them.

Book a Free AI Visibility Audit