Introduction
The MCP Registry launched with a "pull" model: browse servers, search by name, manually install what you need. This is how package registries like npm, PyPI, and Docker Hub work.
But MCP agents are evolving toward something different: push recommendations.
When you say "help me manage my calendar," the agent won't show you a list to browse. It will proactively suggest—or even automatically connect to—the server it thinks is best.
This is happening now. And without transparency standards, it's a black box.
Pull vs Push: The Two Discovery Models
Pull Discovery (Current State)
What it is:
- User browses a catalog or registry
- User searches by name or category
- User manually selects and installs
- User decides what to use
Examples:
- npm package search
- Chrome extension store
- MCP Registry browser
- GitHub marketplace
Push Discovery (Emerging Future)
What it is:
- Agent interprets user intent from conversation
- Agent proactively recommends servers
- Agent may auto-connect without explicit approval
- Agent decides what to surface
Examples emerging now:
- ChatGPT suggesting apps mid-conversation
- Claude recommending MCP servers for tasks
- Copilot surfacing relevant extensions
- Gemini auto-connecting to tools
Why Push Is Coming (And Why It's Better)
User experience advantages:
- Frictionless: No browsing, searching, installing
- Contextual: Right tool at right moment
- Intelligent: Agent understands intent
- Efficient: Reduces decision fatigue
This is the promise of AI agents: They handle the complexity for you.
Example conversation:
User: "I need to track the bug we discussed yesterday"
PULL MODEL (current):
- User → Open MCP Registry
- User → Search "issue tracking"
- User → Browse 15 results
- User → Read descriptions
- User → Choose one
- User → Install
- User → Configure
- User → Try again with their request
PUSH MODEL (future):
Agent: "I'll connect to your GitHub to track that.
[Auto-connected to github-mcp]
I found issue #247 from yesterday.
Would you like me to update it?"
Push is obviously better UX. But it concentrates power in the algorithm.
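To make the comparison concrete, here is a minimal sketch of what a push-discovery loop looks like from the platform's side. It is an illustration built on assumptions, not anyone's actual implementation: `classifyIntent`, `registry.search`, and `rank` are hypothetical names standing in for whatever a platform really does.

```typescript
// A minimal sketch of a push-discovery loop. Nothing here is part of the MCP
// spec; every name (classifyIntent, registry.search, rank) is an assumption
// used only to show where the consequential decisions live.

interface ServerCandidate {
  id: string;             // e.g. "github-mcp"
  capabilities: string[]; // e.g. ["issues.read", "issues.write"]
  qualityScore: number;   // 0..1, however the platform computes it
}

interface Recommendation {
  server: ServerCandidate;
  reason: string; // the explanation users and developers rarely get to see
}

async function recommend(
  utterance: string,
  classifyIntent: (utterance: string) => Promise<string>,
  registry: { search(intent: string): Promise<ServerCandidate[]> },
  rank: (candidates: ServerCandidate[]) => ServerCandidate[],
): Promise<Recommendation | null> {
  const intent = await classifyIntent(utterance);   // "issue tracking"
  const candidates = await registry.search(intent); // the 15 browsable results
  const [best] = rank(candidates);                  // the step nobody can inspect
  if (!best) return null;
  return {
    server: best,
    reason: `matched intent "${intent}" with quality score ${best.qualityScore}`,
  };
}
```

Every consequential choice happens inside `classifyIntent` and `rank`. The rest of this piece is about whether those two steps stay a black box.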
The Black Box Problem
When algorithms control discovery, transparency becomes critical.
What we don't know (today):
For any given recommendation, what determined it?
When ChatGPT suggests an app, or Claude recommends an MCP server:
- What quality signals were measured?
- How were they weighted?
- What alternatives were considered?
- Why was this one ranked #1?
- Can developers understand the ranking, or optimize for it?
Right now: Complete black box.
The Google SEO Parallel
Remember how Google Search discovery works:
What's opaque:
- Exact algorithm (proprietary)
- Specific weightings (secret)
- Real-time adjustments (hidden)
What's known:
- Core ranking factors (backlinks, content, speed)
- General principles (quality, relevance, authority)
- Some guidance from Google (but vague)
What happened:
- SEO industry emerged to game the algorithm
- Quality content lost to optimization tricks
- "Google penalty" could destroy businesses overnight
- Developers optimizing blindly
- Black box creates anxiety and manipulation
The result:
- Some transparency, but mostly guesswork
- Gaming is rampant
- Quality ≠ discoverability
- Small players struggle
- Constant cat-and-mouse
Do we want this for MCP discovery?
What Happens Without Standards
Scenario: 2027, No Transparency Standards
For Developers:
- "My server is high quality but no one discovers it"
- "What am I doing wrong?"
- "Should I add more keywords? Change my description?"
- "Why does a worse server rank higher?"
- "Which platform should I optimize for?"
- "The algorithm changed and my server disappeared"
MCP SEO Consultants emerge:
- "We'll optimize your server for ChatGPT's algorithm!" - $5k/month
- "Get featured on Claude!" - $10k setup fee
- Snake oil, guesswork, manipulation
- A race to the bottom
For Platforms:
- Black box = no accountability
- Can favor partners quietly
- Can change rules without notice
- Pay-to-play dynamics emerge
- Quality takes a backseat to business deals
For Users:
- Don't know if they're getting the best tool
- Don't know if recommendation is paid/biased
- Trust erodes
- "Is this the best server or just the best optimized?"
What Push Discovery Needs
To avoid the Google nightmare, push recommendations need transparency in:
Quality Measurement (What is "good"?)
Transparent signals:
- Uptime, performance, security, adoption
- Measured the same way everywhere
- Published for developers to see
- Optimizable through actual quality improvement (see the schema sketch below)
Not:
- Secret sauce metrics
- Proprietary scoring
- Unmeasurable factors
- Optimization through guesswork
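As a sketch only (the specification work described later is the real artifact), the transparent signals listed above could be published as a flat, uniformly measured record. The field names, units, and thresholds below are illustrative assumptions, not part of any existing MCP Registry schema.

```typescript
// Illustrative only: one way to publish standardized quality signals.
// Field names, units, and thresholds are assumptions, not an existing schema.

interface QualitySignals {
  serverId: string;             // e.g. "github-mcp"
  measuredAt: string;           // ISO 8601 end of the measurement window
  uptimePct30d: number;         // 0-100, measured the same way for every server
  p95LatencyMs: number;         // performance under a published test workload
  securityAuditPassed: boolean; // outcome of a documented audit checklist
  weeklyActiveInstalls: number; // adoption, counted by one shared definition
}

// Because every field is measurable and published, a developer improves a low
// score by improving the server itself, not by guessing at keywords.
function meetsBaseline(s: QualitySignals): boolean {
  return s.uptimePct30d >= 99.0 && s.p95LatencyMs <= 500 && s.securityAuditPassed;
}
```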
Intent Matching (What is "relevant"?)
Transparent signals:
- Standardized categories and capabilities
- Documented matching logic
- Predictable relevance scoring
- Developers know how to describe their servers (see the matching sketch below)
Not:
- Opaque categorization
- Mysterious relevance algorithm
- Unpredictable matching
- Trial and error labeling
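Here is a sketch of what documented matching logic could look like: servers declare categories and capabilities from a shared taxonomy, and relevance is a simple overlap any developer can recompute. The taxonomy terms and the scoring rule are assumptions for illustration only.

```typescript
// Illustrative matching logic: transparent because both the inputs (declared
// categories and capabilities) and the rule (coverage ratio) are published.

interface ServerDescriptor {
  id: string;
  categories: string[];   // standardized taxonomy terms, e.g. "issue-tracking"
  capabilities: string[]; // e.g. "issues.search", "issues.update"
}

interface Intent {
  category: string;               // e.g. "issue-tracking"
  requiredCapabilities: string[]; // e.g. ["issues.search", "issues.update"]
}

// Relevance: 0 if the category doesn't match, otherwise the fraction of
// required capabilities the server covers. No embeddings, no mystery.
function relevance(server: ServerDescriptor, intent: Intent): number {
  if (!server.categories.includes(intent.category)) return 0;
  if (intent.requiredCapabilities.length === 0) return 1;
  const covered = intent.requiredCapabilities.filter((cap) =>
    server.capabilities.includes(cap),
  ).length;
  return covered / intent.requiredCapabilities.length;
}
```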
Ranking Logic (How are recommendations prioritized?)
Transparent signals:
- Platforms publish their philosophy
- "We prioritize security 40%, performance 30%, adoption 30%"
- Explain tradeoffs and priorities (worked example below)
- Communicate changes in advance
Not:
- "Trust us, it's good"
- Secret weighting
- Unexplained changes
- No developer feedback
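Taking the published philosophy above at face value ("security 40%, performance 30%, adoption 30%"), ranking becomes a weighted sum any developer can recompute. How each signal is normalized to the 0..1 range is an assumed detail; the point is that the weights are public.

```typescript
// Worked example of a published ranking policy: security 40%, performance 30%,
// adoption 30%. Normalizing each signal to 0..1 is an assumed detail.

interface NormalizedSignals {
  security: number;    // 0..1
  performance: number; // 0..1
  adoption: number;    // 0..1
}

const WEIGHTS = { security: 0.4, performance: 0.3, adoption: 0.3 };

function rankScore(s: NormalizedSignals): number {
  return (
    WEIGHTS.security * s.security +
    WEIGHTS.performance * s.performance +
    WEIGHTS.adoption * s.adoption
  );
}

// A developer can see exactly why server B outranks server A:
// A: 0.4 * 0.9 + 0.3 * 0.8 + 0.3 * 0.5 = 0.75
// B: 0.4 * 0.6 + 0.3 * 0.9 + 0.3 * 0.9 = 0.78
```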
Inclusion Process (Who gets recommended at all?)
Transparent signals:
- Documented minimums (example checklist below)
- Appeal process for exclusions
- Explanation of decisions
- Fair process for all developers
Not:
- Arbitrary exclusions
- No appeal path
- Unexplained disappearances
- Favoritism and bias
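Documented minimums can be as mechanical as a published checklist evaluated the same way for every server, with a recorded, appealable reason whenever one is excluded. The thresholds below are illustrative assumptions, not proposed requirements.

```typescript
// Illustrative inclusion check: published minimums plus recorded reasons,
// so an excluded developer knows exactly what to fix or appeal.

const PUBLISHED_MINIMUMS = {
  minUptimePct30d: 95,
  requireSecurityAudit: true,
  requirePrivacyPolicy: true,
};

interface ServerFacts {
  uptimePct30d: number;
  securityAuditPassed: boolean;
  hasPrivacyPolicy: boolean;
}

interface InclusionDecision {
  included: boolean;
  reasons: string[]; // empty when included; actionable and appealable when not
}

function checkInclusion(server: ServerFacts): InclusionDecision {
  const reasons: string[] = [];
  if (server.uptimePct30d < PUBLISHED_MINIMUMS.minUptimePct30d) {
    reasons.push(`30-day uptime below ${PUBLISHED_MINIMUMS.minUptimePct30d}%`);
  }
  if (PUBLISHED_MINIMUMS.requireSecurityAudit && !server.securityAuditPassed) {
    reasons.push("security audit not passed");
  }
  if (PUBLISHED_MINIMUMS.requirePrivacyPolicy && !server.hasPrivacyPolicy) {
    reasons.push("no published privacy policy");
  }
  return { included: reasons.length === 0, reasons };
}
```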
The Choice We Face
We're at the inflection point.
Push recommendations are coming—they're already here in early forms. The question isn't whether we'll have algorithmic discovery, but whether it will be transparent or opaque.
Two paths:
Path A: The Google Model (Opaque)
- Proprietary algorithms
- Black box rankings
- SEO gaming
- Developer anxiety
- Pay-to-play dynamics
- Quality ≠ discoverability
Path B: The Transparent Model (What We're Building)
- Standardized quality signals
- Published weighting philosophy
- Clear optimization path
- Developer confidence
- Merit-based discovery
- Quality = discoverability
What We're Doing About It
We're not waiting to see which path we go down.
Part 1: Quality Signals Specification
Consolidating industry best practices for measuring server quality—giving platforms a standard to use for push recommendations.
Part 2: Transparency Tracker
Independently measuring and reporting which platforms are being transparent about their push recommendation algorithms.
The goal:
Make it easy to be transparent (clear specification) and create pressure to actually be transparent (public accountability).
The Stakes
Push recommendations concentrate power.
The algorithm that decides what gets recommended controls:
- Which developers succeed
- Which servers users discover
- What the ecosystem values
- How innovation happens
Without transparency:
- Power is unaccountable
- Gaming replaces quality
- Trust erodes
- The ecosystem suffers
With transparency:
- Developers optimize for real quality
- Users trust recommendations
- Platforms compete on fairness
- The best tools rise naturally
Call to Action
Push discovery is coming. Let's make it transparent.
For Platforms:
You're building push recommendation systems. Build them transparently from the start. Use standard quality signals. Publish your weighting. Be accountable.
For Developers:
Demand transparency. Ask platforms: "How does your recommendation algorithm work?" Choose transparent platforms. Build quality, not optimization tricks.
For the Ecosystem:
Support the initiative. Review our specifications. Help refine the standards. Hold platforms accountable.
Further Reading
- The MCP Registry: What It Solves (and Doesn't)
  Understanding the current pull model
- Intent-Based Discovery: How Agents Match Tasks to Tools
  Deep dive on intent classification
- Case Study: How Google SEO Created a Gaming Industry
  Lessons from search
- Platform Comparison: Who's Being Transparent?
  Preview of the Transparency Tracker