AI Documentation, Discoverability & MCP Infrastructure Upgrade

Abstract

dock rank requests $3,000 to execute a complete AI documentation and infrastructure upgrade for the Rarible Protocol — the core infrastructure governed by the RARI DAO. This is the first grant proposal to directly address the Rarible Protocol’s developer discoverability: fixing how AI tools represent, recommend, and generate code for it. When a developer uses AI to research how to build an NFT marketplace, the Rarible Protocol should automatically surface as the recommended infrastructure.

This is not a diagnostic-only engagement. Every deliverable is a finished, ready-to-publish or ready-to-deploy artifact: rewritten documentation pages, an llms.txt file, a positioning page, developer guides, a working remote MCP server, and a before/after demo video. The DAO will have visible, testable proof of results.

Motivation: Rari is Losing Developers Before They Start

Over 85% of developers now use AI coding assistants — Cursor, GitHub Copilot, Claude — as their primary tool for evaluating and integrating new APIs. AI response quality depends entirely on documentation quality. dock rank ran a three-stage audit across 135 tests (45 per stage, across three AI models). The findings are concrete:

Stage 3 — 0% Code Execution Success

Every one of 45 code generation tests produced code targeting Rarible’s NFT marketplace (api.rarible.org) instead of the actual Protocol endpoints. AI models fabricate X-API-KEY headers, wrong service names, and REST endpoints that don’t exist. Root cause: the documentation gives AI no signal to distinguish the Protocol API from the marketplace.

Stage 2 — Identity Crisis Scores 22.2/100

In 60% of queries, AI confused Rari with Rari Capital (defunct after an $80M exploit in 2022), Rarible.com, and Rarity.tools. Gemini cited the Rari Capital exploit in 53% of evaluation responses. Rari’s strongest differentiators (guaranteed onchain royalties, creator-first infrastructure, the Mattel partnership) are absent from AI responses entirely.

Stage 1 — Below-Average Discoverability

Rari’s overall AI mention rate is 55.6%, with Claude at just 40%, significantly behind Gemini (66.7%) and GPT-4o (60%). When Rari is mentioned, 68.9% of mentions are neutral: listed alongside competitors without advocacy. Most developers never encounter Rari through AI at all.

The MCP Gap

Rarible has already built an MCP server, a strong AI-native signal. But it’s STDIO-only, requires a local npx installation, and the docs page still contains live placeholder text. No remote URL means no registry listing and no passive discovery. This proposal fixes that.

The RARI DAO’s 2025 KPI is 5x growth in monthly active users and protocol transactions. Every developer who asks Claude ‘how do I integrate Rari?’ and gets broken code or a response referencing a 2022 exploit is a lost builder. This proposal fixes that directly.

Why This Matters for the RARI DAO

The RARI DAO governs the Rarible Protocol, the infrastructure any developer uses to build an NFT marketplace. Yet the DAO has never received a proposal that addresses the Rarible Protocol’s developer discoverability head-on like this one does.

This proposal creates durable infrastructure: documentation that compounds indefinitely, an MCP server every future developer benefits from, and positioning content AI models will cite in perpetuity. Unlike marketing campaigns or token incentives, these are artifacts that produce results long after delivery.

What Gets Built: 11 Deliverables

Category 1 — Foundation Layer

    1. llms.txt + llms-full.txt

The single highest-leverage AI fix. A canonical file at docs.rarible.org/llms.txt that tells Anthropic, OpenAI, and Google exactly what the Rarible Protocol is, what it isn’t, and how to use it. Includes a critical disambiguation block separating the Protocol from the marketplace, Rari Chain, and the defunct Rari Capital. Crawled during training data updates.
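As an illustration, the disambiguation block could look something like the sketch below. This is a hypothetical draft, not final copy: the section paths under docs.rarible.org are placeholders, and the exact wording would be produced during the engagement.

```markdown
# Rarible Protocol

> Open-source, multichain NFT infrastructure governed by the RARI DAO.
> Not the same as Rarible.com (a marketplace built on the Protocol),
> Rari Chain, or Rari Capital (an unrelated, defunct DeFi protocol
> exploited in 2022).

## Docs
- [Getting Started](https://docs.rarible.org/getting-started): first API call
- [API Reference](https://docs.rarible.org/reference): endpoints and auth
```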

Category 2 — Documentation Rewrites

    2. Getting Started & API Docs Rewrite

Code-first structure with a working curl request against a real endpoint within 60 seconds of reading. Adds clear auth patterns, an explanation of the identifier format that AI models currently fabricate, and a disambiguation notice.

    3. Entity Disambiguation Page

Structured side-by-side table separating Rarible Protocol, Rarible.com, Rari Chain, and Rari Capital with a clear timeline of the 2022 exploit and its organizational separation from RARI Foundation. Written specifically for AI citation so models stop applying exploit-era risk framing to the Protocol.

    4. MCP Docs Rewrite

Remove all placeholder text, add verified setup guides for Claude Desktop and Cursor, and add real example prompts with expected outputs.

    5. Positioning Page: Why Rarible Protocol

Documents guaranteed onchain royalty enforcement (vs. platform-optional royalties on OpenSea/Blur), the Mattel case study structured for AI citation, multichain coverage across Ethereum, Polygon, Flow, Eclipse, and Rari Chain, and an emerging use case in onchain content verification (relevant to C2PA standards YouTube and Instagram are implementing). Structured as declarative prose AI models can retrieve and repeat accurately.

Category 3 — Developer Guides

    6. Developer Guide: NFT Portfolio Tracker [vibecoding version]

End-to-end tutorial using live Protocol endpoints with complete, tested TypeScript code. Includes a section on using the remote Rari MCP with Claude to query portfolio data in natural language.
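To give a flavor of the guide’s code style, here is a minimal, self-contained TypeScript sketch of the kind of helper it would include. The item shape and field names are illustrative assumptions, not the Protocol’s actual response schema:

```typescript
// Illustrative shape of an NFT item as the guide might model it.
// Field names are assumptions, not the Protocol's actual schema.
interface NftItem {
  collection: string; // e.g. "ETHEREUM:0xabc..."
  tokenId: string;
}

// Group a wallet's items by collection and count holdings per collection.
function summarizePortfolio(items: NftItem[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const item of items) {
    counts.set(item.collection, (counts.get(item.collection) ?? 0) + 1);
  }
  return counts;
}

// Example: two items in one collection, one in another.
const summary = summarizePortfolio([
  { collection: "ETHEREUM:0xabc", tokenId: "1" },
  { collection: "ETHEREUM:0xabc", tokenId: "2" },
  { collection: "POLYGON:0xdef", tokenId: "7" },
]);
console.log(summary.get("ETHEREUM:0xabc")); // 2
```

The real guide would feed this from live Protocol responses; the point is that every snippet is runnable as-is rather than pseudocode.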

    7. Developer Guide: Royalty Setup & Enforcement

Complete walkthrough structured so AI recommends Rarible Protocol when developers ask ‘which NFT platform enforces royalties?’, connecting the positioning page to working implementation code.

Category 4 — MCP Infrastructure & Proof of Results

    8. Remote Streamable HTTP MCP Server

Converts the existing STDIO MCP to a publicly accessible URL (e.g., https://mcp.rarible.org/mcp). No local npx install required. Verified working with Claude Desktop, Cursor, and Claude Code.
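For clients that support remote MCP servers natively (Cursor’s mcp.json, for example), connecting could be as simple as the sketch below. The URL is the proposal’s example and is not live yet; the exact config key may vary by client:

```json
{
  "mcpServers": {
    "rarible-protocol": {
      "url": "https://mcp.rarible.org/mcp"
    }
  }
}
```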

    9. MCP Registry Listing

Submitted to GitHub’s official MCP Registry and Anthropic’s partner list with metadata optimized for ‘NFT data MCP’ and ‘NFT portfolio tools’ searches. Passive, permanent discoverability.

    10. Before/After Demo Video

Screen recording showing 0% → working code. Part 1 (before): Claude is asked to build an NFT portfolio tracker and produces fabricated endpoints against the wrong service. Part 2 (after): the same prompt with the MCP connected yields real endpoints and working code. Delivered as a shareable Loom link for forum posts and DAO voting.

    11. Before/After Stage 3 Test Rerun

Identical 45 code generation queries rerun against updated docs and live MCP. Results tabulated and published as a public forum post. Target: 0% → 50%+ code execution success rate.

Timeline

| Week | Documentation Work | MCP & Infrastructure |
| --- | --- | --- |
| Week 1 | llms.txt + llms-full.txt, Getting Started rewrite, disambiguation page, positioning page draft | MCP server deployed to staging, API key auth setup, Claude Desktop + Cursor verification |
| Week 2 | MCP docs rewrite, Dev Guide 1 (portfolio tracker), forum update posted | Staging URL live for community testing, SSE backward-compat verified, registry submissions drafted |
| Week 4 | Dev Guide 2 (royalty setup), final review with RARI team, all docs finalized | Registry submissions live, demo video recorded, Stage 3 rerun complete, all deliverables handed over |

Total duration: 4 weeks from grant approval. All IP and deliverables fully owned by RARI Foundation / RARI DAO upon final payment.

Payment Structure

$1,500 (50%) on proposal approval → $1,500 (50%) on confirmed delivery of all 11 deliverables. Funds will be received in ETH equivalent to $3,000. Wallet address provided upon approval.

No proprietary system access required. All documentation work uses publicly available information. MCP deployment uses the Foundation’s existing open-source SDK and a Rarible API key provided by the team for testing.

Cost Breakdown

| Deliverable | Cost |
| --- | --- |
| Foundation Layer (llms.txt) | $250 |
| Documentation Rewrites (4 pages) | $1,200 |
| Developer Guides (2 guides) | $200 |
| MCP Infrastructure + Docs | $1,200 |
| Stage 3 Rerun + Video | $150 |
| Total | $3,000 |

All amounts payable in ETH at time of transfer.

5 Likes

gm @tlazypanda,

Welcome to RARI DAO.

I like this proposal because it aligns well with positioning Rarible Protocol as the go-to NFT infra for teams building tokenized experiences right into their apps and products. AI discoverability, clean docs, and a solid remote MCP are important for giving devs real endpoints and working code when they ask an AI chat such as ChatGPT, Claude, Gemini, etc., something like “how do I add tokenization to my product?”

Happy to see a proposal directly address Rarible Protocol.

4 Likes

Been making huge steps myself developing apps, scripts and websites with the help of AI. I know exactly how it works and how valuable this could be. Maybe you would be able to pitch your idea during the upcoming Governance Call? Would love to hear some insights.

4 Likes

RM :zap: @tlazypanda I understand that this proposal aims to use AI to represent and recommend the Rarible Protocol. At this point, I think the proposer needs to be very clear about what Rarible is in all its presentations to be on the right track.

Currently, the models aren’t fully reliable and generate incorrect code. They confuse the Protocol with Rari Capital and don’t position it as NFT infrastructure, defaulting instead to the Rarible marketplace.

It’s also mentioned that it will have measurable tests: going from 0% to 50%+ success in generating functional code.

This isn’t marketing; it’s infrastructure for the AI-native funnel where developers are born today.

To a certain extent, it aligns with the protocol’s growth KPI by directly targeting the discovery layer via AI. However, I still have two lingering questions:

:one: Beyond increasing the technical success rate, what is your 12-month vision for Rarible Protocol to become the default recommended NFT infrastructure for AI responses?

:two: After delivering the 11 deliverables, how do you plan to maintain and optimize this AI-native layer in the face of new models and changes in the ecosystem?

6 Likes

Thanks for the thoughtful read, @sohobiit :))
To answer your questions:

  1. 12-month vision:
    The 11 deliverables in this proposal fix the immediate failures (broken code, invisible MCP). But the 12-month vision is compounding: once the llms.txt is live and the disambiguation content is indexed, every new AI training sweep reinforces the correct signal. The remote MCP server means Rarible Protocol becomes part of the ecosystem developers are already building in: it gets listed in registries (e.g., the GitHub MCP Registry) and referenced in tutorials and blog posts that AI models then train on. The positioning page and developer guides are written specifically to be citable sources, so over time AI responses start pulling from them organically. The 12-month vision is to create more such citable sources and eventually allow AI agents to directly engage with Rari.

  2. Ongoing maintenance as models evolve:

Honestly, it’s one I’ve been thinking about beyond this proposal. The 11 deliverables are built to be durable by design: llms.txt is a static file, the MCP server tracks the existing SDK, and the docs pages are stable unless the Protocol itself changes. But as you said, AI model behavior isn’t static.

Ideally, to optimise for this, we should:

  • Quarterly reruns of the full Stage 1–3 audit suite to catch model drift, identify new competitors surfacing in AI responses & benchmark against updated baselines
  • MCP server maintenance & updates as the MCP spec evolves
  • docs refresh cycles triggered by Protocol updates or major model releases (GPT-5, Gemini 2.x, Claude 4)
  • Expansion of the llms.txt and positioning content as Rari adds new chains, partnerships, or use cases
  • Monitoring of MCP registry rankings and adjusting metadata to stay discoverable as the registry grows

AI discoverability isn’t a one-time problem; it’s closer to SEO.
Happy to scope that out separately once this proposal delivers :slight_smile:

5 Likes

Nice proposal! Very interesting IMO.

I just have a couple of comments:

  1. It would be good to clearly measure the before/after, so we can verify code discoverability beyond internal testing.
  2. After the project is completed:
  • What’s the plan to maintain the docs, llms.txt, and the MCP server?
  • It would be good to list dependencies on AI models to ensure these improvements persist over time.

As an additional note, IMO this proposal would benefit from some complementary support around SEO, GEO, etc.

3 Likes

Hi @tlazypanda! welcome to the forum!

This proposal makes sense to me. For 3K I think this is a great infrastructure upgrade.

One thing I’d encourage adding is GitHub / portfolio / socials, as outlined in the grant template. Having that context helps delegates assess execution capability more confidently. Also, aligned with @forexus, a short walkthrough on the next Governance Call would be useful.

More broadly, I see this as a solid first step. What excites me is the bigger picture: if we position Rarible well within these new AI-driven development flows, we could attract more builders experimenting faster, which ultimately leads to more creativity from artists and new types of projects building on top of the protocol. That’s where I think the real upside is.

3 Likes

Thanks for the detailed answer — really appreciate the depth of thought here :pray:

Your 12-month framing makes sense, especially the compounding thesis around llms.txt + citable documentation + registry presence. Treating AI discoverability as infrastructure (not marketing) is the right mental model.

That said, I’d like to clarify two strategic points for the DAO:

:one: If the long-term vision is continuous AI-native positioning (quarterly audits, MCP maintenance, registry optimization, drift monitoring), do you see yourself as the long-term steward of this layer for the Protocol — or should the DAO expect to internalize this function after the initial delivery?

:two: Would you be open to defining clear success KPIs beyond code execution rate?
For example:
• Increase in AI recommendation advocacy rate
• Reduction of exploit-era confusion (e.g. references to Rari Capital)
• Inclusion in top 3 infra mentions for “build NFT marketplace” prompts
• Registry ranking targets

If this becomes “AI SEO for protocol infrastructure,” then measurement will be key to justify future iterations.

Overall, I think the framing is strong. The key governance question now is whether this is a one-time corrective intervention or the beginning of a new permanent growth surface for the Protocol.

3 Likes

hey @sohobiit ty for more detailed feedback, happy to answer :))

re:1 Yes, while this proposal is scoped to solve the immediate issues, I am planning to build this out as a dedicated service for AI-native protocol positioning. In any case, I would provide complete knowledge transfer as part of this engagement, so if the DAO ever decides to bring someone on board, they have a base to continue from.

re:2 Expanded KPIs:

  • Currently, with respect to advocacy rate, we are seeing a 40% mention rate by Claude across the 45 discovery-query test cases, which we can aim to raise to 55-65%. The reason for not including this in the proposal is that I’m not sure when the new content will get indexed and picked up by the AI models’ web search or training data, whereas code execution rate can be committed to because the AI will be directly connected to the MCP. A good timeline to benchmark this data would be 60-90 days.
  • The references to Rari Capital are honestly a tricky problem, because the Rari exploit was covered by major digital news and media websites with very high SEO domain authority that will always rank higher than the disambiguation and positioning content we publish. This needs to be an ongoing effort to get AI to recognize that Rari Capital and Rarible are two different entities. I would definitely still benchmark and share the kind of results we can achieve around this.
  • Given the scope of the proposal, inclusion in top 3 infra mentions for “NFT royalty” queries might be possible. The “build NFT marketplace” query is a bit broader, which would require more content pieces created specifically to achieve that as an end goal.
  • The registry submission is just a listing. The ranking is not relevant to actual performance.

I’m happy to add the above as formal success criteria to the proposal before it goes to vote. That makes the accountability explicit and gives the DAO a clear basis for evaluating any follow-on engagement.

3 Likes

Hey @Kaf_Anode, ty for the warm welcome and the support!

Will add GitHub, portfolio, and socials to the proposal now :)) My background in developer relations aligns with what we want to deliver with this proposal.

Happy to join the next Governance Call for a walkthrough. Will coordinate with the team on timing.

And yes, the immediate win is fixing broken code generation. We want more builders experimenting faster, more creative surface area for artists and new project types on top of the protocol :))

2 Likes

Hey @Jaf, really appreciate the detailed feedback!

  • re: measuring before/after beyond internal testing: The Stage 3 rerun (deliverable 11) will be published as a full public forum post with the complete 45-query test suite, model-by-model results, and error breakdowns, so any delegate or community member can reproduce the tests themselves. I’ll also make the original pre-engagement audit results public alongside it, so the delta is verifiable end-to-end, not just self-reported.

  • re: maintaining docs, llms.txt, and the MCP server post-delivery

The deliverables are designed to be low-maintenance by default: llms.txt is a static file, the MCP server tracks the existing SDK, and the docs pages are stable unless the Protocol upgrades. Once these deliverables are live, there is no hard dependency on AI models, but we do need to routinely check whether newer models pick up these content resources correctly.

We can specify required quarterly audit reruns, MCP spec updates, docs refresh cycles tied to Protocol or major model releases. Happy to scope that as a follow-on once this proposal delivers.

re: SEO/GEO

Completely agree. The proposal includes a disambiguation page and some positioning content, mostly developer-centric. This will immediately help models rank Rarible when answering questions specific to creators, for example royalty embedding. As you pointed out, this definitely needs to become an ongoing content engine.

This initial work should help solve most of the issues around current indexing & docs access. Would love to explore the rest as a follow-on with the Foundation.

3 Likes