From the course: Model Context Protocol (MCP): Hands-On with Agentic AI

Using MCP servers in Claude Desktop

- I think the best way to understand the Model Context Protocol is to see it in action. So I'll start this course by showing you some MCPs running in Claude Desktop on my computer. Then, when you've seen what's possible, jump to the next article, and you'll learn how to install and use MCPs in Claude Desktop on your computer. And if you're watching this and you're not a coder, don't get deterred by the developer-centric language I'm using and the developer-centric approach that currently exists around this technology. Using MCPs does not require you to be a coder. It's just that, as I'm recording this, this technology is less than six months old, and it's been developer-focused so far, so there aren't good interfaces for them yet. But I expect that as MCPs get more traction, we'll also get better user interfaces, and they'll be more tightly integrated into tools like Claude Desktop. So the code portion is probably a short-term blocker. But enough talk, let me show you some MCPs running in Claude Desktop so you understand what this is all about. And for that, I need to jump over to my recording booth 'cause I need my computer.

To use MCPs in Claude, you need to install Claude Desktop on your computer. They're currently not supported in the Claude web app, and that makes sense because many of your MCP servers will exist on your computer and interact with files and applications on your computer. I've already equipped my Claude Desktop with several MCPs. To see a list of them, I can click on this button here, Attach from MCP. That opens a modal. From here, I can click Installed MCP Servers and get a list of the available servers. I can also click Choose an integration and pick a specific tool or resource I want to use. And if I want more information about the available tools, I can click on the tools icon here. This pops up another modal window with a full list of all the available tools and descriptions of what those tools do.

Now, seeing this list, you're probably wondering, how am I supposed to remember all of these tools and what they do? And that's part of the magic of using MCPs. You don't need to remember, or even know, which tools are available. The language model will surface the appropriate tools when they're available. Let me show you three quick examples.

First, I'll ask Claude to generate a haiku for me. Now that I have this haiku, I want to know how many characters are in the haiku, both with and without spaces, and how many words. Language models are famously bad at this type of counting task, so I've created an MCP that does that work for the language model. When I ask, "count the characters, with and without spaces, and also give me the total word count," Claude recognizes there's an MCP server called Text Assist that has a tool called count_total_characters, and it asks me if I want to run it. And then it also says, "Malicious MCP servers or conversation content could potentially trick Claude into attempting harmful actions through your installed tools. Review each action carefully before approving." So here, it's my job as the operator of Claude to look at what's going on. And what I'm seeing is that the tool input here is just the text, so I'm going to allow this for this chat. Now Claude starts using the tool and immediately asks to use another tool, count_characters_without_spaces. I'll allow it again. And finally, it asks to use count_words. So I'll allow a third time.
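For readers who want a peek under the hood before we continue: those three tools are just ordinary functions exposed over MCP. Here is a minimal sketch of what a server like Text Assist could look like using FastMCP from the official Python MCP SDK. The tool names match what Claude surfaced above, but the server name, function bodies, and counting rules are my assumptions, not the course's actual code.

```python
# Minimal sketch of a "Text Assist"-style MCP server (assumed implementation).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Text Assist")

@mcp.tool()
def count_total_characters(text: str) -> int:
    """Count all characters in the text, including spaces."""
    return len(text)

@mcp.tool()
def count_characters_without_spaces(text: str) -> int:
    """Count characters in the text, excluding all whitespace."""
    return len("".join(text.split()))

@mcp.tool()
def count_words(text: str) -> int:
    """Count whitespace-separated words in the text."""
    return len(text.split())

if __name__ == "__main__":
    # stdio is the transport Claude Desktop uses for local servers.
    mcp.run(transport="stdio")
```

And for context on how a server like this ends up in that Installed MCP Servers list: Claude Desktop reads local servers from its claude_desktop_config.json file (on macOS, typically under ~/Library/Application Support/Claude). Here is a hedged example entry, assuming the server above is saved at a hypothetical path; restarting Claude Desktop after editing the file makes the new server appear.

```json
{
  "mcpServers": {
    "text-assist": {
      "command": "python",
      "args": ["/absolute/path/to/text_assist_server.py"]
    }
  }
}
```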
What happens behind the scenes now is that Claude takes the MCP server with whatever tools are packaged in it, spins it up inside the Claude environment, runs the software, and captures the response. In return, I get the total characters including spaces, the characters without spaces, and a total word count. And I can open each of these tabs and see what Claude did with the tools. This shows you MCP tools operating within the context of Claude. But that's not the only thing they can do.

Our family is going on a road trip tomorrow, and we need to know the weather forecast so we can pack the appropriate clothes. Now, language models have no access to the current weather or weather forecasts, so I've created an MCP that polls Open-Meteo for that data, and later on in the course, you'll build that same MCP. Let's see if I can trigger it. "My family's going on a two-day road trip to Squamish tomorrow. What clothes should we bring to fit with the weather?" Claude identifies the weather MCP and the get_forecast tool, then recognizes the get_current_weather tool. When I allow these tools, the MCP server makes a call to the API, retrieves the information, and sends it back into Claude, and Claude can then process that information in accordance with my prompt. In response, I get a detailed breakdown of what the weather's going to be like and what type of clothes it recommends we bring. And from the looks of it, we're going to hit a lot of rain and cold weather, so we're bringing all our winter rain gear.

The weather example is advanced, but let me show you something even more complex. GitHub Models provides many different models for experimentation. Now let's say I want to compare several different models. I could do that by building my own custom software, but that would be labor-intensive and tedious. Instead, I've created an MCP and authenticated it with my GitHub personal access token, and that MCP can give me a list of all the available models and then run completions on any of those models to my specifications. When I ask what models are currently available from GitHub Models, Claude recognizes GitHub Models Comparison as an MCP and surfaces the tool list_available_models. When I allow that tool, it makes a call to GitHub Models and gets a complete list of all available models. And when I give it the prompt, "compare GPT-4o-mini, Phi-3-mini, and Mistral Small using the following prompt: how many n's in the word bananas?" it surfaces compare_models. The compare_models tool in the MCP now makes three separate calls to GitHub Models to get the inference of the same prompt from those three different models. And here are the results, along with an analysis. As you can see, GPT-4o-mini and Phi-3-mini gave the right answer, two n's, while Mistral Small gave the wrong answer, three n's.

Now that you've seen what's possible with MCPs in Claude Desktop, jump to the next article, follow the instructions, and test these and other MCPs on your own computer.
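If you want to study the other two servers from this demo before jumping ahead, here are rough sketches of both. First, the weather MCP: the get_current_weather and get_forecast tool names come from the transcript, but the Open-Meteo query parameters and the choice to take latitude and longitude as arguments are my assumptions; the server you'll build later in the course may be structured differently.

```python
# Rough sketch of a weather MCP server that polls Open-Meteo (assumed details).
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")
OPEN_METEO_URL = "https://api.open-meteo.com/v1/forecast"

@mcp.tool()
async def get_current_weather(latitude: float, longitude: float) -> dict:
    """Return current conditions for a location from Open-Meteo."""
    params = {"latitude": latitude, "longitude": longitude, "current_weather": True}
    async with httpx.AsyncClient() as client:
        response = await client.get(OPEN_METEO_URL, params=params)
        response.raise_for_status()
        return response.json()["current_weather"]

@mcp.tool()
async def get_forecast(latitude: float, longitude: float, days: int = 2) -> dict:
    """Return a daily forecast (temperatures and precipitation) for a location."""
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "daily": "temperature_2m_max,temperature_2m_min,precipitation_sum",
        "forecast_days": days,
        "timezone": "auto",
    }
    async with httpx.AsyncClient() as client:
        response = await client.get(OPEN_METEO_URL, params=params)
        response.raise_for_status()
        return response.json()["daily"]

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

And a sketch of the GitHub Models Comparison server. The list_available_models and compare_models tools mirror the transcript, but the catalog and inference URLs, the payload shape, and the GITHUB_TOKEN environment variable are assumptions based on GitHub Models' OpenAI-compatible REST API; check the current GitHub Models documentation before relying on them.

```python
# Very rough sketch of a GitHub Models comparison MCP server (assumed endpoints).
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("GitHub Models Comparison")

# Assumed endpoints; verify against the GitHub Models documentation.
CATALOG_URL = "https://models.github.ai/catalog/models"
INFERENCE_URL = "https://models.github.ai/inference/chat/completions"

def _headers() -> dict:
    # Authenticate with a GitHub personal access token, as in the transcript.
    token = os.environ["GITHUB_TOKEN"]
    return {"Authorization": f"Bearer {token}", "Accept": "application/json"}

@mcp.tool()
async def list_available_models() -> list:
    """List the models currently available from GitHub Models."""
    async with httpx.AsyncClient() as client:
        response = await client.get(CATALOG_URL, headers=_headers())
        response.raise_for_status()
        return response.json()

@mcp.tool()
async def compare_models(models: list[str], prompt: str) -> dict:
    """Run the same prompt against several models and collect the replies."""
    results = {}
    async with httpx.AsyncClient(timeout=60) as client:
        for model in models:
            payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
            response = await client.post(INFERENCE_URL, headers=_headers(), json=payload)
            response.raise_for_status()
            results[model] = response.json()["choices"][0]["message"]["content"]
    return results

if __name__ == "__main__":
    mcp.run(transport="stdio")
```

With servers like these registered in claude_desktop_config.json, prompts such as the road-trip question or the three-model comparison above are enough to make Claude surface and call the corresponding tools.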
