rmcp
rmcp is the official Rust implementation of the Model Context Protocol (MCP), a protocol designed for AI assistants to communicate with other services. This library can be used to build both servers that expose capabilities to AI assistants and clients that interact with such servers.
Creating a server with tools is simple using the `#[tool]` macro:
```rust
use rmcp::{Error as McpError, ServiceExt, model::*, tool, transport::stdio};
use std::sync::Arc;
use tokio::sync::Mutex;

#[derive(Clone)]
pub struct Counter {
    counter: Arc<Mutex<i32>>,
}

#[tool(tool_box)]
impl Counter {
    fn new() -> Self {
        Self {
            counter: Arc::new(Mutex::new(0)),
        }
    }

    #[tool(description = "Increment the counter by 1")]
    async fn increment(&self) -> Result<CallToolResult, McpError> {
        let mut counter = self.counter.lock().await;
        *counter += 1;
        Ok(CallToolResult::success(vec![Content::text(
            counter.to_string(),
        )]))
    }

    #[tool(description = "Get the current counter value")]
    async fn get(&self) -> Result<CallToolResult, McpError> {
        let counter = self.counter.lock().await;
        Ok(CallToolResult::success(vec![Content::text(
            counter.to_string(),
        )]))
    }
}

// Implement the server handler
#[tool(tool_box)]
impl rmcp::ServerHandler for Counter {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            instructions: Some("A simple counter".into()),
            capabilities: ServerCapabilities::builder().enable_tools().build(),
            ..Default::default()
        }
    }
}

// Run the server
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create and run the server with STDIO transport
    let service = Counter::new().serve(stdio()).await.inspect_err(|e| {
        println!("Error starting server: {}", e);
    })?;

    service.waiting().await?;
    Ok(())
}
```
Creating a client to interact with a server:
```rust
use rmcp::{
    model::CallToolRequestParam,
    service::ServiceExt,
    transport::{TokioChildProcess, ConfigureCommandExt},
};
use tokio::process::Command;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a server running as a child process
    let service = ()
        .serve(TokioChildProcess::new(Command::new("uvx").configure(
            |cmd| {
                cmd.arg("mcp-server-git");
            },
        ))?)
        .await?;

    // Get server information
    let server_info = service.peer_info();
    println!("Connected to server: {server_info:#?}");

    // List available tools
    let tools = service.list_tools(Default::default()).await?;
    println!("Available tools: {tools:#?}");

    // Call a tool
    let result = service
        .call_tool(CallToolRequestParam {
            name: "increment".into(),
            arguments: None,
        })
        .await?;
    println!("Result: {result:#?}");

    // Gracefully close the connection
    service.cancel().await?;
    Ok(())
}
```
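When a tool takes arguments, they are passed as a JSON object in `CallToolRequestParam`. The short sketch below continues the example above and assumes a hypothetical `sum` tool with integer parameters `a` and `b`:

```rust
use rmcp::model::CallToolRequestParam;

// `service` is the running client from the example above.
let result = service
    .call_tool(CallToolRequestParam {
        name: "sum".into(),
        // Arguments are an optional JSON object (a serde_json map).
        arguments: serde_json::json!({ "a": 1, "b": 2 }).as_object().cloned(),
    })
    .await?;
println!("Result: {result:#?}");
```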
RMCP supports multiple transport mechanisms, each suited for different use cases:
- `transport-async-rw`: Low-level interface for asynchronous read/write operations. This is the foundation for many other transports.
- `transport-io`: For working directly with I/O streams (`tokio::io::AsyncRead` and `tokio::io::AsyncWrite`).
- `transport-child-process`: Run MCP servers as child processes and communicate via standard I/O.
Example:
```rust
use rmcp::{ServiceExt, transport::TokioChildProcess};
use tokio::process::Command;

// Spawn the server as a child process and talk to it over its stdio.
let transport = TokioChildProcess::new(Command::new("mcp-server"))?;
// `()` serves as a minimal client handler, as in the client example above.
let service = ().serve(transport).await?;
```
You can get the `Peer` struct from `NotificationContext` and `RequestContext`.
```rust
use rmcp::{
    ServerHandler,
    model::{LoggingLevel, LoggingMessageNotificationParam, ProgressNotificationParam},
    service::{NotificationContext, RoleServer},
};

pub struct Handler;

impl ServerHandler for Handler {
    async fn on_progress(
        &self,
        notification: ProgressNotificationParam,
        context: NotificationContext<RoleServer>,
    ) {
        let peer = context.peer;
        let _ = peer
            .notify_logging_message(LoggingMessageNotificationParam {
                level: LoggingLevel::Info,
                logger: None,
                data: serde_json::json!({
                    "message": format!("Progress: {}", notification.progress),
                }),
            })
            .await;
    }
}
```
In many cases you need to manage several services in a collection; you can call `into_dyn` to convert services into the same type.

```rust
let service = service.into_dyn();
```
RMCP uses feature flags to control which components are included:
- `client`: Enable client functionality
- `server`: Enable server functionality and the tool system
- `macros`: Enable the `#[tool]` macro (enabled by default)
- Transport-specific features:
  - `transport-async-rw`: Async read/write support
  - `transport-io`: I/O stream support
  - `transport-child-process`: Child process support
  - `transport-sse-client` / `transport-sse-server`: SSE support
  - `transport-streamable-http-client` / `transport-streamable-http-server`: HTTP streaming
- `auth`: OAuth2 authentication support
- `schemars`: JSON Schema generation for tool definitions (see the sketch after the transport list below)
In terms of concrete transports, the transport features provide:

- `transport-io`: Server stdio transport
- `transport-sse-server`: Server SSE transport
- `transport-child-process`: Client stdio transport
- `transport-sse-client`: Client SSE transport
- `transport-streamable-http-server`: Streamable HTTP server transport
- `transport-streamable-http-client`: Streamable HTTP client transport
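Where a tool takes structured arguments, the `schemars` feature is what generates the JSON Schema advertised in its tool definition. The sketch below is illustrative rather than canonical: it assumes the aggregated-parameter form of the `#[tool]` macro, a hypothetical `Calculator` service, and `serde` as a dependency.

```rust
use rmcp::{schemars, tool};

// Argument struct: `JsonSchema` (from the `schemars` feature) produces the
// parameter schema that clients see when they list tools.
#[derive(Debug, serde::Deserialize, schemars::JsonSchema)]
pub struct SumRequest {
    #[schemars(description = "the left-hand operand")]
    pub a: i32,
    #[schemars(description = "the right-hand operand")]
    pub b: i32,
}

// Hypothetical service used only for this sketch.
#[derive(Debug, Clone)]
pub struct Calculator;

#[tool(tool_box)]
impl Calculator {
    // The whole struct is deserialized from the tool-call arguments.
    #[tool(description = "Calculate the sum of two numbers")]
    fn sum(&self, #[tool(aggr)] SumRequest { a, b }: SumRequest) -> String {
        (a + b).to_string()
    }
}
```

As with `Counter` above, a `#[tool(tool_box)]` impl of `ServerHandler` for `Calculator` would expose the tool to clients.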
Transport
The transport type must implement the [`Transport`] trait, which allows it to send messages concurrently and receive messages sequentially. There are three pairs of standard transport types:

| transport | client | server |
|---|---|---|
| std IO | [`child_process::TokioChildProcess`] | [`io::stdio`] |
| streamable http | [`streamable_http_client::StreamableHttpClientTransport`] | [`streamable_http_server::session::create_session`] |
| sse | [`sse_client::SseClientTransport`] | [`sse_server::SseServer`] |
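As an illustration of the SSE pair, a client can be built over [`sse_client::SseClientTransport`]. The sketch below assumes the `transport-sse-client` feature is enabled and that the transport exposes a `start` constructor taking the server's SSE endpoint URL; check the module documentation for the exact constructor in your version.

```rust
use rmcp::{ServiceExt, transport::SseClientTransport};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed constructor and placeholder URL for a running SSE MCP server.
    let transport = SseClientTransport::start("http://localhost:8000/sse").await?;

    // `()` is the minimal client handler, as in the stdio client example above.
    let client = ().serve(transport).await?;
    println!("Connected to: {:#?}", client.peer_info());

    client.cancel().await?;
    Ok(())
}
```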
IntoTransport trait
[`IntoTransport`] is a helper trait that implicitly converts a type into a transport type. The [`IntoTransport`] trait is automatically implemented for these types:

- A type that already implements both the [`futures::Sink`] and [`futures::Stream`] traits, or a tuple `(Tx, Rx)` where `Tx` is a [`futures::Sink`] and `Rx` is a [`futures::Stream`].
- A type that implements both the [`tokio::io::AsyncRead`] and [`tokio::io::AsyncWrite`] traits, or a tuple `(R, W)` where `R` is [`tokio::io::AsyncRead`] and `W` is [`tokio::io::AsyncWrite`].
- A type that implements the `Worker` trait.
- A type that implements the [`Transport`] trait.
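Because an `(AsyncRead, AsyncWrite)` tuple already satisfies [`IntoTransport`], a server can run over any byte stream. Below is a minimal sketch, not an official example, that reuses the `Counter` handler from the server example above and serves it over an accepted TCP connection (the bind address is a placeholder):

```rust
use rmcp::ServiceExt;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder address; `Counter` is the handler from the server example.
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    let (stream, _addr) = listener.accept().await?;

    // Splitting the socket yields an (AsyncRead, AsyncWrite) pair,
    // which IntoTransport accepts directly as a transport.
    let (read_half, write_half) = stream.into_split();
    let service = Counter::new().serve((read_half, write_half)).await?;
    service.waiting().await?;
    Ok(())
}
```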
This project is licensed under the terms specified in the repository's LICENSE file.