Introduction

In this discussion, I will try to create a TCP tunnel server in Rust that forwards local TCP connections to a Tor v3 Onion Service. The server will listen on a localhost port and transparently proxy any TCP-based protocol (such as HTTP/2 or gRPC) to a specified .onion address via the Tor network. We will embed a Tor client using Arti (the Rust Tor implementation) instead of relying on an external Tor daemon. This means our application will run its own Tor client internally, so no separate Tor process or SOCKS proxy is required. I was primarily thinking about this for use in Haveno Mobile, but realised it probably has a use case for many apps that want Tor integration without the dependency of having the user install Orbot or a similar VPN configuration that routes traffic through Tor.

Why embed Tor with Arti? Using Arti allows our Rust program to manage Tor connectivity itself. An alternative approach would be to run a Tor SOCKS proxy (like the Tor daemon or Arti's proxy mode) and connect through it. For example, the tor-tunnels utility creates a TCP tunnel to hidden services by using a provided SOCKS5 proxy (a running Tor instance). It accepts local connections, opens a connection to the hidden service via the SOCKS proxy, and then forwards data between the client and the onion service. While that approach works, it requires an external Tor service. In this project, we prioritize a self-contained solution using modern Rust crates, specifically arti-client for Tor, to avoid external dependencies.
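
For contrast, the external-proxy approach looks roughly like the sketch below. It is not part of our project: it assumes a Tor daemon already running with SOCKS5 on its default port (127.0.0.1:9050) and the third-party tokio-socks crate as a dependency.

```rust
// Sketch of the external-proxy alternative (NOT used in this project).
// Assumes a running Tor daemon with SOCKS5 on 127.0.0.1:9050 and the
// `tokio-socks` crate.
use tokio::net::TcpListener;
use tokio_socks::tcp::Socks5Stream;

async fn socks_based_tunnel() -> anyhow::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut client, _) = listener.accept().await?;
        tokio::spawn(async move {
            // Ask the external Tor SOCKS proxy to reach the onion service.
            let mut onion = Socks5Stream::connect(
                "127.0.0.1:9050",
                ("duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion", 80),
            )
            .await?;
            tokio::io::copy_bidirectional(&mut client, &mut onion).await?;
            anyhow::Ok(())
        });
    }
}
```

The forwarding logic is nearly identical to what we build below; the difference is that every connection depends on a separate Tor process being installed, configured, and running.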

Architecture and Design

Overview

Our tunnel server consists of two main parts:

  • Tor Client (Arti): Responsible for establishing connections to the Tor network and ultimately to the target onion service.
  • Local TCP Listener: Listens on a localhost port for incoming client connections and forwards them through the Tor client to the onion service.

Only loopback (127.0.0.1) is used for the listening socket to ensure the tunnel is accessible only to local applications. We assume any authentication/encryption at the application level (e.g. TLS in gRPC or HTTPS) is handled by the client and server on either end of the tunnel, so the tunnel itself does not perform TLS. It simply pipes bytes from local to Tor and back.

Using an Embedded Tor Client (Arti)

Arti is a pure-Rust implementation of Tor provided as a library. We will use the high-level arti-client crate, which exposes a TorClient API for making anonymized connections. Under the hood, Arti will manage Tor circuits and directory information. Notably, Arti’s TorClient returns a DataStream that implements asynchronous read/write traits, behaving like a normal TCP stream carrying data over the Tor network. This design allows us to use standard async I/O operations on Tor connections.

Onion service support: By default, Arti (for security reasons) does not allow connecting to .onion addresses unless explicitly enabled. The onion service feature is considered experimental and is behind a Cargo feature flag. We will enable the onion-service-client feature in our Cargo.toml so that Arti knows about onion addresses. Additionally, we must configure the Tor client to permit onion address connections. We can do this via the Arti configuration builder by setting address_filter.allow_onion_addrs to true. Without this step, any attempt to connect to a .onion address would result in an error (e.g. OnionAddressDisabled).

Tor client initialization: We will create a single TorClient instance at startup and let it bootstrap a connection to the Tor network (downloading consensus, establishing guard connections, etc.). Bootstrapping may take a few seconds. Alternatively, one could configure the client for on-demand bootstrap (so it connects lazily on first use), but for simplicity we’ll do it upfront. Once bootstrapped, the TorClient can be used to open multiple streams to our onion service as needed. The TorClient is thread-safe and cloneable; cloning it gives another handle to the same underlying Tor session. This is efficient and recommended over creating new Tor clients for each connection. In our design, the main thread will hold the TorClient and clone it for each incoming connection handler task.
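
The lazy-bootstrap alternative mentioned above would look roughly like this sketch; the builder method names are taken from recent arti-client releases and should be checked against the version you actually use:

```rust
// Sketch: create the client without bootstrapping up front. The first
// `connect()` call will trigger bootstrap automatically.
use arti_client::{BootstrapBehavior, TorClient, TorClientConfig};
use tor_rtcompat::PreferredRuntime;

fn lazy_client() -> anyhow::Result<TorClient<PreferredRuntime>> {
    // No network activity happens here; bootstrap is deferred to first use.
    Ok(TorClient::builder()
        .config(TorClientConfig::default())
        .bootstrap_behavior(BootstrapBehavior::OnDemand)
        .create_unbootstrapped()?)
}
```

The trade-off is that the first connection through a lazily-created client pays the full bootstrap delay, which is why we bootstrap eagerly at startup instead.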

Local TCP Tunnel Server

The tunnel server will use Tokio for async networking. We bind a Tokio TcpListener to the chosen localhost address and port. The server runs an accept loop, handling each incoming client connection in a separate asynchronous task (via tokio::spawn). Within each task, the steps are:

  1. Tor Connection: Using the cloned TorClient, initiate an anonymized TCP connection to the target onion service (specified by its .onion address and port). This yields a DataStream if successful, which represents the TCP stream to the hidden service over Tor.
  2. Data Forwarding: Once the Tor connection is established, the task will forward data between the local client socket and the Tor stream. We use tokio::io::copy_bidirectional to efficiently pump bytes in both directions until EOF. This Tokio utility reads from one stream and writes to the other, and vice versa, concurrently. It will continue copying until one side closes or an error occurs, at which point the connection is terminated.
  3. Cleanup: After copy_bidirectional returns, we close the connections. Typically, if either the client or the onion service closes the connection, the forwarding loop ends for that task. The task then exits, and resources for that connection are freed. The listener continues to accept new connections in the meantime.

By using copy_bidirectional, our tunnel can carry any arbitrary protocol without needing to understand it, be it plain HTTP/1.1, HTTP/2, gRPC, or any custom TCP-based protocol. The tunnel is transparent at the byte level.
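
To make concrete what copy_bidirectional abstracts away, a hand-rolled equivalent might split each stream into read and write halves and run one copy per direction. This is only a sketch: the real utility handles half-close and flushing more carefully, and finishes as soon as one direction is done, whereas try_join! here waits for both directions to reach EOF.

```rust
use tokio::io::{self, AsyncRead, AsyncWrite};

// Hand-rolled sketch of bidirectional forwarding: split each stream into
// read/write halves and run one copy loop per direction concurrently.
async fn forward<A, B>(a: A, b: B) -> io::Result<(u64, u64)>
where
    A: AsyncRead + AsyncWrite + Unpin,
    B: AsyncRead + AsyncWrite + Unpin,
{
    let (mut a_rd, mut a_wr) = io::split(a);
    let (mut b_rd, mut b_wr) = io::split(b);
    let to_b = async { io::copy(&mut a_rd, &mut b_wr).await }; // client -> onion
    let to_a = async { io::copy(&mut b_rd, &mut a_wr).await }; // onion -> client
    // Drive both directions at once; abort both on the first error.
    tokio::try_join!(to_b, to_a)
}
```

Given that copy_bidirectional packages all of this (plus correct shutdown semantics) into one call, we use it directly.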

Flow Summary

To summarize the runtime workflow, here's the sequence of operations:

  1. Initialize Tor Client: The application starts by configuring and bootstrapping an Arti TorClient (with onion support enabled).
  2. Bind Local Port: The application binds to 127.0.0.1:<local_port> and begins listening for incoming TCP connections.
  3. Accept Connection: When a client connects to the local port, the server accepts it and spawns a new async task to handle the connection.
  4. Connect to Onion: In the handler task, the server uses TorClient to connect to the target .onion:port. This establishes a Tor circuit and TCP stream to the onion service (via the Arti client).
  5. Relay Data: The handler then relays data between the local client socket and the Tor stream. Data from the client is sent into the Tor stream, and data coming from the onion service is written back to the client. This continues simultaneously using bidirectional copying until the session ends.
  6. Connection Teardown: If either side closes or an error occurs, the copying stops. The handler task closes both the local and Tor sockets and then ends. Meanwhile, the main server loop continues accepting other connections indefinitely.

With this design, multiple clients can be forwarded to the onion service concurrently. Each connection runs in its own task, sharing a common Tor client instance (which manages circuits to the onion service as needed).

Implementation Details

Let's dive into the implementation. We will present the Cargo.toml configuration for the project and the full source code of src/main.rs, along with explanations for each component.

Cargo.toml Dependencies

We'll use a few crates to implement this project:

  • Tokio for the asynchronous runtime and TCP networking.
  • Arti (arti-client) for the Tor client functionality.
  • Anyhow (optional) for simple error handling in main. (Any error returned by Arti or I/O will be consolidated into an anyhow::Error for easy use of the ? operator.)

In Cargo.toml:

[package]
name = "tor_tcp_tunnel"
version = "0.1.0"
edition = "2021"

[dependencies]
tokio = { version = "1", features = ["full"] }
arti-client = { version = "0.31.0", features = ["onion-service-client", "tokio", "rustls"] }
tor-rtcompat = "0.31.0"  # lets us name the async runtime type (PreferredRuntime)
anyhow = "1.0"

A few notes on these dependencies:

  • We enabled the onion-service-client feature for arti-client to include support for connecting to onion addresses. We also enabled the "tokio" feature so that Arti's DataStream implements Tokio’s AsyncRead/AsyncWrite traits (for compatibility with tokio::io utilities). The "rustls" feature brings in a modern TLS stack for connecting to Tor relays; Arti requires either Rustls or native TLS to be enabled for its networking (here we choose Rustls for a pure-Rust solution).
  • Tokio is pulled in with the "full" feature to ensure all necessary I/O components (TCP, etc.) are available. This is the simplest way to get a complete Tokio stack.
  • The code is written with the 2021 edition of Rust.

Source Code Breakdown (src/main.rs)

Below is the full source code for the tunnel server. We will go through it section by section to explain its functionality and design decisions:

use tokio::net::{TcpListener, TcpStream};
use tokio::io;
use tokio::io::AsyncWriteExt;
use arti_client::{TorClient, TorClientConfig};
use tor_rtcompat::PreferredRuntime;
use anyhow::Result;

// Constants for the local listening address and target onion service.
const LOCAL_LISTEN_ADDR: &str = "127.0.0.1:8080";  // Only listen on localhost
const ONION_ADDR: &str = "duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion";
const ONION_PORT: u16 = 80;

#[tokio::main]
async fn main() -> Result<()> {
    // 1. Configure the Tor client.
    let mut cfg_builder = TorClientConfig::builder();
    // Allow connecting to .onion addresses (disabled by default for security).
    cfg_builder.address_filter().allow_onion_addrs(true);
    let tor_config = cfg_builder.build()?;
    
    // 2. Start (bootstrap) the Tor client. This connects to the Tor network.
    let tor_client = TorClient::create_bootstrapped(tor_config).await?;
    println!("Tor client bootstrapped - connected to Tor network.");

    // 3. Bind to the local address and port on localhost.
    let listener = TcpListener::bind(LOCAL_LISTEN_ADDR).await?;
    println!("Tunnel server listening on {}", LOCAL_LISTEN_ADDR);
    println!("Forwarding TCP traffic to {}:{}", ONION_ADDR, ONION_PORT);

    // 4. Accept incoming connections in a loop.
    loop {
        let (client_socket, client_addr) = listener.accept().await?;
        println!("Accepted connection from {}", client_addr);
        
        // Clone the TorClient handle for the new task (cheap, shared reference).
        let tor_client_clone = tor_client.clone();
        let onion_addr = ONION_ADDR.to_owned();
        tokio::spawn(async move {
            // Handle the connection; log any errors.
            if let Err(err) = handle_client(client_socket, tor_client_clone, onion_addr, ONION_PORT).await {
                eprintln!("Connection handling error: {:?}", err);
            }
        });
    }
}

// Asynchronous function to handle forwarding for one client connection.
async fn handle_client(
    mut client_socket: TcpStream,
    tor_client: TorClient<PreferredRuntime>,
    onion_domain: String,
    onion_port: u16,
) -> Result<()> {
    // 5. Connect to the onion service through Tor.
    println!("Connecting to onion service {}:{}", onion_domain, onion_port);
    let mut tor_stream = tor_client.connect((onion_domain.as_str(), onion_port)).await?;
    println!("Tor connection established to onion service.");

    // 6. Forward data between client_socket and tor_stream.
    // Use copy_bidirectional to shuttle data in both directions until done.
    let (bytes_to_onion, bytes_to_client) = io::copy_bidirectional(&mut client_socket, &mut tor_stream).await?;
    println!("Forwarded {} bytes to onion, and {} bytes to client (connection closed).", bytes_to_onion, bytes_to_client);

    // 7. Ensure the Tor stream is fully flushed and shutdown the write side.
    tor_stream.shutdown().await.ok();
    // The TcpStream (client_socket) will be closed when this function returns and it is dropped.
    println!("Closed connection from client.");
    Ok(())
}

Let's break down what this code does:

Configuration and Tor Initialization: After importing the necessary crates, we define constants for the local address and the onion target. In this example, we use DuckDuckGo's onion service on port 80 as the target (ONION_ADDR), and listen on localhost port 8080 for incoming connections. You can change these constants to any v3 onion address and desired port.

Inside main, we first configure the Tor client. We create a TorClientConfig using the builder pattern. The important step is calling .address_filter().allow_onion_addrs(true) on the config builder. This flips the configuration to allow .onion addresses as connection targets (by default, Arti’s address filter rejects onion addresses for safety, since Tor exits normally don’t handle them and Arti wants explicit opt-in). We then build the config and use it to create a bootstrapped TorClient with TorClient::create_bootstrapped. This asynchronous call launches the Tor protocol initialization: contacting directory authorities and relays, downloading the consensus, and establishing the required circuits. It returns once the Tor client is ready to make connections, and we print a message when bootstrapping is complete. (Note: if this function is called outside an async runtime, it will panic; here we are inside a Tokio runtime thanks to the #[tokio::main] macro, which is required since Arti needs an async runtime to operate.)

Binding the local listener: Next, we bind a TcpListener to LOCAL_LISTEN_ADDR (127.0.0.1:8080 in this case). Binding to 127.0.0.1 ensures the port is not accessible externally; this is intentional, since the tunnel traffic is unencrypted and meant for local use only. We then start an infinite loop to accept incoming connections. For each accepted client_socket (of type TcpStream) along with the client address, we log the connection and then spawn a new task to handle it.

We use tokio::spawn to allow concurrent handling of multiple connections. Inside the spawn, we clone the TorClient for use in the task. Cloning a TorClient gives us a new handle to the same underlying Tor instance, which is exactly what we want here: all tasks share the same Tor network connectivity (and thus can reuse circuits where appropriate), while still being able to open independent streams. Cloning is cheap and is the recommended way to share an Arti client across tasks, rather than creating a new Tor client for each connection. We pass the cloned Tor client, the client socket, and the onion address info into an async function handle_client, which performs the forwarding. If handle_client returns an error (for example, if the onion service connection fails), we catch it and log it, then the task ends.

Handling a connection (handle_client): This async function is where the core proxying happens. The steps here:

  • Tor connection: We call tor_client.connect((onion_domain.as_str(), onion_port)).await to open a TCP connection through Tor to the specified onion address and port. We provide the address as a tuple of (domain, port); Arti’s connect method accepts anything implementing IntoTorAddr, and a .onion string with a port is supported (because we enabled the onion feature). This returns a DataStream representing the remote TCP connection over Tor. If the onion service is online and our Tor client can reach it, the await will succeed and yield a tor_stream. If there is an issue (Tor not fully bootstrapped, invalid onion, service down, etc.), an error will be returned (and logged by our caller). Assuming success, we print that the Tor connection was established.

  • Bidirectional forwarding: Now we have two streams: client_socket (the local client connection) and tor_stream (the Tor-mediated connection to the onion service). We need to forward data between them. We use tokio::io::copy_bidirectional for this purpose. This function internally reads from each stream and writes to the other in a loop, handling both directions simultaneously. It returns only when EOF (end-of-stream) is reached on one side or an error occurs. The result is a tuple (bytes_to_onion, bytes_to_client) indicating how many bytes were sent in each direction. We print these counts for logging purposes, and to indicate the connection has closed. Using copy_bidirectional greatly simplifies the code compared to manually managing two directional pipes; it effectively ties the two sockets together and shuttles bytes as needed.

  • Cleanup: After copy_bidirectional completes, we perform some cleanup. We call tor_stream.shutdown().await to flush and close the Tor side gracefully (this ensures the remote side knows no more data is coming). We ignore any error from shutdown (using .ok()) since if the stream is already closed or an error happened, there's nothing more to do. The local client_socket will be closed when it is dropped at function exit (we could also explicitly shut it down, but dropping is sufficient here). A final log line notes that the client connection is closed. At this point, handle_client returns Ok, and the task ends.

Note: We wrapped the function returns in anyhow::Result for convenience, so we can use the ? operator on both I/O errors and Arti's errors without implementing custom error types. In a real application, you might want more nuanced error handling (for example, distinguishing Tor errors from socket errors), but for our tunnel's purposes logging and continuing to serve new connections is acceptable.
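
As a sketch of what more nuanced handling could look like, anyhow's downcasting lets us at least distinguish Arti errors from plain socket errors when logging. The helper name below is illustrative, not part of the code above:

```rust
// Sketch: classify a forwarding error for logging purposes by downcasting
// the anyhow::Error back to its concrete source type.
fn describe(err: &anyhow::Error) -> &'static str {
    if err.downcast_ref::<arti_client::Error>().is_some() {
        "Tor error (circuit or onion-service problem)"
    } else if err.downcast_ref::<std::io::Error>().is_some() {
        "local I/O error"
    } else {
        "other error"
    }
}
```

In the accept loop, the eprintln! could then prefix each error with describe(&err) so logs show at a glance whether the Tor side or the local side failed.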

Building and Running the Project

To build the project, ensure you have Rust installed (with the latest stable toolchain) and run:

cargo build --release

This will fetch the dependencies (Tokio, Arti, etc.) and compile the project. The release build is recommended for performance, as Arti and Tokio can be computationally intensive (cryptography, networking).

Before running, you may want to adjust the constants in the code:

  • LOCAL_LISTEN_ADDR if you want a different local port.
  • ONION_ADDR and ONION_PORT to point to your desired onion service. The example uses DuckDuckGo's read-only search service onion address on port 80.
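
If you'd rather not edit constants, here is a small sketch of reading these values from the command line instead. The parse_target helper is hypothetical, written for this illustration:

```rust
// Hypothetical helper: split a "host:port" target string such as
// "something.onion:80" into its host and port parts.
fn parse_target(s: &str) -> Option<(String, u16)> {
    let (host, port) = s.rsplit_once(':')?;
    let port: u16 = port.parse().ok()?;
    if host.is_empty() {
        return None;
    }
    Some((host.to_string(), port))
}

fn main() {
    // Usage sketch: tor_tcp_tunnel [listen_addr] [onion_host:port]
    let mut args = std::env::args().skip(1);
    let listen = args.next().unwrap_or_else(|| "127.0.0.1:8080".into());
    let target = args.next().unwrap_or_else(|| {
        "duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion:80".into()
    });
    let (host, port) = parse_target(&target).expect("target must be <host:port>");
    println!("listen={} onion_host={} onion_port={}", listen, host, port);
}
```

The parsed values would then replace the LOCAL_LISTEN_ADDR, ONION_ADDR, and ONION_PORT constants in the code above.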

Now run the tunnel server:

cargo run --release

On startup, you should see the Tor client begin bootstrapping. This may take some time (several seconds) as it connects to the Tor network; you might see log output, or just the printed message once bootstrapping is done. Once you see "Tunnel server listening on 127.0.0.1:8080", the local port is ready to accept connections.

Testing the tunnel: Open another terminal or use a browser/application to connect to the local endpoint. For example, since we forwarded DuckDuckGo's onion on port 80, you can try a simple HTTP request using curl:

curl -v --connect-timeout 10 http://127.0.0.1:8080/

This should result in an HTTP response (HTML content) from DuckDuckGo, fetched over Tor through the onion service. The terminal running the tunnel will log the connection and bytes transferred. Similarly, any TCP client pointed at 127.0.0.1:8080 will have its traffic relayed to duckduckgogg42xjoc72x3sjasowoarfbgcmvfimaftt6twagswzczad.onion:80.

If you were forwarding gRPC or another protocol, you would configure the gRPC client to connect to localhost:8080 (or whichever port you chose) without any other changes; the tunnel is protocol-agnostic.

Limitations and Considerations

While our implementation should work for basic use cases, there are several important considerations and limitations to be aware of:

  • Arti’s Experimental Onion Support: The Tor Project’s Arti client is still marked as experimental for onion services. It currently lacks certain hardening features (like vanguards for preventing guard discovery attacks) that the C Tor client uses. As a result, the Tor developers recommend not using Arti for security-critical or high-volume onion service usage yet. Our tunnel will function, but one should understand the security trade-offs. As Arti matures, this limitation should be addressed and onion support should become stable.

  • Performance: Arti is under active development. Its performance may not yet match the optimized C implementation of Tor. However, it is generally sufficient for moderate throughput. The tunnel will be as fast as Tor allows, but heavy workloads or many concurrent connections could be limited by Arti’s current performance characteristics.

  • Upkeep: The Tor network protocols evolve. The Arti developers note that you need to keep Arti up-to-date; older versions may stop working if the Tor network deprecates something. In fact, Arti will terminate the process if it detects that it has become obsolete due to protocol changes. Therefore, using a recent version of Arti (arti-client) is important, and you should plan to upgrade the dependency regularly to stay compatible with the Tor network.

  • No Authentication on Tunnel: Our tunnel does not implement any authentication or access control for local connections. We bind to localhost to mitigate risk (only local users can connect). If you were to bind to a public interface, you’d want to secure the tunnel (e.g., with firewall rules or by adding an authentication layer) to prevent arbitrary remote access to the onion service. Generally, keep the listener on 127.0.0.1 as we did, so that only local clients you trust can use the tunnel.

  • No TLS Termination: We explicitly do not perform TLS in this tunnel. If the protocol you are forwarding requires TLS (for example, you might be forwarding to an HTTPS onion service or a gRPC service with TLS), the TLS handshake will simply pass through the tunnel. This is fine if the client knows to use TLS. In some scenarios (like HTTP over onion), services often operate without TLS because the Tor circuit is already end-to-end encrypted. The bottom line is our tunnel is transport-layer only; it neither knows nor cares if the bytes are encrypted or not.

  • Single Destination: This server is set to forward to one specific onion address (hardcoded or configured at startup). All clients connecting to the local port will be tunneled to that same onion service. If you need a dynamic or multi-destination proxy, you would have to extend the design (for instance, by inspecting the first bytes to route based on a hostname, or running separate listeners for different onion targets, or building a SOCKS proxy interface for Tor). In our simplified use case, one tunnel corresponds to one destination.

  • Client Authorization (Hidden Services): Our implementation does not handle onion services that require client authorization (a feature where an onion service is restricted to clients with a key). Arti does support client authentication to onion services if provided with the right keys (via its configuration), but setting that up is beyond the scope of this article. If you attempt to connect to an onion service that uses client auth, the connection will fail unless the Arti client is configured appropriately beforehand.

  • Graceful Shutdown: For brevity, we haven't implemented a signal handler or graceful shutdown logic. If you stop the program, any active connections will drop. In a real deployment, you might catch a shutdown signal (CTRL+C) and instruct the listener to stop accepting new connections, etc. Also, Arti provides a TorClient::wait_for_stop() that can be used to asynchronously wait for it to shut down its background tasks. In our case, simply ending the process is fine.
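
A minimal sketch of such a shutdown-aware accept loop, using tokio::signal and select! (the spawn body is elided; it would call handle_client as in our main function):

```rust
use tokio::net::TcpListener;
use tokio::signal;

// Sketch: accept connections until CTRL+C arrives, then stop taking
// new ones. Already-spawned handler tasks keep running until their
// streams close.
async fn accept_loop(listener: TcpListener) -> anyhow::Result<()> {
    loop {
        tokio::select! {
            res = listener.accept() => {
                let (socket, addr) = res?;
                println!("Accepted connection from {}", addr);
                tokio::spawn(async move {
                    // handle_client(socket, ...) would go here, as in main.
                    let _ = socket;
                });
            }
            _ = signal::ctrl_c() => {
                println!("CTRL+C received; no longer accepting connections.");
                break;
            }
        }
    }
    Ok(())
}
```

Dropping the listener on break releases the local port; a fuller implementation might also track the handler tasks and wait for them to drain.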

Despite these considerations, the solution demonstrates a working pattern for integrating Tor into a Rust application. By leveraging Arti and Tokio, we achieve a concise yet powerful TCP tunnel. The core logic (connecting via Tor and forwarding bytes) is only a few lines of code, thanks to the high-level APIs in these crates.

Conclusion

We have built a complete Rust project that acts as a TCP tunnel from a local port to a Tor hidden service. The design uses a single embedded Tor client (Arti) to proxy all connections, removing the need for an external Tor process. We can handle arbitrary TCP protocols, making this tunnel quite flexible: for example, exposing a database or gRPC service that runs as an onion service to local clients, or accessing an onion-only web service via a normal browser through localhost.

Using modern Rust crates made the implementation straightforward. The arti-client crate provided an easy asynchronous interface to the Tor network, allowing us to connect to onion services with just a function call (after enabling the necessary feature and config). Meanwhile, Tokio’s networking and I/O utilities (like copy_bidirectional) handled the heavy lifting of asynchronous data transfer for us.

Further exploration: This basic tunnel could be extended or improved. For instance, one could integrate logging via tracing for more insight into Tor’s operation, add configuration files or command-line arguments for the addresses instead of hardcoding, or even build a UI or control protocol to manage tunnels at runtime. Additionally, as Arti develops, one might leverage its more advanced APIs (like stream isolation preferences, or running onion services via Arti). The crate also supports launching onion services (TorClient::launch_onion_service), meaning one could create the reverse, a local service exposed as an onion, using similar patterns.
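
For instance, stream isolation is already reachable today through TorClient::isolated_client; the sketch below wraps it in a hypothetical helper:

```rust
use arti_client::{DataStream, TorClient};
use tor_rtcompat::PreferredRuntime;

// Sketch: open a connection on its own isolation group. Streams made from
// an isolated client will not share circuits with streams from the
// original client, which can keep unrelated local clients unlinkable.
async fn connect_isolated(
    tor_client: &TorClient<PreferredRuntime>,
    host: &str,
    port: u16,
) -> anyhow::Result<DataStream> {
    let isolated = tor_client.isolated_client();
    Ok(isolated.connect((host, port)).await?)
}
```

In our tunnel, calling this per accepted connection (instead of reusing one clone) would trade circuit reuse for stronger separation between local clients.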

In summary, by combining Tokio and Arti, Rust developers can create secure networking tools that interface directly with the Tor network. This TCP tunnel server is a prime example, showing how relatively simple it is to anonymize and forward traffic to the dark web (onion services) entirely within a Rust program. With this foundation, integrating Tor connectivity into mobile wallets or exchanges such as Haveno should be well within reach.