KeyPears

Decentralized Diffie-Hellman Key Exchange System

TypeScript for the KeyPears MVP: Why We're Not Really Using Rust (Yet)

November 16, 2025 · KeyPears Team

Note: KeyPears is a work-in-progress open-source password manager and cryptocurrency wallet. The design decisions described here represent our development approach and may evolve before our official release.

Three weeks ago, we published a blog post titled "Building KeyPears with Rust: Backend Architecture and Blake3 Proof-of-Concept." We were excited about Rust's performance, memory safety, and type system. We had a working /api/blake3 endpoint. We had plans for rs-lib and rs-node packages.

Today, we're writing to tell you we've changed direction.

The current KeyPears codebase is almost entirely TypeScript. The Rust backend from our October post—rs-lib for cryptography and rs-node for the API server—was fully built and working. And then we deleted it. After several weeks of development with both implementations side by side, we concluded that TypeScript is the right architecture for our MVP.

This post explains why we made that decision.

What Actually Happened

Let's start with the facts. Here's what we did:

October 2025: Built a complete Rust backend

  • rs-lib: Full cryptography library (Blake3, ACB3, key derivation)
  • rs-node: Axum-based API server with OpenAPI via utoipa
  • Dual-server deployment: Node.js webapp proxying to Rust API
  • Everything worked as described in the October blog post

November 2025: Removed the entire Rust backend

  • Deleted rs-lib package completely
  • Deleted rs-node package completely
  • Rewrote cryptography in TypeScript using @webbuf WASM packages
  • Rewrote API server in TypeScript using orpc
  • Integrated API server directly into Express webapp

Current state:

  • Rust code: 33 lines total (just the minimal Tauri shell)
  • TypeScript code: ~5,400 lines (lib, api-server, tauri app, webapp)
  • All cryptography now TypeScript + WASM
  • All API endpoints now orpc (TypeScript RPC)
  • Single-server deployment (no more Node → Rust proxy)

This wasn't a case of the Rust backend "not working out." It worked perfectly. We had working Blake3 hashing, working key derivation, working API endpoints. We deleted it anyway because TypeScript simplifies development in ways that matter more than Rust's advantages for our MVP.

Why We Removed the Rust Backend

With both implementations working, we had to make a choice: continue maintaining two parallel implementations (Rust for crypto/API, TypeScript for UI) or consolidate on one language. We chose TypeScript for three critical reasons:

1. Better API tooling - orpc provides superior type safety compared to Axum + utoipa + openapi-generator

2. Better database tooling - Drizzle ORM supports both SQLite and PostgreSQL with the same API (no Rust equivalent exists)

3. Single-language simplicity - Avoiding context switching between Rust and TypeScript saves mental overhead on a side project

Here's what we learned by building and then removing the Rust backend:

1. orpc vs Axum + utoipa: Type Safety Without Codegen

We built the Rust API server with Axum and utoipa for OpenAPI generation. It worked, but the workflow had friction:

The Rust approach we actually used:

  1. Define routes in Rust with Axum
  2. Generate OpenAPI spec with utoipa macros
  3. Run openapi-generator to create TypeScript client
  4. Discover generated client doesn't match our TypeScript patterns
  5. Manually adjust generated code or fix Rust annotations
  6. Repeat on every schema change

The TypeScript approach (orpc) we switched to:

// Server: define the procedure
// (imports assume orpc's `os` builder and the @webbuf packages)
import { os } from "@orpc/server";
import { WebBuf } from "@webbuf/webbuf";
import { blake3Hash } from "@webbuf/blake3";

export const blake3Procedure = os
  .input(Blake3RequestSchema)
  .output(Blake3ResponseSchema)
  .handler(async ({ input }) => {
    const data = WebBuf.fromBase64(input.data);
    const hash = blake3Hash(data);
    return { hash: hash.buf.toHex() };
  });

// Client: call it with full type safety
// (`createClient` is our own client factory, not part of orpc)
const client = createClient({ url: "/api" });
const result = await client.blake3({ data: "..." });
// TypeScript knows `result.hash` is a string

Zero codegen. Complete type safety. Instant IDE autocomplete.

The difference is night and day. With orpc, the client knows every endpoint, every parameter type, every response shape—all inferred directly from the server code. Change the server? Client errors appear immediately in your IDE, not at runtime. No build step, no generated files, no version mismatches.

This is what made us delete working Rust code. The Axum + utoipa + codegen workflow worked, but orpc's zero-codegen type safety is so much better that maintaining the Rust version wasn't worth it.
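The inference orpc relies on is ordinary TypeScript generics, not magic. A dependency-free toy version makes the mechanism concrete — note that `defineProcedure` and `makeClient` are illustrative names of ours, not orpc's actual API:

```typescript
// A toy sketch of zero-codegen RPC typing. The point: the client's types
// are inferred from the router's type, so no generated files can drift.
type Procedure<I, O> = { handler: (input: I) => O };

function defineProcedure<I, O>(handler: (input: I) => O): Procedure<I, O> {
  return { handler };
}

// The "server": a plain object of procedures.
const router = {
  blake3: defineProcedure((input: { data: string }) => ({
    hash: `hash-of-${input.data}`, // stand-in for real hashing
  })),
};

// The "client" type is derived from the router type alone; in a real
// framework this crosses the network boundary via `typeof router`.
type Client<R> = {
  [K in keyof R]: R[K] extends Procedure<infer I, infer O>
    ? (input: I) => O
    : never;
};

function makeClient<R extends Record<string, Procedure<any, any>>>(
  r: R,
): Client<R> {
  const client: any = {};
  for (const key of Object.keys(r)) {
    client[key] = (input: any) => r[key].handler(input);
  }
  return client;
}

const client = makeClient(router);
const result = client.blake3({ data: "abc" });
// TypeScript knows `result.hash` is a string; change the handler's return
// type and this line becomes a compile error, not a runtime surprise.
console.log(result.hash); // → "hash-of-abc"
```

Change the handler's return shape and every call site lights up in the IDE immediately — that is the property orpc gives us across an HTTP boundary.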

2. No Rust ORM Supports Both SQLite and PostgreSQL Well

KeyPears needs two databases:

  • SQLite in the Tauri desktop app (client-side storage)
  • PostgreSQL on the server (multi-user vault synchronization)

In TypeScript, Drizzle ORM handles both with the same API:

// Client (SQLite)
import { drizzle } from "drizzle-orm/sqlite-proxy";
const db = drizzle(/* Tauri SQL plugin */);

// Server (PostgreSQL)
import { drizzle } from "drizzle-orm/node-postgres";
const db = drizzle(/* pg connection */);

// The same column API is shared across dialects, so the schema
// definitions stay in lockstep (sqliteTable here, pgTable on the server)
export const TableVault = sqliteTable("vault", {
  id: text("id").primaryKey(),
  name: text("name").notNull(),
  // ...
});

We looked for Rust equivalents. Diesel supports Postgres, MySQL, and SQLite, but writing code that targets more than one backend at once is awkward. SeaORM is newer and more flexible, but in our evaluation it still meant maintaining per-database differences by hand. Neither provides the unified, type-safe query builder that Drizzle gives us.

When you're building a sync protocol where the client and server need matching schemas, having one ORM that works everywhere is critical. This was the second reason we deleted the Rust backend—we would have needed two separate database implementations (one for Tauri's SQLite, one for the server's Postgres) with manual work to keep them in sync.

3. Single Language Reduces Mental Overhead

The final reason we removed the Rust backend: context switching costs.

With the dual-language architecture, every feature required:

  • Writing Rust for crypto/API logic
  • Writing TypeScript for UI/database logic
  • Translating between Rust and TypeScript idioms
  • Maintaining two build systems (Cargo + pnpm)
  • Debugging across language boundaries
  • Different testing frameworks (Cargo test + Vitest)

For a side project where development happens in short evening sessions, this mental overhead compounds. You spend the first 10 minutes remembering whether you're writing Rust or TypeScript, and the last 10 minutes before bed context switching back.

With TypeScript-only:

  • One type system
  • One package manager
  • One testing framework
  • One set of idioms
  • Hot reload in ~100ms (vs 3-10s Rust recompile)

The productivity gain isn't just about compile times. It's about flow state. When you're not context switching between languages, you write more code and make fewer mistakes.

4. Deployment Simplification

The Rust backend also complicated deployment:

With Rust (October architecture):

  • Dual-server: Node.js webapp (port 4273) + Rust API (port 4274)
  • HTTP proxy from webapp to Rust server
  • Docker image: Node + Rust toolchain + cross-compilation
  • Larger image size (~500MB with Rust)
  • More complex service coordination

With TypeScript-only (current):

  • Single Express server (port 4273)
  • orpc API mounted directly at /api
  • Docker image: Just Node.js (~200MB)
  • Simpler deployment (one service, one port)
  • No HTTP proxy overhead

Removing the Rust backend made deployment cleaner and faster.

What We Didn't Lose: Rust Cryptography via WASM

Here's the critical insight that made removing the Rust backend viable: We still use Rust for cryptography. We just use it through WebAssembly instead of writing it ourselves.

When we deleted rs-lib (our Rust cryptography library), we didn't rewrite crypto in pure JavaScript. We switched to the @webbuf packages, which compile Rust cryptography to WebAssembly:

  • @webbuf/blake3: Blake3 hashing (Rust → WASM)
  • @webbuf/acb3: AES-256-CBC + Blake3-MAC (Rust → WASM)
  • @webbuf/webbuf: Binary data utilities (Rust → WASM)
  • @webbuf/fixedbuf: Fixed-size buffers (Rust → WASM)

With these packages, we get:

  • ✅ Rust's memory safety (WASM sandbox)
  • ✅ Rust's performance (near-native speed)
  • ✅ Cross-platform consistency (works in Node, browsers, Tauri)
  • ✅ TypeScript ergonomics (native Uint8Array integration)

Here's our complete three-tier key derivation system in TypeScript:

// Imports assume the @webbuf packages listed above
import { WebBuf } from "@webbuf/webbuf";
import type { FixedBuf } from "@webbuf/fixedbuf";
import { blake3Mac } from "@webbuf/blake3";

// 100,000 rounds of Blake3-based PBKDF
export function blake3Pbkdf(
  password: string | WebBuf,
  salt: FixedBuf<32>,
  rounds: number = 100_000,
): FixedBuf<32> {
  const passwordBuf = typeof password === "string"
    ? WebBuf.fromUtf8(password)
    : password;

  let result = blake3Mac(salt, passwordBuf);
  for (let i = 1; i < rounds; i++) {
    result = blake3Mac(salt, result.buf);
  }
  return result;
}

// Derive password key from user's master password
export function derivePasswordKey(password: string): FixedBuf<32> {
  const salt = derivePasswordSalt(password);
  return blake3Pbkdf(password, salt, 100_000);
}

// Derive encryption key (for vault data)
export function deriveEncryptionKey(passwordKey: FixedBuf<32>): FixedBuf<32> {
  const salt = deriveEncryptionSalt();
  return blake3Pbkdf(passwordKey.buf, salt, 100_000);
}

// Derive login key (sent to server)
export function deriveLoginKey(passwordKey: FixedBuf<32>): FixedBuf<32> {
  const salt = deriveLoginSalt();
  return blake3Pbkdf(passwordKey.buf, salt, 100_000);
}

This is production-ready cryptography. It's type-safe. It's fast (200,000 Blake3 operations complete in milliseconds). And the actual hashing happens in Rust-compiled WASM—the same Rust cryptography we had in rs-lib, just packaged differently.

We didn't abandon Rust's security properties. We just stopped maintaining our own Rust codebase. The cryptography is still Rust. It's just compiled to WASM and consumed as TypeScript packages, which eliminates the build complexity of a dual-language project.
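The iterate-a-MAC pattern behind blake3Pbkdf can be sketched without the @webbuf packages at all. The following uses Node's built-in HMAC-SHA-256 purely as a stand-in for Blake3-MAC, and the hard-coded salts are placeholders for the real derive*Salt helpers — it illustrates the three-tier structure, not the actual implementation:

```typescript
// Sketch of the blake3Pbkdf iteration pattern, with HMAC-SHA-256
// standing in for Blake3-MAC. Illustration only; the real code
// uses @webbuf/blake3's keyed hashing.
import { createHmac } from "node:crypto";

function pbkdfSketch(password: Buffer, salt: Buffer, rounds: number): Buffer {
  // First round MACs the password; every later round MACs the previous output.
  let result = createHmac("sha256", salt).update(password).digest();
  for (let i = 1; i < rounds; i++) {
    result = createHmac("sha256", salt).update(result).digest();
  }
  return result; // 32 bytes, like FixedBuf<32>
}

// Three tiers: one slow step from the password, then one more slow step
// per purpose, so the login key (which the server sees) cannot be turned
// back into the encryption key (which never leaves the client).
const passwordKey = pbkdfSketch(Buffer.from("hunter2"), Buffer.from("pw-salt"), 1_000);
const encryptionKey = pbkdfSketch(passwordKey, Buffer.from("enc-salt"), 1_000);
const loginKey = pbkdfSketch(passwordKey, Buffer.from("login-salt"), 1_000);
```

The separation is the point: a compromised server holds only material derived on a branch that never touches the vault's encryption key.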

The Architecture That Emerged

Here's what the current KeyPears stack looks like:

Package Structure

@keypears/lib (TypeScript)
├── Blake3 hashing via @webbuf/blake3 (Rust→WASM)
├── ACB3 encryption via @webbuf/acb3 (Rust→WASM)
├── Three-tier key derivation (100k rounds each)
├── Password generation with entropy calculation
└── Zod schemas for validation

@keypears/api-server (TypeScript)
├── orpc router with type-safe procedures
├── Blake3 endpoint (working proof-of-concept)
├── Drizzle ORM + PostgreSQL schema (ready for server DB)
└── Client factory for end-to-end type safety

keypears-tauri (TypeScript + Rust shell)
├── Tauri 2.0 app (33 lines of Rust)
├── Full vault management UI (~5,020 lines TypeScript)
├── SQLite with Drizzle ORM
├── React Router 7 for navigation
├── Shadcn components + Catppuccin theme
└── Calls production API server for crypto endpoints

@keypears/webapp (TypeScript)
├── Production website + blog
├── Integrated API server (orpc mounted at /api)
├── Single Express server on port 4273
└── Deployed on AWS Fargate
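The "entropy calculation" in @keypears/lib refers to the standard estimate for randomly generated passwords: bits of entropy = length × log2(alphabet size). A minimal sketch (the function name is ours, not necessarily what the library exports):

```typescript
// Standard entropy estimate for a password drawn uniformly at random:
// each character contributes log2(alphabetSize) bits.
function passwordEntropyBits(length: number, alphabetSize: number): number {
  return length * Math.log2(alphabetSize);
}

// A 16-character password over the 94 printable ASCII characters:
const bits = passwordEntropyBits(16, 94);
console.log(bits.toFixed(1)); // → "104.9"
```

Note this only holds for generated passwords; human-chosen passwords have far less entropy than the formula suggests.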

What Works Today

The Tauri app has a complete vault management workflow:

  • ✅ Create vault with password
  • ✅ Unlock vault with password verification
  • ✅ Store passwords with encryption
  • ✅ Generate secure passwords
  • ✅ SQLite persistence via Drizzle
  • ✅ Three-tier key derivation working
  • ✅ Vault encryption with ACB3
  • ✅ Multi-step wizards (name → password → confirm → success)
  • ✅ Test page calling production Blake3 API

The webapp has:

  • ✅ Landing page with blog system
  • ✅ Working /api/blake3 endpoint
  • ✅ orpc integrated with Express
  • ✅ Docker deployment to AWS Fargate
  • ✅ Canonical URL redirects
  • ✅ Blog posts with TOML frontmatter + Markdown

What's Not Built (Intentionally Deferred)

We haven't built server-side features yet because the MVP is local-first:

  • ⏸️ User authentication (login/logout)
  • ⏸️ Vault synchronization protocol
  • ⏸️ Multi-user server support
  • ⏸️ Diffie-Hellman key exchange across domains
  • ⏸️ Public key infrastructure

These are v2 features. The MVP is a password manager that works 100% offline in the Tauri app. The server is only needed for multi-device sync, which we'll add after validating the core product.

The TypeScript Ecosystem Has Caught Up

Five years ago, this blog post would have been different. Rust was the only way to get type-safe backends with good performance. But the TypeScript ecosystem has evolved dramatically:

orpc gives us end-to-end type safety that Rust can't match (no codegen, instant IDE feedback)

Drizzle provides type-safe SQL for both SQLite and PostgreSQL (no Rust ORM does this well)

WASM lets us use Rust crypto without writing Rust applications (best of both worlds)

Vitest gives us fast ESM-native testing (simpler than Cargo's test framework for web apps)

React Router 7 provides SSR + type-safe routing (no Rust equivalent)

For building web applications with cryptography, TypeScript + WASM is now a better choice than native Rust. You get comparable performance, better tooling, and a much larger ecosystem of web-focused libraries.

When Would We Use Rust?

This isn't a rejection of Rust. It's a recognition that Rust solves the wrong problems for our MVP.

Rust makes sense when you need:

  1. Extreme performance - Handling 10k+ concurrent WebSocket connections
  2. Embedded systems - Running on IoT devices with 64MB of RAM
  3. Custom crypto - Implementing novel cryptographic algorithms
  4. Kernel-level code - Writing device drivers or OS components

KeyPears doesn't need any of these yet. Our server will handle dozens of concurrent users, not thousands. Our desktop app runs on modern laptops with gigabytes of RAM. Our cryptography comes from well-tested libraries (Blake3, AES-256). We're building a user-facing application, not infrastructure.

Later, Rust might make sense for:

  • High-throughput sync server (if we grow to enterprise scale)
  • Mobile performance optimization (if WASM proves too slow)
  • Custom Diffie-Hellman implementation (if existing libraries don't fit)

But even then, we'd keep the API layer in TypeScript (orpc is too good to give up) and only move performance-critical sync logic to Rust via FFI.

The Right Tool for the Right Job

Software architecture isn't about using the "best" language—it's about using the right tool for the constraints you're facing.

Our constraints:

  • Side project timeline: Limited evening/weekend hours
  • Solo developer: No team to split Rust vs TypeScript work
  • MVP goal: Prove the concept before scaling
  • Rapid iteration: Features change based on user feedback

For these constraints, TypeScript is objectively better:

  • Faster iteration (100ms hot reload vs 5s compile)
  • Single mental model (no context switching)
  • Richer ecosystem (orpc, Drizzle, React Router)
  • Lower cognitive overhead (one type system, one package manager)

We still get Rust's security properties through WASM. We still get type safety through TypeScript. We still get performance (crypto is WASM, API is fast enough).

What We Learned

1. Working code isn't always the right code

The Rust backend worked perfectly. Blake3 hashing worked. Key derivation worked. The API server worked. We shipped it to production. But "working" doesn't mean "optimal for the constraints." When we evaluated developer experience vs performance gains, TypeScript won decisively for our MVP.

2. Ecosystem maturity matters more than language performance

The Rust language is excellent. But for web applications, the TypeScript ecosystem is years ahead. orpc's zero-codegen type safety is revolutionary. Drizzle's unified SQLite + Postgres support is essential for our architecture. These don't exist in Rust.

3. WASM changes the game

Ten years ago, you had to choose: safe languages (Ruby, Python, JavaScript) or fast languages (C, C++, Rust). Today, you can write your performance-critical code in Rust, compile it to WASM, and use it from any language. This is what made deleting our Rust backend viable—we didn't lose Rust's performance, we just stopped writing it ourselves.

4. Deleting working code is liberating

We spent weeks building rs-lib and rs-node. They worked. They were deployed. And we deleted them anyway because the TypeScript alternative was better for our constraints. This felt wrong at first—"But we already built it!"—but the productivity gain from consolidating on one language was immediate and substantial.

5. Side project constraints are different

If KeyPears were a VC-funded startup with a team of 5 engineers, we'd keep the Rust backend. Someone could own the Rust API while others work on the TypeScript UI. But for a solo side project with limited evening/weekend hours, the mental overhead of context switching between Rust and TypeScript was too high. One language means more velocity.

The Current Priority: Shipping the MVP

With this architecture decision settled, we're focused on shipping a working product:

Next milestones:

  1. Server vault CRUD - Create/read/update vaults via API
  2. User authentication - Session-based login with hashed login key
  3. Basic sync protocol - Last-write-wins synchronization
  4. Mobile Tauri build - iOS + Android apps
  5. Import/export - Backup and restore vaults

All of this will be TypeScript. The API server will use orpc. The database will use Drizzle (Postgres on server, SQLite on clients). The cryptography will remain Rust-compiled WASM.
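Milestone 3's last-write-wins synchronization fits in a few lines. The record shape and function name below are illustrative, not the real protocol:

```typescript
// Minimal last-write-wins merge: for each record id, keep whichever
// side was written most recently. Record shape is illustrative.
interface VaultRecord {
  id: string;
  payload: string;   // an encrypted blob in the real system
  updatedAt: number; // ms since epoch
}

function mergeLastWriteWins(
  local: VaultRecord[],
  remote: VaultRecord[],
): VaultRecord[] {
  const merged = new Map<string, VaultRecord>();
  for (const rec of local) merged.set(rec.id, rec);
  for (const rec of remote) {
    const existing = merged.get(rec.id);
    if (!existing || rec.updatedAt >= existing.updatedAt) {
      merged.set(rec.id, rec); // remote wins on newer timestamp (or tie)
    }
  }
  return [...merged.values()];
}
```

Tie-breaking and clock skew are exactly the parts a real protocol has to pin down; "ties go to remote" is one arbitrary choice, made here only to keep the sketch deterministic.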

And if we're wrong—if we hit performance walls or need Rust for specific features—we can always add Rust modules later. The architecture supports it. But we're not starting there.

Try It Yourself

The Blake3 endpoint is live:

curl -X POST https://keypears.com/api/blake3 \
  -H "Content-Type: application/json" \
  -d '{"data": "SGVsbG8sIEtleVBlYXJzIQ=="}'

That data field is base64-encoded "Hello, KeyPears!". The API will return the Blake3 hash computed by Rust (via WASM) running in Node.js on our TypeScript server.
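You can reproduce the request payload with Node's built-in Buffer:

```typescript
// Verify the base64 payload sent to /api/blake3.
const encoded = Buffer.from("Hello, KeyPears!", "utf8").toString("base64");
console.log(encoded); // → "SGVsbG8sIEtleVBlYXJzIQ=="

// And decode it back to confirm the round trip:
console.log(Buffer.from(encoded, "base64").toString("utf8")); // → "Hello, KeyPears!"
```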

It's a small proof-of-concept, but it validates the entire architecture: TypeScript for the API layer, Rust-via-WASM for cryptography, type safety end-to-end.

Conclusion

We built a Rust backend. It worked. We deployed it. And then we deleted it.

This wasn't a failure of Rust or a mistake in architecture. It was a deliberate choice to optimize for developer velocity over theoretical performance at this stage of the project. The Rust backend would have been fine for production, but the TypeScript backend is better for rapid MVP development.

We're building KeyPears with TypeScript + WASM, which gives us Rust's security properties (via WASM crypto) without the complexity of maintaining a dual-language codebase.

For a solo side project with MVP goals, this is the right architecture. If we scale to millions of users and need extreme performance, we can always bring Rust back for specific hot paths. But we're not starting there.

Rust is an incredible language. We proved that by building a working backend with it. But for this project, at this stage, TypeScript is the pragmatic choice—and we're comfortable deleting working Rust code to prove it.

We'll keep sharing our progress, both the wins and the pivots.

More updates coming soon. Next post: Implementing the vault synchronization protocol.