Building Type-Safe APIs with Next.js 15 and Drizzle ORM
Learn how to build fully type-safe API routes in Next.js 15 using Drizzle ORM, from schema definition to runtime validation.
The Vector is a personal, non-profit archive exploring the shift from writing code to directing intent. In an era where AI handles the how, we must master the why — structural integrity, trade-offs, and architectural decisions that still need a human behind the wheel.
Code is becoming a commodity. As syntax becomes automated, the value of the software engineer shifts upward. It moves away from the lines of code and toward the structural integrity of the system.
A vector is magnitude and direction. In software, magnitude is the raw power of our tools — the scale of AI-generated output. Direction is the human element: the intentionality behind every component, every interface, and every data flow.
This space serves as a personal repository of thoughts on high-level system design, the ethics of automated development, and the enduring importance of architectural intent.
System patterns
Human vs. AI architecture
The tech stack
Tools that amplify intent
Meta-development
The evolving senior engineer
AI-friendly
llms.txt and structured data
Handpicked entries worth reading
A hands-on guide to learning Rust, written specifically for developers coming from the JavaScript/TypeScript ecosystem.
Latest from the archive
Moving from raw PHI to a HIPAA-compliant dataset requires more than just dropping columns. In this guide, we explore the SQL implementation of Safe Harbor de-identification, the nuances of salted SHA-256 hashing, and the critical architectural differences between De-identified and Limited Data Sets.
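The salted-hashing step the entry describes can be sketched in a few lines of Python. This is a minimal illustration, not the article's actual implementation; the salt value and identifier field are invented.

```python
import hashlib

# Secret salt kept outside the dataset (illustrative value). Without it,
# short identifiers like MRNs could be reversed with precomputed tables.
SALT = b"replace-with-a-secret-salt"

def pseudonymize(mrn: str) -> str:
    """Return a salted SHA-256 digest of a patient identifier."""
    return hashlib.sha256(SALT + mrn.encode("utf-8")).hexdigest()
```

Because the same input always yields the same token, records can still be joined across tables after de-identification, while the raw identifier never appears in the released data.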
Mapping geographic data is more than just shading regions on a screen; it's about uncovering spatial patterns that standard charts often hide. Whether you are leveraging the speed of Tableau, the precision of Python, or the open-source scale of Apache Superset, the success of your map depends on one critical rule: normalize your data. Discover how to choose the right tools and design principles to ensure your choropleth maps tell the true story behind your data.
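The normalization rule can be seen in a few lines. The region names and figures below are invented for illustration.

```python
# Raw counts mostly mirror population size; converting to a rate per
# 100,000 residents reveals the actual spatial pattern a choropleth
# should show. (Illustrative data.)
regions = {
    "Alpha": {"cases": 5000, "population": 1_000_000},
    "Beta": {"cases": 900, "population": 90_000},
}

def rate_per_100k(cases: int, population: int) -> float:
    return cases / population * 100_000

rates = {
    name: rate_per_100k(d["cases"], d["population"])
    for name, d in regions.items()
}
# Alpha has far more raw cases, yet Beta's per-capita rate is twice as high,
# so a map shaded by raw counts would invert the real story.
```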
The BETWEEN operator is one of the most common tools in a SQL developer's kit, yet it is often the source of subtle, production-breaking bugs. From the "inclusive" trap with DATETIME values to handling messy strings with TRIM and LEFT, this article explores how to use BETWEEN effectively without sacrificing performance or accuracy.
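The DATETIME trap mentioned above can be reproduced with SQLite's in-memory engine. This is a sketch with made-up table and column names, not code from the article.

```python
import sqlite3

# BETWEEN is inclusive of both endpoints, but a bare date upper bound
# ('2024-01-31') sorts before every timestamp on that day, so the last
# day's rows are silently dropped. A half-open range avoids the trap.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, ts TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, "2024-01-15 10:00:00"), (2, "2024-01-31 09:30:00")],
)

between_count = conn.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE ts BETWEEN '2024-01-01' AND '2024-01-31'"
).fetchone()[0]  # misses order 2 on Jan 31

half_open_count = conn.execute(
    "SELECT COUNT(*) FROM orders "
    "WHERE ts >= '2024-01-01' AND ts < '2024-02-01'"
).fetchone()[0]  # catches both rows
```

The `>= start AND < next_period` pattern also stays correct regardless of the column's time precision, which is why it is generally preferred for timestamp ranges.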
In a world of infinite code, the bottleneck is no longer production—it’s discernment. Here is how a syntactically perfect tracking pixel turned into a self-inflicted DDoS, and what it teaches us about the shifting role of the architect.
As clinical datasets grow in complexity, the traditional tools of data science are hitting a performance wall. This article explores the architectural "why" behind Polars, from its Rust-based parallelism to its intelligent query optimizer, and explains how these features solve the unique data integrity and memory challenges faced by health data scientists today.
In the age of agents, security stops being about what software knows and becomes about what it can do. Tool-using systems don't just answer questions: they browse internal docs, call APIs, open PRs, trigger CI, and message people in Slack, operating like a junior engineer with superpowers once you've handed them OAuth scopes and tokens. That collapses the gap between "thinking" and "acting," so everyday inputs like emails, tickets, and random webpages can quietly become control channels (hello, prompt injection and its indirect cousin). The new attack surface isn't just models; it's permissions, connectors, skills and plugins, secrets in configs and logs, and workflow-based lateral movement. If we want to use agents safely, we can't rely on "be careful" or better prompts. We need agent-specific controls: least-privilege tool access, short-lived credentials, policy gates before and after tool calls, sandboxing with egress controls, DLP, and strong provenance and audit trails so every action is attributable and reviewable.
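A pre-call policy gate of the kind described above can be sketched as a thin wrapper around tool invocations. The class and tool names here are hypothetical, chosen only to illustrate the pattern of allowlisting plus auditing.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyGate:
    """Check every tool call against an allowlist and record the decision."""
    allowed_tools: set[str]
    audit_log: list[tuple[str, bool]] = field(default_factory=list)

    def call(self, tool: str, fn, *args, **kwargs):
        permitted = tool in self.allowed_tools
        # Provenance: every attempt, allowed or not, is attributable later.
        self.audit_log.append((tool, permitted))
        if not permitted:
            raise PermissionError(f"tool '{tool}' is outside this agent's scope")
        return fn(*args, **kwargs)

# Least privilege: this agent may search docs but nothing else.
gate = PolicyGate(allowed_tools={"search_docs"})
result = gate.call("search_docs", lambda q: f"results for {q}", "rotation policy")
```

Real deployments layer more on top (argument inspection, post-call DLP checks, short-lived credentials issued per call), but the shape is the same: no tool runs without passing through an explicit, logged policy decision.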
Every entry in this archive is also available as structured data — JSON API, llms.txt, and RSS — so humans and AI agents can discover and consume content efficiently.