How to SHIP Code in 2026 • Fast Software Development in the Age of AI

Three critical practices for shipping fast in 2026—stacked diffs for faster reviews, agentic workflows beyond code, and code-defined infrastructure—plus the tech stack that makes it work.

sashkode

Shipping fast in 2026 isn't about adopting every new framework or letting AI write random code. It's about structure, leverage, and removing friction across your entire product surface—not just application code.

This article breaks down three critical practices for velocity: stacked diffs for PR reviews, agentic workflows beyond engineering, and code-defined infrastructure. Watch the video for the full discussion; this article provides the implementation details and links to real code.

Stacked Diffs: The UX for AI-Generated Code

Traditional pull request workflows create bottlenecks. You branch from main, write code, wait for review, merge, then branch again for the next change. This encourages large PRs (to avoid blocking), which take longer to review, which creates more blocking.

Stacked diffs invert this: create small, incremental PRs that build on each other without waiting for reviews to complete.

How Stacked Diffs Work

Instead of this:

main → feature-branch (wait for review) → main → next-feature

You do this:

main → PR1 (small change, under review)

       PR2 (builds on PR1, under review)

       PR3 (builds on PR2)

Each PR contains a single logical change—one file, one endpoint, one component. Developers aren't blocked; they keep building while reviews happen concurrently.

Why This Matters More in 2026

AI coding tools like Cursor and GitHub Copilot generate large amounts of code quickly. Without structure, you end up with massive PRs that are hard to review and slow to merge.

Stacked diffs force granularity:

  1. AI creates feature flag → PR1 (one file)
  2. AI creates API endpoint → PR2 (builds on PR1)
  3. AI creates UI component → PR3 (builds on PR2)

Each PR is reviewable in minutes, not hours.

Tooling: Graphite and GitHub

Graphite pioneered the stacked diff UX—handling the git complexity (rebasing dependent branches, syncing with main) behind a clean interface.

Graphite was acquired by Cursor in December 2024, signaling that stacked diffs and AI coding are converging. The future: your IDE and your PR review workflow are one system.

GitHub is also building native stacked diffs, so this pattern will become mainstream regardless of tooling choice.

Feature Flags and Small PRs

Stacked diffs pair perfectly with feature flags. Ship incomplete features behind flags:

app/api/new-endpoint/route.ts
import { featureFlags } from "~/config/feature-flags";

export async function POST(request: Request) {
  if (!featureFlags.newFeatureEnabled) {
    return new Response("Feature not enabled", { status: 404 });
  }

  // New feature code
  return new Response("Success");
}

PR1 adds the flag. PR2 adds the endpoint (hidden). PR3 adds the UI (hidden). PR4 flips the flag. Each PR is small, safe, and reviewable.
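The `~/config/feature-flags` module imported above isn't shown in the article; a minimal sketch of what it might look like, assuming env-var-backed flags (the flag name and parsing rules here are illustrative, not the repo's actual implementation):

```typescript
// Minimal sketch of a feature-flag module like the `~/config/feature-flags`
// import above. Flags default to off; an env var such as
// NEW_FEATURE_ENABLED=true flips one on. Names are illustrative.
function envFlag(name: string, fallback = false): boolean {
  const raw = process.env[name];
  if (raw === undefined) return fallback;
  return ["1", "true", "on"].includes(raw.toLowerCase());
}

export const featureFlags = {
  newFeatureEnabled: envFlag("NEW_FEATURE_ENABLED"),
} as const;
```

Defaulting to off is what makes PR2 and PR3 safe to merge before the feature is complete.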

Agentic Workflows Beyond Engineers

In 2025, developers got AI-powered coding assistants. In 2026, the rest of the product team needs them too.

The Gap

Developers use agentic tools daily:

  • Cursor for code generation
  • GitHub Copilot for autocomplete
  • CodeRabbit for PR reviews

But product managers, designers, and project managers still work in tools that don't understand the codebase. They can't:

  • Query the codebase for feasibility
  • Generate specs that reference actual implementations
  • Update roadmaps based on code changes

Where This Is Heading

AI agents for non-technical teammates need to:

  1. Understand the codebase — query APIs, data models, feature flags
  2. Integrate with their tools — Linear, ClickUp, Figma, Notion
  3. Operate with guardrails — suggest changes, not break things

Tools like ClickUp Agents and Linear's AI features are early examples, but they're not yet code-aware in the way developer tools are.

The goal: a product manager can ask "Is the new checkout flow ready to ship?" and the agent checks feature flags, test coverage, and PR status—then updates Linear automatically.
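The shape of that readiness check can be sketched in a few lines. Everything here is hypothetical — the signal names, and how an agent would gather them from feature flags, CI, and PR status, are assumptions for illustration:

```typescript
// Hypothetical sketch of the "is it ready to ship?" aggregation an agent
// would run. The signals and their sources (flags, CI, PR status) are
// assumptions; a real agent would fetch them from the codebase and GitHub.
interface ReadinessSignals {
  flagExists: boolean;   // feature flag is defined in code
  testsPassing: boolean; // CI status for the stack
  openPRs: number;       // PRs in the stack still under review
}

function checkReadiness(s: ReadinessSignals): { ready: boolean; blockers: string[] } {
  const blockers: string[] = [];
  if (!s.flagExists) blockers.push("feature flag missing");
  if (!s.testsPassing) blockers.push("tests failing");
  if (s.openPRs > 0) blockers.push(`${s.openPRs} PR(s) still open`);
  return { ready: blockers.length === 0, blockers };
}
```

The agent's value is in collecting those signals automatically and posting the blockers back into Linear, rather than the aggregation logic itself.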

Durable Workflows for Product Automation

This repository uses Vercel Workflow Dev Kit to automate video-to-article generation. The same pattern applies to product workflows:

src/features/videos/server/video-to-article.workflow.ts
export async function videoToArticleWorkflow(videoId: string) {
  "use workflow"; 

  // Step 1: Fetch video metadata
  const metadata = await fetchVideoMetadata(videoId);

  // Step 2: Fetch transcript (with retry logic)
  let transcript = await fetchTranscript(videoId);

  if (!transcript) {
    await sleep("2h"); // Durable execution: workflow pauses and resumes
    transcript = await fetchTranscript(videoId);
  }

  // Step 3: Create GitHub issue assigned to Copilot agent
  const issueUrl = await createCopilotIssue({ videoId, metadata, transcript });

  return { success: true, issueUrl };
}

See the full workflow implementation.

The "use workflow" directive tells the framework this function needs durable execution—the ability to pause, retry, and resume across server restarts. No manual queue management, no state tracking.
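The core trick behind durable execution can be illustrated with a toy: persist each step's result, and a resumed run replays the function body while skipping steps that already completed. This is the general pattern, not the Workflow SDK's actual internals:

```typescript
// Toy illustration of durable execution. Each step's result is persisted
// under a name, so a crashed-and-resumed run re-executes the workflow body
// but skips steps whose results are already stored. A real engine persists
// to durable storage, not an in-memory Map.
type StepStore = Map<string, unknown>;

async function step<T>(store: StepStore, name: string, fn: () => Promise<T>): Promise<T> {
  if (store.has(name)) return store.get(name) as T; // already done: replay from storage
  const result = await fn();
  store.set(name, result); // persist before moving on
  return result;
}

// Running this twice with the same store simulates a resume: the body runs
// again, but each side effect fires only once.
async function demo(store: StepStore, log: string[]) {
  await step(store, "fetch", async () => { log.push("fetched"); return 42; });
  await step(store, "publish", async () => { log.push("published"); return true; });
}
```

The `sleep("2h")` call in the workflow above relies on exactly this: the engine checkpoints progress, schedules a wake-up, and resumes from the next uncompleted step.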

This pattern extends to product workflows:

  • Spec creation — agent drafts spec, pauses for human review, then creates tickets
  • Feature validation — agent checks code, runs tests, updates roadmap
  • Documentation — agent watches PRs, generates docs, publishes on merge

The key: workflows that integrate tools (code + product + docs) without requiring developers to build custom integrations.

Code-Defined Infrastructure

Infrastructure-as-code (Terraform, Pulumi, SST) has a fundamental problem: your infrastructure intent lives separately from your application code.

Code-defined infrastructure flips this: your application code declares what infrastructure it needs, and the framework provisions it automatically.

The Problem with IaC

Traditional IaC requires you to:

  1. Write application code that uses a queue
  2. Write separate Terraform/Pulumi config for that queue
  3. Keep both files synchronized
  4. Manage state across environments

The application code is the source of truth for what infrastructure you need. Config files are a translation layer that introduces drift.

How Code-Defined Infrastructure Works

Instead of config files, the framework analyzes your code at build time:

  1. Parses application code to discover infrastructure needs
  2. Extracts intent (what resources are required)
  3. Generates provisioning config automatically
  4. Provisions resources for that deployment

This is already how Next.js works. When you write:

app/api/users/route.ts
export async function GET() {
  // This is a dynamic route
  return Response.json({ users: [] });
}

Next.js knows this needs a serverless function. Vercel provisions it automatically. No config file.

Vercel Workflow: Library-Defined Infrastructure

The Workflow SDK demonstrates library-defined infrastructure. The "use workflow" directive declares infrastructure needs:

export async function processOrderWorkflow(orderId: string) {
  "use workflow"; // Declares: I need durable execution

  const order = await fetchOrder(orderId);
  await sleep("1d"); // Declares: I need scheduled execution
  await sendReminder(orderId);
}

The framework provisions:

  • Workflow execution engine for step coordination
  • State persistence for durability
  • Queue infrastructure for scheduling
  • API endpoints to trigger workflows

The withWorkflow adapter in next.config.ts handles provisioning:

next.config.ts
import type { NextConfig } from "next";
import { withWorkflow } from "workflow/next";

const nextConfig: NextConfig = {
  // ... config
};

export default withWorkflow(withMDX(nextConfig)); 

See the Next.js config for the complete setup.

The adapter scans your code for "use workflow" directives and generates infrastructure automatically. No separate config files.
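The scanning step can be pictured as a directory walk that looks for the directive in source files. This is a simplified sketch of the idea — the real adapter operates on the build-time module graph, and the details here are assumptions:

```typescript
import fs from "node:fs";
import path from "node:path";

// Simplified sketch of directive scanning: walk a source tree and record
// which files contain "use workflow". The real adapter works on the
// compiled module graph at build time; this just shows the core idea.
function findWorkflowFiles(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      hits.push(...findWorkflowFiles(full));
    } else if (/\.(ts|tsx)$/.test(entry.name)) {
      const source = fs.readFileSync(full, "utf8");
      if (source.includes(`"use workflow"`)) hits.push(full);
    }
  }
  return hits;
}
```

From that list, the framework knows exactly which functions need queues, state persistence, and trigger endpoints — without you writing a config file.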

Preview Environments and Database Branching

Code-defined infrastructure shines for preview environments. When you push a PR:

  1. Vercel deploys a preview from that commit
  2. Analyzes the code to determine infrastructure needs
  3. Provisions isolated resources for that preview
  4. Tears down resources when the PR closes

Combined with PlanetScale database branching, each PR gets:

  • Isolated compute
  • Isolated database branch
  • Isolated queues/workflows

All from the same code. No manual environment configuration.
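One small piece of glue such a setup needs is a deterministic name for each PR's database branch. A hypothetical naming scheme — the prefix, slug rules, and length limit here are illustrative, not PlanetScale's actual constraints:

```typescript
// Hypothetical naming scheme for per-PR database branches. Branch names
// should be URL-safe, so the git branch is slugified. The "preview-pr-"
// prefix and 63-char cap are illustrative choices, not PlanetScale rules.
function previewBranchName(gitBranch: string, prNumber: number): string {
  const slug = gitBranch
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
  return `preview-pr-${prNumber}-${slug}`.slice(0, 63);
}
```

Deriving the name from the PR means the same code that creates the branch can find and delete it on close.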

Build-Time Codegen in This Repository

This repository uses build-time analysis to generate types and enforce correctness:

Type-Safe Public Images

The public-images plugin scans /public and generates types for Next.js Image components:

plugins/next/public-images.ts
import fs from "node:fs";
import path from "node:path";

// Image extensions the plugin recognizes (illustrative set)
const IMAGE_EXTENSIONS = new Set([".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp", ".avif"]);

function scanPublicImages(dir: string, basePath = ""): string[] {
  const images: string[] = [];
  const entries = fs.readdirSync(dir, { withFileTypes: true });

  for (const entry of entries) {
    const relativePath = basePath ? `${basePath}/${entry.name}` : entry.name;

    if (entry.isDirectory()) {
      images.push(...scanPublicImages(path.join(dir, entry.name), relativePath));
    } else if (IMAGE_EXTENSIONS.has(path.extname(entry.name))) {
      images.push(`/${relativePath}`); 
    }
  }

  return images;
}

This generates a type definition:

.next/types/public-images.d.ts
declare module "next/image" {
  export type PublicImagePath =
    | "/next.svg"
    | "/vercel.svg"
    | "/og-image.png";

  type ImageSrc = PublicImagePath | ExternalUrl | StaticImageData;

  export interface ImageProps extends Omit<OriginalImageProps, "src"> {
    src: ImageSrc; 
  }
}

Now using invalid image paths is a compile-time error, not a runtime error.
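The generation step that turns the scanned paths into that declaration file is straightforward string assembly. A sketch of its shape — the exact text the plugin emits may differ:

```typescript
// Sketch of the codegen step: turn scanned image paths into the union-type
// declaration shown above. The real plugin's output may differ in detail;
// this shows the shape of build-time type generation.
function generateImageTypes(paths: string[]): string {
  const union = paths.map((p) => `    | "${p}"`).join("\n");
  return [
    `declare module "next/image" {`,
    `  export type PublicImagePath =`,
    `${union};`,
    `}`,
  ].join("\n");
}
```

Re-running this on every build keeps the type in lockstep with the contents of `/public`, which is what makes the compile-time guarantee trustworthy.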

The plugin runs automatically because it's imported in next.config.ts:

next.config.ts
// Generate TypeScript types for public images
import "./plugins/next/public-images";

import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // ... config
};

See the complete plugin implementation.

TypeScript Language Service Plugin

The next-safe-page TypeScript plugin validates that page routes match file locations:

export default Page.create({
  path: "/blog/[slug]", // Must match file location: app/blog/[slug]/page.tsx
  name: "blog-post",
})
  .searchParamsSchema({ page: z.coerce.number().default(1) })
  .page(async ({ getPathParams, getSearchParams }) => {
    const { slug } = await getPathParams();
    const { page } = await getSearchParams();
    return <div>Blog Post: {slug}</div>;
  });

If the path doesn't match the file location, you get IDE squigglies immediately. This prevents mismatched routes before runtime.

The Stack: Next.js, PlanetScale, Drizzle, Vercel

Here's the actual stack for 2026, built for velocity and type safety:

Framework: Next.js

Staying with Next.js because:

  1. Mature ecosystem — tooling, plugins, patterns are established
  2. Custom utilities built — type-safe pages, search params, server actions
  3. Framework-defined infrastructure — Vercel understands Next.js natively

Not ruling out TanStack Start, but the migration cost doesn't justify the benefits yet. See Why I'm NOT switching to TanStack Start for the full comparison.

Type-Safe Utilities

Custom wrappers for type safety:

  • next-safe-action — Type-safe server actions with validation
  • Custom Page.create() API — Type-safe search params and path params

See the Page implementation for details.

Example server action with logging and error handling:

src/platform/server/server-action.ts
export const ServerAction = {
  create: <T extends string>(metadata: { name: KebabCase<"name", T> }) =>
    createSafeActionClient({
      defineMetadataSchema: () => metadataSchema,
      handleServerError: (e, utils) => {
        const { clientInput, ctx } = utils;
        const actionLogger = ctx.logger;

        if (e instanceof ServerError) { 
          // Known error - return message to client
          actionLogger.debug(`Known error: ${e.message}`);
          return e.message;
        }

        // Unknown error - log and return generic message
        actionLogger.error("Unknown server error", { error: e });
        return "An unexpected error occurred";
      },
    })
      .metadata(metadata)
      .use(({ next }) => {
        const logger = Logger.child({ scope: "SERVER_ACTION", topic: metadata.name });
        return next({ ctx: { logger } });
      }),
};

See the full server action implementation.

UI: shadcn + Tailwind

This combination provides speed without framework lock-in.

Database: PlanetScale + Drizzle

PlanetScale's database branching enables preview environments with isolated schema changes. Safe migrations allow zero-downtime schema updates.

Infrastructure: Vercel + Workflow Dev Kit

  • Vercel — Deployment platform with framework-defined infrastructure
  • Workflow Dev Kit — Durable execution without managing queues

The Workflow SDK provides background jobs, retries, and scheduled execution without manual queue management.

Type Safety Utilities

Custom utilities enforce correctness at compile time:

src/utils/shared/kebab-case.ts
export type KebabCase<VariableName extends string, S extends string> =
  S extends `${string}${UppercaseLetter}${string}`
    ? Never<`${VariableName} contains uppercase letters - kebab-case requires lowercase only`>
    : S extends ` ${string}` | `${string} ` | `${string} ${string}`
      ? Never<`${VariableName} contains spaces - kebab-case uses hyphens instead`>
      : S;

This provides compile-time errors with descriptive messages when naming conventions are violated.
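The same rule can also be checked at runtime for names that only arrive at runtime, such as values read from config. This runtime counterpart is a sketch, not part of the repo, which enforces the rule purely at the type level:

```typescript
// Runtime counterpart to the KebabCase type above, for names that only
// exist at runtime (e.g. read from config). This is a sketch, not repo
// code: lowercase alphanumeric words separated by single hyphens.
function isKebabCase(s: string): boolean {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(s);
}
```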

Shipping Fast: The System

Velocity isn't about writing code faster. It's about designing systems—technical and human—that don't fight you:

  1. Stacked diffs — Merge small PRs continuously instead of blocking on large reviews
  2. Agentic workflows — Extend AI leverage beyond code to product and project work
  3. Code-defined infrastructure — Let code declare its infrastructure needs automatically

The tech stack matters, but the process matters more. AI will write more code in 2026. Your job is to structure the system so that code ships safely and fast.

Further Exploration

Related implementations in this repository:

Related articles: