Webhook Best Practices

This page is for teams running MVMNT webhooks in production. The goal is the same every time: accept the request fast, prove it’s from MVMNT, and process events in a way that won’t double-write your system.

Quick Reference

Do

  • Always verify the x-api-key header
  • Respond with 200 OK within 5 seconds
  • Process events asynchronously (queue + worker, or background jobs)
  • Treat deliveries as at-least-once (idempotency required)
  • Use data.key to correlate MVMNT entities to your system
  • Check deletedAt on entity payloads (soft-deletes)
  • Log every delivery with enough fields to replay/debug

Don’t

  • Don’t accept requests without verifying x-api-key (your shared secret)
  • Don’t do long work before responding
  • Don’t return errors for duplicate events (return 200)
  • Don’t rely on event ordering
  • Don’t use HTTP endpoints (HTTPS only)

See: Webhooks Overview


1) Verify the request (x-api-key)

MVMNT includes your webhook token in the x-api-key header. Reject anything that doesn’t match.

Implementation notes that hold up in production:

  • Read the header exactly as sent: x-api-key (case-insensitive per HTTP, but your framework may normalize keys).
  • Compare against a secret value stored outside your codebase (env var/secret manager).
  • If you rotate tokens, accept both the “current” and “previous” token for a short window so you don’t drop deliveries mid-rotation.
const crypto = require("crypto");

function verifyWebhookToken(req) {
  // Express lower-cases incoming header names, so this matches any casing.
  const received = req.headers["x-api-key"];
  const expected = process.env.WEBHOOK_TOKEN;

  // Constant-time comparison avoids leaking the token through timing.
  const valid =
    typeof received === "string" &&
    typeof expected === "string" &&
    received.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));

  if (!valid) {
    const err = new Error("Invalid token");
    err.statusCode = 401;
    throw err;
  }
}

2) Acknowledge fast, process later (< 5 seconds)

MVMNT will retry if you time out or return a non-2xx. Your handler should do only three things synchronously:

  1. Verify x-api-key
  2. Validate the payload shape enough to enqueue it
  3. Return 200

Pattern: enqueue the full delivery, then let a worker process events.

const express = require("express");
const app = express();

app.post("/webhooks/mvmnt", express.json(), async (req, res) => {
  try {
    verifyWebhookToken(req);

    // Minimal validation: deliveries contain an events array
    const { sentAt, events } = req.body || {};
    if (!sentAt || !Array.isArray(events) || events.length === 0) {
      return res.status(400).send("Invalid payload");
    }

    // Put the raw delivery on your queue (SQS, RabbitMQ, Kafka, DB table, etc.)
    await enqueueWebhookDelivery(req.body);

    return res.status(200).send("OK");
  } catch (err) {
    const status = err.statusCode || 500;
    return res.status(status).send(status === 401 ? "Unauthorized" : "Error");
  }
});

Why this structure works:

  • You stay under the 5-second limit even when downstream systems are slow.
  • Retries don’t fan out into repeated side effects because processing is idempotent (next section).
  • You get a durable audit trail if you store raw deliveries.

3) Idempotency: assume duplicates and replays

Webhook delivery is at-least-once. You should expect:

  • duplicates (retry after timeout / transient failure)
  • replays (you redeploy, restore a DB backup, or reprocess a dead-letter queue)
  • batches (a single delivery’s events array can contain multiple events)

Rule: a duplicate event should be a no-op and still return 200.

A practical dedupe key

Use the most stable fields you have:

  • event type: event
  • entity id: data.id
  • event time: timestamp

If deliveries in your environment include a unique event identifier, use it as the dedupe key. Otherwise, combine the fields above:

dedupeKey = `${event}:${data.id}:${timestamp}`

Store the key with a TTL

Use a fast store (Redis is common). Keep the TTL long enough to cover MVMNT retries plus your own reprocessing window (often 24–72 hours).

// Assumes a connected node-redis (v4) client named `redis`.
async function alreadyProcessed(dedupeKey) {
  // SET with NX (set if not exists): returns null if the key already existed.
  const inserted = await redis.set(dedupeKey, "1", { NX: true, EX: 60 * 60 * 48 });
  return inserted !== "OK"; // true means we've seen this event before
}

Make your writes idempotent too

Dedupe helps, but your downstream writes should also tolerate repeats:

  • Upsert by data.id (or by your own mapping if you use data.key)
  • For state transitions, prefer “set status to X” over “advance status”
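
Putting dedupe and idempotent writes together, a worker step might look like the sketch below. `alreadyProcessed` and `upsertEntity` are illustrative helpers you would supply (your dedupe store and your idempotent write); they are passed in here only to keep the sketch self-contained.

```javascript
// Sketch of one worker step: skip duplicates, then upsert by data.id.
// `deps.alreadyProcessed` and `deps.upsertEntity` are placeholders for
// your own dedupe store and idempotent write.
async function processEvent(evt, deps) {
  const dedupeKey = `${evt.event}:${evt.data.id}:${evt.timestamp}`;
  if (await deps.alreadyProcessed(dedupeKey)) {
    return { skipped: true, dedupeKey }; // duplicate delivery: no-op
  }
  await deps.upsertEntity(evt.data); // same event, same row: safe to repeat
  return { skipped: false, dedupeKey };
}
```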

4) Don’t rely on ordering

Events can arrive out of order. Handle each event as independent input, and use timestamps to protect your data.

Concrete patterns:

  • If you maintain a “current status” locally, apply updates only if event.timestamp is newer than the last processed timestamp for that entity.
  • If you process *_UPDATED diffs, treat them as hints; your system of record should still be able to reconcile from the full entity state you store.
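
The timestamp guard can be as small as this; `lastEventAt` is whatever "last processed timestamp" field you persist per entity (the name is a suggestion).

```javascript
// Apply an event only if it's newer than the last one processed for
// this entity. `localLastEventAt` is null for entities you haven't seen.
function shouldApply(localLastEventAt, eventTimestamp) {
  if (!localLastEventAt) return true;
  return new Date(eventTimestamp) > new Date(localLastEventAt);
}
```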

5) Use data.key as your correlation handle

data.key holds your own identifier (an ERP reference, customer code, or internal shipment id). Use it to avoid brittle joins on names or friendly IDs.

Recommended pattern:

  • When you create/update entities in MVMNT, set key to your internal reference.
  • In your webhook processor, look up your local record by key first; fall back to data.id if needed.
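
The key-first lookup might be sketched as follows; `store.byKey` and `store.byId` are placeholders for your own queries.

```javascript
// Prefer your own identifier (data.key), fall back to the MVMNT id.
async function findLocalRecord(data, store) {
  if (data.key) {
    const byKey = await store.byKey(data.key);
    if (byKey) return byKey;
  }
  return store.byId(data.id);
}
```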

6) Deletions: check deletedAt (soft-delete behavior)

Some entity payloads include deletedAt. Treat it as a tombstone:

  • If deletedAt is non-null, mark the record deleted in your system (or hard-delete if that’s your policy).
  • Don’t recreate an entity just because you receive an older “created/updated” after a delete; use your “last processed timestamp” guard.
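
Combining the tombstone check with the last-processed-timestamp guard, a processor might decide like this. `resolveAction` and the `local.lastEventAt` field are illustrative names, not an MVMNT API.

```javascript
// Decide what to do with an entity event. `local` is your stored
// record (or null for unknown entities) with a lastEventAt field.
function resolveAction(local, evt) {
  // Stale event: older than (or equal to) what we already processed.
  if (local && new Date(evt.timestamp) <= new Date(local.lastEventAt)) {
    return "ignore";
  }
  if (evt.data.deletedAt) return "mark-deleted"; // tombstone
  return "upsert";
}
```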

7) Logging and monitoring that actually helps

Log each delivery in a way your on-call can grep and replay. At minimum, capture:

  • sentAt
  • each event.event, event.timestamp
  • event.data.id, event.data.friendlyId (when present), event.data.key (when present)
  • your queue job id / message id
  • processing outcome (success/failed + error)
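
One way to emit those fields as a single greppable line; the field names are suggestions, not an MVMNT schema.

```javascript
// Build one structured log line per processed event.
function formatEventLog(delivery, evt, jobId, outcome) {
  return JSON.stringify({
    sentAt: delivery.sentAt,
    event: evt.event,
    timestamp: evt.timestamp,
    entityId: evt.data && evt.data.id,
    friendlyId: evt.data && evt.data.friendlyId,
    key: evt.data && evt.data.key,
    jobId,
    outcome, // e.g. "success" or "failed: <error message>"
  });
}
```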

Alert on:

  • sustained non-2xx rates at the webhook endpoint
  • queue lag (deliveries waiting too long)
  • repeated failures for the same event key (stuck poison messages)

8) Error handling and retries

Keep the HTTP handler strict and fast:

  • Return 401 for invalid x-api-key.
  • Return 400 only for malformed payloads you can’t parse/enqueue.
  • Return 200 once the delivery is accepted into your processing pipeline, even if downstream work fails later.

Handle processing failures in your worker:

  • Retry with backoff in your queue/worker system.
  • Use a dead-letter queue/table for messages that keep failing so they don’t block newer events.
  • Don’t “fix” duplicates by returning errors; that increases retries and load.
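
Worker-side retries with exponential backoff can be sketched as below; `withRetries` is an illustrative helper, and the caller is responsible for routing exhausted messages to your dead-letter queue.

```javascript
// Retry an async operation with exponential backoff. If all attempts
// fail, the last error is rethrown so the caller can dead-letter it.
async function withRetries(fn, { attempts = 5, baseMs = 500 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
      }
    }
  }
  throw lastErr;
}
```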

9) Transport: HTTPS only

Use HTTPS in production. If you need to test locally, terminate HTTPS in a tunnel (ngrok, Cloudflare Tunnel) and forward to your dev server.