This page is for teams running MVMNT webhooks in production. The goal is the same every time: accept the request fast, prove it’s from MVMNT, and process events in a way that won’t double-write your system.
- Always verify the `x-api-key` header
- Respond with `200 OK` within 5 seconds
- Process events asynchronously (queue + worker, or background jobs)
- Treat deliveries as at-least-once (idempotency required)
- Use `data.key` to correlate MVMNT entities to your system
- Check `deletedAt` on entity payloads (soft-deletes)
- Log every delivery with enough fields to replay/debug
- Don’t accept requests without verifying `x-api-key` (your shared secret)
- Don’t do long work before responding
- Don’t return errors for duplicate events (return `200`)
- Don’t rely on event ordering
- Don’t use HTTP endpoints (HTTPS only)
See: Webhooks Overview
MVMNT includes your webhook token in the x-api-key header. Reject anything that doesn’t match.
Implementation notes that hold up in production:
- Read the header exactly as sent: `x-api-key` (case-insensitive per HTTP, but your framework may normalize keys).
- Compare against a secret value stored outside your codebase (env var/secret manager).
- If you rotate tokens, accept both the “current” and “previous” token for a short window so you don’t drop deliveries mid-rotation (see the sketch after the example below).
```js
function verifyWebhookToken(req) {
  const received = req.headers["x-api-key"];
  const expected = process.env.WEBHOOK_TOKEN;
  if (!received || received !== expected) {
    const err = new Error("Invalid token");
    err.statusCode = 401;
    throw err;
  }
}
```
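If you rotate tokens, the same check can accept a short list instead of a single value. A minimal sketch, assuming you stage the old token in a second environment variable (`WEBHOOK_TOKEN_PREVIOUS` is illustrative, not something MVMNT defines):

```js
// Rotation-aware variant: accept the current token and, for a short window,
// the previous one. WEBHOOK_TOKEN_PREVIOUS is an assumed env var for illustration.
function verifyWebhookTokenWithRotation(req) {
  const received = req.headers["x-api-key"];
  const accepted = [process.env.WEBHOOK_TOKEN, process.env.WEBHOOK_TOKEN_PREVIOUS]
    .filter(Boolean); // ignore unset entries

  if (!received || !accepted.includes(received)) {
    const err = new Error("Invalid token");
    err.statusCode = 401;
    throw err;
  }
}
```

Drop the previous token from the environment once the rotation window closes.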
MVMNT will retry if you time out or return a non-2xx. Your handler should do only three things synchronously:

- Verify `x-api-key`
- Validate the payload shape enough to enqueue it
- Return `200`
Pattern: enqueue the full delivery, then let a worker process events.
```js
app.post("/webhooks/mvmnt", express.json(), async (req, res) => {
  try {
    verifyWebhookToken(req);

    // Minimal validation: deliveries contain an events array
    const { sentAt, events } = req.body || {};
    if (!sentAt || !Array.isArray(events) || events.length === 0) {
      return res.status(400).send("Invalid payload");
    }

    // Put the raw delivery on your queue (SQS, RabbitMQ, Kafka, DB table, etc.)
    await enqueueWebhookDelivery(req.body);

    return res.status(200).send("OK");
  } catch (err) {
    const status = err.statusCode || 500;
    return res.status(status).send(status === 401 ? "Unauthorized" : "Error");
  }
});
```

Why this structure works:
- You stay under the 5-second limit even when downstream systems are slow.
- Retries don’t fan out into repeated side effects because processing is idempotent (next section).
- You get a durable audit trail if you store raw deliveries.
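The handler above only accepts and enqueues; the worker drains the queue. A rough sketch of that side, where `dequeueWebhookDelivery`, `processEvent`, and `ackDelivery` stand in for whatever queue client you actually use:

```js
// Illustrative worker loop. dequeueWebhookDelivery, processEvent, and
// ackDelivery are hypothetical helpers for your queue of choice.
async function runWorker() {
  for (;;) {
    const delivery = await dequeueWebhookDelivery(); // e.g. blocking pop or long poll
    if (!delivery) continue;

    // A single delivery can carry multiple events; handle each one independently.
    for (const event of delivery.events) {
      await processEvent(event); // must be idempotent (next section)
    }

    await ackDelivery(delivery); // acknowledge only after all events are handled
  }
}
```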
Webhook delivery is at-least-once. You should expect:
- duplicates (retry after timeout / transient failure)
- replays (you redeploy, restore a DB backup, or reprocess a dead-letter queue)
- batches (`events` can contain multiple events in one delivery)
Rule: a duplicate event should be a no-op and still return 200.
Use the most stable fields you have:
- event type: `event`
- entity id: `data.id`
- event time: `timestamp`
If your payload includes a unique event identifier in your environment, use it. Otherwise:
```js
const dedupeKey = `${event}:${data.id}:${timestamp}`;
```

Use a fast store (Redis is common). Keep the TTL long enough to cover MVMNT retries plus your own reprocessing window (often 24–72 hours).
```js
async function alreadyProcessed(dedupeKey) {
  // SETNX pattern (set if not exists). Return true if seen before.
  const inserted = await redis.set(dedupeKey, "1", { NX: true, EX: 60 * 60 * 48 });
  return inserted !== "OK";
}
```

Dedupe helps, but your downstream writes should also tolerate repeats:
- Upsert by `data.id` (or by your own mapping if you use `data.key`); a sketch follows this list
- For state transitions, prefer “set status to X” over “advance status”
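As an example of the upsert approach, here is a sketch using node-postgres; the `shipments` table, its columns, and the `status` field on the payload are assumptions to adapt to your own schema:

```js
// Illustrative Postgres upsert via node-postgres. Table and column names are
// assumptions; requires a unique constraint on mvmnt_id.
const { Pool } = require("pg");
const pool = new Pool();

async function upsertShipment(data) {
  // Re-running this for a duplicate event writes the same values again: a no-op.
  await pool.query(
    `INSERT INTO shipments (mvmnt_id, external_key, status)
     VALUES ($1, $2, $3)
     ON CONFLICT (mvmnt_id)
     DO UPDATE SET external_key = EXCLUDED.external_key, status = EXCLUDED.status`,
    [data.id, data.key, data.status] // "set status to X", not "advance status"
  );
}
```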
Events can arrive out of order. Handle each event as independent input, and use timestamps to protect your data.
Concrete patterns:
- If you maintain a “current status” locally, apply updates only if `event.timestamp` is newer than the last processed timestamp for that entity (a sketch follows this list).
- If you process `*_UPDATED` diffs, treat them as hints; your system of record should still be able to reconcile from the full entity state you store.
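A sketch of the timestamp guard, assuming `getLastProcessedAt` and `saveEntityState` wrap your own datastore:

```js
// Illustrative last-write-wins guard; getLastProcessedAt and saveEntityState
// are hypothetical helpers backed by your own storage.
async function applyIfNewer(event) {
  const incoming = new Date(event.timestamp);
  const lastProcessed = await getLastProcessedAt(event.data.id);

  // An older (or duplicate) event must not overwrite newer local state.
  if (lastProcessed && incoming <= lastProcessed) return;

  await saveEntityState(event.data.id, event.data, incoming);
}
```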
data.key is for your identifier (ERP/customer code/internal shipment id). Use it to avoid brittle joins on names or friendly IDs.
Recommended pattern:
- When you create/update entities in MVMNT, set `key` to your internal reference.
- In your webhook processor, look up your local record by `key` first; fall back to `data.id` if needed (a sketch follows this list).
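A sketch of that lookup order, with `findByExternalKey` and `findByMvmntId` standing in for your own data access:

```js
// Illustrative lookup: prefer your own key, fall back to the MVMNT id.
// findByExternalKey and findByMvmntId are hypothetical data-access helpers.
async function findLocalRecord(data) {
  if (data.key) {
    const byKey = await findByExternalKey(data.key);
    if (byKey) return byKey;
  }
  return findByMvmntId(data.id);
}
```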
Some entity payloads include deletedAt. Treat it as a tombstone:
- If `deletedAt` is non-null, mark the record deleted in your system (or hard-delete if that’s your policy).
- Don’t recreate an entity just because you receive an older “created/updated” after a delete; use your “last processed timestamp” guard (a sketch follows this list).
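A sketch that applies the tombstone before any other write; `markDeleted` is a placeholder for your own delete path, and `applyIfNewer` is the guard sketched in the ordering section above:

```js
// Illustrative tombstone handling; markDeleted is a hypothetical helper.
async function handleEntity(event) {
  if (event.data.deletedAt) {
    // Soft-delete locally (or hard-delete if that is your policy).
    await markDeleted(event.data.id, event.data.deletedAt);
    return;
  }
  await applyIfNewer(event); // timestamp guard from the ordering section above
}
```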
Log each delivery in a way your on-call can grep and replay:
At minimum, capture:
- `sentAt`
- each `event.event` and `event.timestamp`
- `event.data.id`, `event.data.friendlyId` (when present), `event.data.key` (when present)
- your queue job id / message id
- processing outcome (success/failed + error)
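One way to emit that as a single structured log line per event, assuming a JSON logger such as pino (field names are illustrative, not an MVMNT convention):

```js
// Illustrative structured logging with pino; field names are assumptions.
const pino = require("pino");
const logger = pino();

function logEventOutcome(delivery, event, jobId, outcome, error) {
  logger.info(
    {
      sentAt: delivery.sentAt,
      event: event.event,
      eventTimestamp: event.timestamp,
      entityId: event.data.id,
      friendlyId: event.data.friendlyId, // when present
      key: event.data.key,               // when present
      jobId,
      outcome,                           // "success" or "failed"
      error: error ? error.message : undefined,
    },
    "mvmnt webhook event processed"
  );
}
```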
Alert on:
- sustained non-2xx rates at the webhook endpoint
- queue lag (deliveries waiting too long)
- repeated failures for the same event key (stuck poison messages)
Keep the HTTP handler strict and fast:
- Return `401` for an invalid `x-api-key`.
- Return `400` only for malformed payloads you can’t parse/enqueue.
- Return `200` once the delivery is accepted into your processing pipeline, even if downstream work fails later.
Handle processing failures in your worker (a sketch follows this list):
- Retry with backoff in your queue/worker system.
- Use a dead-letter queue/table for messages that keep failing so they don’t block newer events.
- Don’t “fix” duplicates by returning errors; that increases retries and load.
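If your queue doesn’t give you retries and dead-lettering out of the box, the worker can do it; `requeueWithDelay`, `moveToDeadLetter`, and the attempt limit below are all placeholders to adapt:

```js
// Illustrative failure handling in the worker. requeueWithDelay and
// moveToDeadLetter are hypothetical helpers; MAX_ATTEMPTS is arbitrary.
const MAX_ATTEMPTS = 5;

async function processWithRetry(delivery) {
  try {
    for (const event of delivery.events) {
      await processEvent(event);
    }
  } catch (err) {
    const attempts = (delivery.attempts || 0) + 1;
    if (attempts >= MAX_ATTEMPTS) {
      // Park it so it stops blocking newer deliveries; alert on DLQ depth.
      await moveToDeadLetter({ ...delivery, attempts, lastError: err.message });
      return;
    }
    // Exponential backoff: 1s, 2s, 4s, ...
    await requeueWithDelay({ ...delivery, attempts }, 1000 * 2 ** (attempts - 1));
  }
}
```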
Use HTTPS in production. If you need to test locally, terminate HTTPS in a tunnel (ngrok, Cloudflare Tunnel) and forward to your dev server.