Log Entry

Lossless JSON With JSON.parse Source Text Access

Mar 16, 2026 · 15 min read

JSON has one of those annoying failure modes that looks fine until it really does not.

If another system sends a large integer as a JSON number, JavaScript will happily parse it as a number, and the precision loss happens before your reviver gets a chance to help.

That is the part that used to make this problem irritating: the extension point existed, but by the time your code ran, the exact digits were already gone.

JSON.parse source text access fixes that side of the problem by giving the reviver the original source text for primitive values.

JSON.rawJSON fixes the other side by letting JSON.stringify emit an exact primitive JSON literal instead of forcing everything through JavaScript's number representation first.

Together, those two features finally make a sane lossless path possible for cases like:

  • 64-bit IDs sent as JSON numbers
  • database values that exceed Number.MAX_SAFE_INTEGER
  • wire formats where the producer insists on numeric JSON, not quoted strings

The old problem

Take a payload like this:

{"invoiceId":12345678901234567890}

If you parse that normally, the digits do not survive intact:

const input = '{"invoiceId":12345678901234567890}';
const parsed = JSON.parse(input);

parsed.invoiceId;
// 12345678901234567000

That is not a JSON.parse bug. That is JavaScript doing what JavaScript numbers do.
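The root cause is that JSON numbers land in IEEE 754 doubles, which can represent every integer exactly only up to 2^53 - 1. A quick sketch of the boundary:

```javascript
// Doubles have 53 bits of integer precision, so every integer up to
// Number.MAX_SAFE_INTEGER (2^53 - 1) is exact.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Beyond that, distinct integers can collapse into the same double.
console.log(Number.isSafeInteger(12345678901234567890)); // false
console.log(12345678901234567890 === 12345678901234567891); // true
```

Once two different integers compare equal like that, no amount of post-processing can tell them apart again.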

The annoying part was always this: even if you used a reviver, the value argument had already been converted to a number.

So this did not actually recover the original value:

const input = '{"invoiceId":12345678901234567890}';

const broken = JSON.parse(input, (key, value) => {
  if (key === "invoiceId") {
    return BigInt(value);
  }
  return value;
});

broken.invoiceId;
// 12345678901234567168n (wrong digits, because the rounding already happened)

Once the parse step collapsed the literal into a floating-point number, the exact source digits were gone.

What changed

For primitive values, JSON.parse revivers can now receive a third argument: context.

The useful bit is context.source, which is the original JSON source text for the current primitive value.

That means you can ignore the already-lossy value and rebuild from the original literal instead:

const input = '{"invoiceId":12345678901234567890}';

const parsed = JSON.parse(input, (key, value, context) => {
  if (key === "invoiceId") {
    return BigInt(context.source);
  }
  return value;
});

parsed.invoiceId;
// 12345678901234567890n

That is the first half of the fix.

The second half is serialization.

If you try to send that BigInt back through plain JSON.stringify, you hit the familiar wall:

JSON.stringify({ invoiceId: 12345678901234567890n });
// TypeError

JSON.rawJSON gives you an escape hatch for exact primitive JSON text:

const json = JSON.stringify(
  { invoiceId: 12345678901234567890n },
  (key, value) => {
    if (typeof value === "bigint") {
      return JSON.rawJSON(value.toString());
    }
    return value;
  }
);

json;
// {"invoiceId":12345678901234567890}

That gives you a real round-trip path:

const bigintToRawJSON = (key, value) => {
  if (typeof value === "bigint") {
    return JSON.rawJSON(value.toString());
  }
  return value;
};

const reviveLargeInts = (key, value, context) => {
  // Only upgrade integers that cannot round-trip through a double;
  // small values like 3 stay plain numbers.
  if (
    typeof value === "number" &&
    !Number.isSafeInteger(value) &&
    typeof context?.source === "string" &&
    /^-?\d+$/.test(context.source)
  ) {
    return BigInt(context.source);
  }
  return value;
};

const original = {
  invoiceId: 12345678901234567890n,
  retryCount: 3,
};

const wire = JSON.stringify(original, bigintToRawJSON);
const roundTripped = JSON.parse(wire, reviveLargeInts);

roundTripped.invoiceId === original.invoiceId;
// true

Why I like this approach

The main reason I like this is that it keeps the wire format honest.

If the payload really contains a JSON number, I can keep it a JSON number end to end. I do not have to invent a separate convention like:

  • "all big integers are strings now"
  • "this field is numeric-looking but must never be parsed as a number"
  • "this API returns numbers except when it does not"

That matters when you do not control both ends of the system, or when the contract already exists and changing it would be more painful than adapting to it.

It also keeps the conversion logic close to the parse boundary instead of scattering string-to-BigInt cleanup code around the rest of the app.

The tradeoff

I would not use this as a blanket "every integer becomes BigInt" rule without thinking.

There are two important tradeoffs.

First, semantics still matter.

Some large numeric-looking values are not really numbers in the business sense. They are identifiers. If a field is an ID that should never be added, subtracted, sorted numerically, or compared arithmetically, then keeping it as a string may still be the cleaner model.

Second, support is not universal yet.

JSON.rawJSON is still marked as limited availability on MDN, so this is a feature-detect-and-adopt path, not a blind assumption path.

That means I would treat this as:

  • a good fit for controlled runtimes,
  • a progressive enhancement for newer environments,
  • and a reason to keep a string-based fallback when broad compatibility matters.

Failure modes and gotchas

1) context is only there for primitive values

You do not get context.source for objects or arrays.

So this:

JSON.parse('{"outer":{"inner":123}}', (key, value, context) => {
  console.log(key, context?.source);
  return value;
});

is useful for "inner" and other primitive leaves, but not for the object containers themselves.

That is fine for the precision problem, because the precision problem is about primitive literals anyway. It just matters if you expected "full AST with source spans" style behavior. This is not that.

2) Reviver order is still bottom-up

The reviver still walks leaf-first, then parents.

That means this is a recovery hook, not a parser rewrite API. You can rebuild leaf values from exact source text, but you are still operating inside the normal reviver model.
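A tiny way to see that order (nothing here depends on the new features):

```javascript
// Record the order the reviver visits keys: leaves first, then their
// containers, and finally the root, whose key is the empty string.
const order = [];
JSON.parse('{"outer":{"inner":1},"tail":2}', (key, value) => {
  order.push(key);
  return value;
});
console.log(order); // ["inner", "outer", "tail", ""]
```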

3) JSON.rawJSON is intentionally narrow

JSON.rawJSON only accepts valid JSON text for a primitive value.

That means these are fine:

JSON.rawJSON("123");
JSON.rawJSON('"hello"');
JSON.rawJSON("true");
JSON.rawJSON("null");

These are not, and each throws a SyntaxError:

JSON.rawJSON("[1,2,3]");
JSON.rawJSON('{"a":1}');

I actually like that restriction. If it accepted arbitrary object or array fragments, it would become much easier to smuggle in weird serialization behavior that nobody wants to debug later.

4) Do not trust value for lossy fields

This is the easiest mistake to make.

If you care about exact digits, the parsed value is not the source of truth. The source of truth is context.source.

That means the safe mental model is:

  • use value for ordinary cases,
  • use context.source when exact lexeme matters.
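One more illustration of that split, assuming a runtime that supports the context argument: value normalizes the literal, while context.source preserves the exact lexeme, including the quotes around string values.

```javascript
// Collect both sides for each primitive: the normalized value and the
// exact source lexeme (only present on runtimes with source text access).
const lexemes = {};
JSON.parse('{"amount":1.50,"count":1e2,"note":"hi"}', (key, value, context) => {
  if (context?.source !== undefined) {
    lexemes[key] = { value, source: context.source };
  }
  return value;
});

lexemes.amount; // { value: 1.5, source: "1.50" }
lexemes.count;  // { value: 100, source: "1e2" }
lexemes.note;   // { value: "hi", source: '"hi"' } (quotes included)
```

The trailing zero in "1.50" and the exponent form "1e2" are exactly the kind of detail that value can never give back.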

5) You still need an explicit contract

This feature does not magically tell your app which fields should become BigInt, which ones should stay number, and which ones should stay string.

You still need a rule.

That rule might be:

  • specific field names
  • a schema-driven decoder
  • a regex for integer literals in a constrained payload

The point is that source text access gives you the raw material. It does not remove the need to choose a domain model.
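As a sketch of the field-name variant, assuming a runtime where context.source is available (parseWithBigIntFields and the field list are hypothetical names, not part of any API):

```javascript
// Hypothetical helper: upgrade only an explicit allowlist of fields,
// and fall back to the plain parsed value everywhere else.
const BIGINT_FIELDS = new Set(["invoiceId", "accountId"]);

function parseWithBigIntFields(text, fields = BIGINT_FIELDS) {
  return JSON.parse(text, (key, value, context) => {
    if (fields.has(key) && typeof context?.source === "string") {
      return BigInt(context.source);
    }
    return value;
  });
}

const result = parseWithBigIntFields(
  '{"invoiceId":12345678901234567890,"retryCount":3}'
);
// On supporting runtimes: result.invoiceId is 12345678901234567890n,
// while result.retryCount stays the plain number 3.
```

A schema-driven decoder is the same idea with the allowlist derived from the schema instead of hardcoded.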

What I would actually ship

I would put this behind small helpers and feature detection.

Something like:

export function supportsLosslessJSON() {
  return (
    typeof JSON.rawJSON === "function" &&
    (() => {
      let seen;
      JSON.parse("1", (key, value, context) => {
        seen = context?.source;
        return value;
      });
      return seen === "1";
    })()
  );
}

export function stringifyWithBigInt(value) {
  return JSON.stringify(value, (key, current) => {
    if (typeof current === "bigint") {
      // Prefer an exact numeric literal; fall back to a string
      // when JSON.rawJSON is missing.
      return typeof JSON.rawJSON === "function"
        ? JSON.rawJSON(current.toString())
        : current.toString();
    }

    return current;
  });
}

Then I would pair that with a parse helper that only upgrades the fields I explicitly care about.

That keeps the behavior boring, which is what you want around serialization code.

Current result

Before this change, the exact-digits path for numeric JSON in JavaScript was awkward:

  • accept precision loss,
  • quote the values as strings,
  • or bring in a custom parser/library.

Now there is finally a standard path for the cases where the JSON literal itself matters.

That does not mean every API should start sending giant integers as numeric JSON. It does mean JavaScript is less boxed in when another system already does.

Next steps

The next thing I want to test with this is schema-aware revival instead of key-name checks.

That is the point where this stops being a neat feature demo and starts feeling like infrastructure:

  • decode selected fields as BigInt
  • leave ordinary numbers alone
  • keep one serializer path that can emit exact JSON primitives when the runtime supports it

That feels much closer to something I would trust in a real service boundary.

If you want the proposal details, the two useful references are the TC39 proposal for JSON.parse source text access and the MDN docs for JSON.parse and JSON.rawJSON.