Description
There is a report of a possible DoS attack against certain versions of Jackson, 2.10.x through 2.13.x (it does not affect earlier versions like 2.9, nor later versions 2.14 and 3.0).

CVE: https://nvd.nist.gov/vuln/detail/CVE-2021-46877

The fix has been included in versions:

- 2.12.6
- 2.13.1

There are no current plans to back-port the fix into the 2.10 or 2.11 branches (2.9 and earlier are not affected).
CVE description
Applicability
The vulnerability applies only when using JDK serialization to serialize and deserialize `JsonNode` values: this is not something most users ever do, nor is it recommended for general usage.
Any other use of `JsonNode` is completely unrelated to the reported CVE: this ONLY APPLIES WITH JDK SERIALIZATION.
Example
So how does one use JDK serialization with Jackson's `JsonNode`?
An example of such usage (copied from the test `NodeJDKSerializationTest.java`) is:
```java
ObjectNode root = MAPPER.createObjectNode();
root.put("answer", 42);

// Instead of the usual "write as JSON" (using "node.toString()" or serializing
// with ObjectMapper), something wants to use JDK serialization: some caching
// frameworks do this
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
ObjectOutputStream obOut = new ObjectOutputStream(bytes);
obOut.writeObject(root);
obOut.close();
byte[] jdkSer = bytes.toByteArray();

// and somewhere, something wants to read it back like so:
ObjectInputStream objIn = new ObjectInputStream(new ByteArrayInputStream(jdkSer));
ObjectNode fromSer = (ObjectNode) objIn.readObject();
// ^^^^ to cause DoS, an attacker would need to produce a specifically altered
// version of the payload!
```
The issue with JDK serialization is due to the combination of the format used and the original code (see class `NodeSerialization` for details).

First: a `JsonNode` is serialized as a sequence of bytes where the first 4 bytes indicate the length of the actual content, and the content is the JSON serialization itself. When reading it back (JDK deserialization), the length is read first, the original code allocates a `byte[]` of that size, and then the content is read. This works, functionally speaking.
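The length-prefixed format and the naive read it implies can be sketched as follows. This is a simplified illustration of the pattern described above, not the actual `NodeSerialization` code; the class and method names here are hypothetical:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedSketch {

    // Write side: 4-byte length, followed by the JSON content bytes
    static byte[] write(String json) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        byte[] payload = json.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
        out.close();
        return bytes.toByteArray();
    }

    // Naive read side: trusts the declared length and allocates eagerly.
    // A forged length of Integer.MAX_VALUE makes this try to allocate ~2 GB.
    static String naiveRead(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int len = in.readInt();
        byte[] buf = new byte[len];   // <-- the vulnerable eager allocation
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Round-trips fine for honest payloads:
        System.out.println(naiveRead(write("{\"answer\":42}")));
    }
}
```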
But if an attacker instead provides a payload that contains only the 4-byte length, with a value of `Integer.MAX_VALUE`, then the decoder will:

1. Read the length
2. Allocate a 2 GB `byte[]` array
3. If the allocation succeeds, try to read the content, and fail
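What such a forged payload looks like can be sketched like this. It is purely illustrative: it only builds and inspects the 4-byte length prefix, and never attempts the 2 GB allocation itself:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ForgedPayloadSketch {

    // Build what an attacker would send: only the 4-byte length prefix,
    // declaring Integer.MAX_VALUE content bytes, with no content behind it.
    static byte[] forgedPayload() throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(Integer.MAX_VALUE);
        return bytes.toByteArray();
    }

    // What the naive decoder sees before attempting the allocation
    static int declaredLength(byte[] payload) throws IOException {
        return new DataInputStream(new ByteArrayInputStream(payload)).readInt();
    }

    public static void main(String[] args) throws IOException {
        byte[] forged = forgedPayload();
        System.out.println(forged.length);           // 4: just the length prefix
        System.out.println(declaredLength(forged));  // 2147483647: the size the
                                                     // naive reader would allocate
    }
}
```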
The problem here is that during step (2), the large buffer allocation may well run the process out of (heap) memory -- especially if the attacker manages to inject multiple broken messages.
The fix is to avoid eager allocation of big buffers and to only allocate buffers as needed, while reading the payload.
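The idea behind the fix can be sketched as chunked reading: allocate a bounded buffer and grow the output only as content actually arrives. This is a simplified illustration of the approach, not the actual patched `NodeSerialization` code, and the chunk size is an arbitrary illustrative value:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ChunkedReadSketch {

    // Maximum single allocation: a forged length can no longer force a huge
    // up-front buffer. (100k is an arbitrary value for this sketch.)
    static final int CHUNK = 100_000;

    // Helper: frame content as 4-byte length + bytes (honest writer side)
    static byte[] frame(String json) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        byte[] payload = json.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
        out.close();
        return bytes.toByteArray();
    }

    // Safe read: never allocates more than CHUNK bytes at once; truncated input
    // (like a forged length with no content) fails fast with an IOException
    // instead of exhausting heap memory.
    static String safeRead(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int remaining = in.readInt();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[Math.min(Math.max(remaining, 0), CHUNK)];
        while (remaining > 0) {
            int read = in.read(buf, 0, Math.min(remaining, buf.length));
            if (read < 0) {
                throw new IOException("Unexpected end-of-input; "
                        + remaining + " bytes still expected");
            }
            out.write(buf, 0, read);
            remaining -= read;
        }
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Honest payload round-trips:
        System.out.println(safeRead(frame("{\"answer\":42}")));

        // Forged payload: 4-byte length of Integer.MAX_VALUE, no content.
        // Now rejected cheaply instead of triggering a ~2 GB allocation:
        ByteArrayOutputStream forged = new ByteArrayOutputStream();
        new DataOutputStream(forged).writeInt(Integer.MAX_VALUE);
        try {
            safeRead(forged.toByteArray());
        } catch (IOException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```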