There is an ever so slight chance that this issue could be exploited for a remote DoS, so I file this as a confidential issue.
fuzzing/fuzz_uri_parse.c currently fails for certain large (>300 kB) inputs; see e.g. https://oss-fuzz.com/testcase-detail/4691327330156544
Some investigation shows that this is because the
remove_dot_segments function has quadratic complexity: after each replacement it starts scanning at the beginning of the string again.
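To illustrate the pattern (this is a hypothetical reduction, not the actual code): each replacement shifts the whole remaining tail with an O(n) memmove, and the next search restarts at the beginning of the string, so n replacements cost O(n^2) overall.

```c
#include <string.h>

/* Hypothetical reduction of the quadratic pattern (not the actual
 * implementation): every replacement shifts the whole tail with an
 * O(n) memmove, and the next strstr() call rescans from the start of
 * the string, so n replacements cost O(n^2) in total. */
static void collapse_single_dots(char *path)
{
    char *p;
    while ((p = strstr(path, "/./")) != NULL)
        memmove(p, p + 2, strlen(p + 2) + 1);  /* +1 keeps the '\0' */
}
```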
I see 3 possibilities to resolve this issue:
- Ignore large inputs in fuzz_uri_parse.c. The fuzzing algorithm will realize that this part of the input space is cut off and will not explore it further. This is the simplest way to silence the fuzzing failure, but it leaves large URLs able to soak up CPU time on people's machines, which could be considered a remote DoS.
- Return an error from all relevant functions if the URL is bigger than, say, 100kB. There is really no good reason for URLs to be this large. Famous last words.
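A guard like this could be shared by all entry points; the cap, the macro name and the function name below are illustrative, not an existing API:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical size guard: reject URLs above 100 kB before parsing.
 * URL_MAX_INPUT and url_len_ok are made-up names for illustration. */
#define URL_MAX_INPUT (100 * 1024)

static bool url_len_ok(const char *url)
{
    /* strnlen() stops early, so even a gigantic input costs at most
     * URL_MAX_INPUT + 1 byte reads. */
    return strnlen(url, URL_MAX_INPUT + 1) <= URL_MAX_INPUT;
}
```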
- Replace remove_dot_segments with a linear implementation. This can even be done in place: the output and input buffers in the linear pseudo code of https://datatracker.ietf.org/doc/html/rfc3986#section-5.2.4 can be one and the same, because the output buffer never overwrites input that has not been read yet.
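A minimal sketch of the in-place linear variant, following the rules A-E of RFC 3986 section 5.2.4 (an illustration of the idea, not a drop-in replacement for the existing function):

```c
#include <string.h>

/* In-place, linear-time remove_dot_segments following the pseudo code
 * in RFC 3986 section 5.2.4.  The write position w never passes the
 * read position r, so input and output can share the same buffer. */
static void remove_dot_segments(char *path)
{
    char *r = path;  /* read position (RFC "input buffer")   */
    char *w = path;  /* write position (RFC "output buffer") */

    while (*r) {
        if (strncmp(r, "../", 3) == 0)            /* rule A */
            r += 3;
        else if (strncmp(r, "./", 2) == 0)        /* rule A */
            r += 2;
        else if (strncmp(r, "/./", 3) == 0)       /* rule B */
            r += 2;                               /* keep the trailing "/" */
        else if (strcmp(r, "/.") == 0) {          /* rule B */
            *w++ = '/';
            r += 2;
        }
        else if (strncmp(r, "/../", 4) == 0) {    /* rule C */
            r += 3;                               /* keep the trailing "/" */
            while (w > path && w[-1] != '/')      /* pop the last output */
                w--;                              /* segment ...          */
            if (w > path)                         /* ... and its "/"      */
                w--;
        }
        else if (strcmp(r, "/..") == 0) {         /* rule C */
            r += 3;
            while (w > path && w[-1] != '/')
                w--;
            if (w > path)
                w--;
            *w++ = '/';
        }
        else if (strcmp(r, ".") == 0 || strcmp(r, "..") == 0)
            r += strlen(r);                       /* rule D */
        else {                                    /* rule E: move one    */
            do {                                  /* segment, including  */
                *w++ = *r++;                      /* its leading "/"     */
            } while (*r && *r != '/');
        }
    }
    *w = '\0';
}
```

Every iteration either advances r or pops from w, and w only advances in lockstep with (or behind) r, so the whole pass is O(n) and never reads a byte it has already overwritten.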
Let me know what you think. I'm happy to do the legwork.