GNOME / GLib · Issues · #2526 · Closed

Issue created Nov 09, 2021 by Sebastian Wilhelmi (@wilhelmi)

fuzz_uri_parse failure

There is an ever-so-slight chance that this issue could be exploited for a remote DoS, so I'm filing this as a confidential issue.

fuzzing/fuzz_uri_parse.c currently fails for certain large (>300 kB) inputs; see e.g. https://oss-fuzz.com/testcase-detail/4691327330156544

Some investigation shows that this is because the remove_dot_segments function has quadratic complexity: after each replacement it restarts its scan at the beginning of the string (see glib/guri.c:1343).
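
Schematically, the pattern looks like the following (a deliberately simplified stand-in that handles only "/./" segments; it is not the actual code in guri.c):

```c
#include <string.h>

/* Simplified illustration of the quadratic pattern: every replacement
 * restarts the search at the start of the string, so an input built
 * from n dot-segments costs n scans over an O(n)-byte string. */
static void
remove_dot_segments_quadratic (char *path)
{
  char *p;

  while ((p = strstr (path, "/./")) != NULL)     /* O(n) scan ...   */
    memmove (p + 1, p + 3, strlen (p + 3) + 1);  /* ... per segment */
}
```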

I see three possibilities for resolving this issue:

  • Ignore large inputs in fuzz_uri_parse.c. The fuzzing algorithm will realize that this part of the fuzzing space is cut off and will not explore it further. This is the simplest way to get rid of the fuzzing failure, but it leaves large URLs free to soak up time on people's machines, which could be considered a remote DoS.
  • Return an error from all relevant functions if the URL is bigger than, say, 100 kB. There is really no good reason for URLs to be this large. Famous last words.
  • Replace remove_dot_segments with a linear implementation. It is actually possible to do this in place: the output and input buffers in the linear pseudo-code of https://datatracker.ietf.org/doc/html/rfc3986#section-5.2.4 can be the same allocation, because the output cursor never overwrites input that has not been read yet. A sketch follows this list.
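
For concreteness, a minimal sketch of that in-place, linear-time rewrite, following the pseudo-code of RFC 3986 §5.2.4 (rules A-E in the comments refer to step 2 of the RFC's algorithm; the body is my illustration, not a proposed patch):

```c
#include <string.h>

/* In-place remove_dot_segments() after RFC 3986 §5.2.4.  The output
 * cursor never overtakes the input cursor, so the RFC's two buffers
 * can share one allocation. */
static void
remove_dot_segments (char *path)
{
  char *in = path;   /* next unread input byte */
  char *out = path;  /* one past the last output byte */

  while (*in != '\0')
    {
      if (strncmp (in, "../", 3) == 0)           /* rule A */
        in += 3;
      else if (strncmp (in, "./", 2) == 0)       /* rule A */
        in += 2;
      else if (strncmp (in, "/./", 3) == 0)      /* rule B: "/./x" -> "/x" */
        in += 2;
      else if (strcmp (in, "/.") == 0)           /* rule B: "/." -> "/" */
        {
          in += 1;
          *in = '/';
        }
      else if (strncmp (in, "/../", 4) == 0 ||   /* rule C */
               strcmp (in, "/..") == 0)
        {
          if (in[3] == '\0')
            {
              in += 2;                           /* "/.." -> "/" */
              *in = '/';
            }
          else
            in += 3;                             /* "/../x" -> "/x" */

          /* Drop the last output segment and its leading '/'. */
          while (out > path && out[-1] != '/')
            out--;
          if (out > path)
            out--;
        }
      else if (strcmp (in, ".") == 0 ||          /* rule D */
               strcmp (in, "..") == 0)
        break;
      else                                       /* rule E: copy one segment */
        {
          if (*in == '/')
            *out++ = *in++;
          while (*in != '\0' && *in != '/')
            *out++ = *in++;
        }
    }

  *out = '\0';
}
```

Applied to the RFC's worked example "/a/b/c/./../../g", this yields "/a/g".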

Let me know what you think. I'm happy to do the legwork.
