
arg-cache: Always use an unsigned int mask for flags

When retrieving flags from the GIArgument in set_return_ffi_arg_from_giargument(), we use the v_int member of the union (arguably we ought to use the v_uint member for flags, but we don't), so to avoid undefined behaviour from type-punning we need to store flags via that same member.
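
As a minimal sketch of the rule being applied (GIArgument and its members are girepository API, but the two helpers are hypothetical illustrations, not actual gjs code):

```c
#include <girepository.h>

/* Hypothetical helpers: flags must be written through the same
 * GIArgument member that set_return_ffi_arg_from_giargument()
 * later reads, i.e. v_int. */
static void
store_flags (GIArgument *arg, unsigned int flags)
{
    arg->v_int = (int) flags;
}

static unsigned int
load_flags (const GIArgument *arg)
{
    return (unsigned int) arg->v_int;
}
```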

In particular, even if we somehow convinced ourselves that a flags type could have values outside the 32-bit range, it would be incorrect to store them in the v_int64 or v_uint64 member, because we read them back out via the v_int member. That's formally undefined behaviour, but in practice it accesses the first 32 bits of the 64-bit integer, which is the same as (real_value & 0xffff'ffff) on little-endian CPUs but gives a meaningless result on big-endian ones. Sure enough, after 1f3984c9 the unit tests started failing on s390x, which is 64-bit big-endian.
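
To illustrate the failure mode, here is a self-contained sketch (the two-member union stands in for GIArgument; this is not gjs code):

```c
#include <inttypes.h>
#include <stdio.h>

/* Stand-in for the GIArgument union, reduced to the two members
 * involved in the mismatch. */
typedef union {
    int32_t v_int;    /* int is 32 bits on every platform GLib supports */
    int64_t v_int64;
} Argument;

int
main (void)
{
    Argument arg;

    /* Store via the 64-bit member... */
    arg.v_int64 = 0x5;

    /* ...read via the 32-bit member. The first four bytes of v_int64
     * hold the low half (5) on little-endian, but the high half (0)
     * on a big-endian CPU such as s390x. */
    printf ("v_int = %" PRId32 "\n", arg.v_int);
    return 0;
}
```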

GLib's GFlagsClass.mask and GFlagsValue.value are of type guint (equivalent to unsigned int), so a GType can't possibly represent flags outside the range of guint anyway. To be consistent with that (and to get reasonable unsigned arithmetic), go via an unsigned integer.
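
For reference, the relevant declarations in GLib's gobject/genums.h are:

```c
typedef struct _GFlagsValue GFlagsValue;
typedef struct _GFlagsClass GFlagsClass;

struct _GFlagsValue
{
  guint        value;
  const gchar *value_name;
  const gchar *value_nick;
};

struct _GFlagsClass
{
  GTypeClass   g_type_class;
  guint        mask;       /* OR of all defined flag values */
  guint        n_values;
  GFlagsValue *values;
};
```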

The corresponding fix for enum types was done as a side-effect of 8b2927b0 "arg-cache: Save space in enum bounds".

Fixes: 1f3984c9 "arg-cache: extend to handle interface types too"
Resolves: #341


This fixes all the tests on s390x, except #319, which wasn't a recent regression.
