1. 20 Jan, 2020 3 commits
  2. 19 Jan, 2020 1 commit
  3. 15 Jan, 2020 1 commit
  4. 14 Jan, 2020 1 commit
  5. 12 Jan, 2020 2 commits
  6. 11 Jan, 2020 2 commits
  7. 10 Jan, 2020 1 commit
  8. 04 Dec, 2019 7 commits
  9. 03 Dec, 2019 1 commit
  10. 02 Dec, 2019 13 commits
    • Stop requesting partition paths of free space and metadata · 047a2481
      Mike Fleetwood authored and Curtis Gedak committed
      In GParted_Core::set_device_partitions() the partition path is
      queried from libparted.  However this is done before the switch
      statement on the type of the partition, so it is called for all
      libparted partition objects, including PED_PARTITION_FREESPACE and
      PED_PARTITION_METADATA ones.  As libparted numbers these partition
      objects -1, it returns paths like "/dev/sda-1".
      
      Additionally, when using GParted with its default DMRaid handling on
      a dmraid started array, this results in paths like
      "/dev/mapper/isw_ecccdhhiga_MyArray-1" being passed to
      is_dmraid_device() and make_path_dmraid_compatible().  Fortunately
      make_path_dmraid_compatible() does nothing and returns the same name.
      The call chain looks like:
      
          GParted_Core::set_device_partitions()
            get_partition_path(lp_partition)
              // where:
              // lp_partition->disk->dev->path = "/dev/mapper/isw_ecccdhhiga_MyArray"
              // lp_partition->type == PED_PARTITION_FREESPACE |
              //                       PED_PARTITION_METADATA
              //              ->num == -1
              ped_partition_get_path(lp_partition)
                return "/dev/mapper/isw_ecccdhhiga_MyArray-1"
              dmraid.is_dmraid_supported()
              dmraid.is_dmraid_device("/dev/mapper/isw_ecccdhhiga_MyArray-1")
                return true
              dmraid.make_path_dmraid_compatible("/dev/mapper/isw_ecccdhhiga_MyArray-1")
                return "/dev/mapper/isw_ecccdhhiga_MyArray-1"
      
      Fix by moving the get_partition_path() call inside the switch statement
      so that it is only called for PED_PARTITION_NORMAL,
      PED_PARTITION_LOGICAL and PED_PARTITION_EXTENDED partition types.
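
      A minimal sketch of the reshaped loop (illustrative only; the real
      change is inside GParted_Core::set_device_partitions() and
      scan_partitions() here is a made-up name):

          #include <parted/parted.h>
          #include <stdlib.h>
          #include <string>

          // Sketch: only query the partition path for real partitions.
          static void scan_partitions( PedDisk * lp_disk )
          {
              for ( PedPartition * lp_partition = ped_disk_next_partition( lp_disk, NULL ) ;
                    lp_partition != NULL ;
                    lp_partition = ped_disk_next_partition( lp_disk, lp_partition ) )
              {
                  std::string partition_path;
                  switch ( lp_partition->type )
                  {
                      case PED_PARTITION_NORMAL:
                      case PED_PARTITION_LOGICAL:
                      case PED_PARTITION_EXTENDED:
                      {
                          // Real partitions have valid numbers, hence valid paths.
                          char * lp_path = ped_partition_get_path( lp_partition );
                          partition_path = lp_path;
                          free( lp_path );
                          break;
                      }
                      default:
                          // Free space and metadata objects are numbered -1 and
                          // produce bogus paths like "/dev/sda-1", so their
                          // paths are never queried.
                          break;
                  }
              }
          }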
      
      Relevant commits:
      *   53c49349
          Simplify logic in set_device_partitions method
      
      *   81986c09
          Ensure partition path name is compatible with dmraid (#622217)
    • Make 4 internally used only DMRaid methods private · fa682d37
      Mike Fleetwood authored and Curtis Gedak committed
    • Recognise ATARAID members started by dmraid (#75) · 21cad97d
      Mike Fleetwood authored and Curtis Gedak committed
      This is not strictly necessary as members are already recognised
      using blkid since the commit earlier in this sequence, "Recognise
      ATARAID members (#75)".  However it makes sure active members are
      recognised even if blkid is not available, and matches how file
      system detection queries the SWRaid_Info module.
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Display array device as mount point of dmraid started ATARAID members (#75) · bb865aaa
      Mike Fleetwood authored and Curtis Gedak committed
      This matches how the array device is displayed as the mount point for
      mdadm started ATARAID members by "Display array device as mount point of
      mdadm started ATARAID members (#75)" earlier in this patchset.
      
      Extend the DMRaid module member cache to save the array device name
      and use it as needed to display as the mount point.
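
      A rough sketch of the cache idea (hypothetical shape and names; the
      real cache is private to GParted's DMRaid module):

          #include <map>
          #include <string>

          // Map each ATARAID member device to the array device built from
          // it, so the array device can be shown in the mount point column.
          static std::map<std::string, std::string> member_to_array_cache;

          static void cache_member( const std::string & member_path,
                                    const std::string & array_device )
          {
              member_to_array_cache[member_path] = array_device;
          }

          static std::string get_array_device( const std::string & member_path )
          {
              std::map<std::string, std::string>::const_iterator it =
                      member_to_array_cache.find( member_path );
              return ( it != member_to_array_cache.end() ) ? it->second : "";
          }

      For example cache_member("/dev/sdc", "/dev/mapper/isw_ecccdhhiga_MyArray")
      at scan time, then get_array_device("/dev/sdc") when populating the
      mount point column.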
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Detect busy status of dmraid started ATARAID members (#75) · caec2287
      Mike Fleetwood authored and Curtis Gedak committed
      Again this is to stop GParted from allowing overwrite operations to
      be performed on an ATARAID member while the array is actively using
      the member.  This time for dmraid started arrays using the kernel DM
      (Device Mapper) driver.
      
      The DMRaid module already uses dmraid to report active array names:
      
          # dmraid -sa -c
          isw_ecccdhhiga_MyArray
      
      To find active members in this array, (1) use udev to look up the
      kernel device name:
      
          # udevadm info --query=name /dev/mapper/isw_ecccdhhiga_MyArray
          dm-0
      
      (2) list the member names exposed by the kernel DM driver through the
      /sys file system:
      
          # ls /sys/block/dm-0/slaves
          sdc  sdd
          # ls -l /sys/block/dm-0/slaves
          lrwxrwxrwx 1 root root 0 Nov 24 09:52 sdc -> ../../../../pci0000:00/0000:00:0d.0/ata3/host2/target2:0:0/2:0:0:0/block/sdc
          lrwxrwxrwx 1 root root 0 Nov 24 09:52 sdd -> ../../../../pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdd
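
      A sketch of step (2) in code (illustrative only; assumes the kernel
      name, e.g. "dm-0", was already obtained via udev as in step (1)):

          #include <dirent.h>
          #include <string>
          #include <vector>

          // List member kernel names under /sys/block/<dm_name>/slaves,
          // e.g. dm_name = "dm-0" yields {"sdc", "sdd"}.
          static std::vector<std::string> get_dm_slaves( const std::string & dm_name )
          {
              std::vector<std::string> members;
              std::string dir_name = "/sys/block/" + dm_name + "/slaves";
              DIR * dir = opendir( dir_name.c_str() );
              if ( dir == NULL )
                  return members;
              struct dirent * entry;
              while ( ( entry = readdir( dir ) ) != NULL )
              {
                  std::string name = entry->d_name;
                  if ( name != "." && name != ".." )
                      members.push_back( name );
              }
              closedir( dir );
              return members;
          }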
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Enable basic supported actions for ATARAID members (#75) · 425dfa37
      Mike Fleetwood authored and Curtis Gedak committed
      When an ATARAID member is inactive, allow the basic supported actions
      of copy and move to be performed, as with other recognised but only
      basic supported types.
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Prevent unmount of busy ATARAID members (#75) · 1f1f44ff
      Mike Fleetwood authored and Curtis Gedak committed
      Since earlier commit "Display array device as mount point of mdadm
      started ATARAID members (#75)" GParted allows attempting to unmount a
      busy ATARAID member as if it were a file system.  This is not a valid
      thing to do, so disallow it.
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Display array uuid of mdadm recognised ATARAID members (#75) · f6c86835
      Mike Fleetwood authored and Curtis Gedak committed
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Display array device as mount point of mdadm started ATARAID members (#75) · 538c866d
      Mike Fleetwood authored and Curtis Gedak committed
      This matches how other non-file systems are handled, by displaying the
      access reference in the mount point column.  For LVM Physical Volumes
      the Volume Group name is displayed [1] and for an active Linux Software
      RAID array the array device is displayed [2].
      
      [1] 8083f11d
          Display LVM2 VGNAME as the PV's mount point (#160787)
      
      [2] f6c2f00d
          Populate member mount point with SWRaid array device (#756829)
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Detect busy status of mdadm started ATARAID members (#75) · 6e990ea4
      Mike Fleetwood authored and Curtis Gedak committed
      This stops GParted from allowing overwrite operations (such as
      creating a partition table or formatting with a whole device file
      system) to be performed on an ATARAID member while the array is
      actively using the member.
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Display correct type of mdadm recognised ATARAID members (#75) · ef6794b7
      Mike Fleetwood authored and Curtis Gedak committed
      The previous commit made mdadm recognised IMSM and DDF type ATARAID
      members get displayed as "linux-raid" (Linux Software RAID array
      member).  This was because of query method 1 in detect_filesystems().
      
      Fix this now by exposing and using the fstype of the member from the
      SWRaid_Info cache.
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Parse ATARAID members from mdadm output and /proc/mdstat (#75) · 73bf8bef
      Mike Fleetwood authored and Curtis Gedak committed
      Since mdadm release 3.0 (2009-06-02) [1] it has also supported external
      metadata formats IMSM (Intel Matrix Storage Manager) and DDF, previously
      only managed by dmraid.
      
      A number of distributions have switched to use mdadm and kernel MD
      (Multiple Devices) driver for managing these Firmware / BIOS / ATARAID
      arrays.  These include: Fedora >= 14 [2], RHEL / CentOS >= 6 [3],
      SLES >= 12 [4], Ubuntu >= 16.04 LTS.
      
      Therefore additionally parse members of these ATARAID arrays listed
      in mdadm output and, when activated using the kernel MD driver, in
      the file /proc/mdstat.  Add fstype to the SWRaid_Info cache records
      to distinguish the members.  So far the rest of the GParted code
      continues to treat all members as FS_LINUX_SWRAID.  This will be
      resolved in the following commits.
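
      A sketch of the /proc/mdstat side of that parsing (illustrative
      only; the real code is in GParted's SWRaid_Info module and also
      parses mdadm output):

          #include <fstream>
          #include <iostream>
          #include <sstream>
          #include <string>

          // Pull member device names such as "sdc" out of /proc/mdstat
          // lines like "md126 : active raid0 sdd[1] sdc[0]".
          static void parse_mdstat()
          {
              std::ifstream mdstat( "/proc/mdstat" );
              std::string line;
              while ( std::getline( mdstat, line ) )
              {
                  if ( line.compare( 0, 2, "md" ) != 0 )
                      continue;  // Members are only listed on "mdNNN : ..." lines.
                  std::istringstream ss( line );
                  std::string word;
                  while ( ss >> word )
                  {
                      std::string::size_type open = word.find( '[' );
                      if ( open != std::string::npos )
                          std::cout << "member: /dev/" << word.substr( 0, open ) << "\n";
                  }
              }
          }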
      
      Note that this in no way affects how GParted shows and partitions the
      array device itself, even for arrays managed by dmraid which use the
      GParted DMRaid module.  It only affects how GParted shows the member
      drives themselves.
      
      [1] mdadm ANNOUNCE-3.0 file
          https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/ANNOUNCE-3.0?h=mdadm-3.0
      
      [2] Fedora 14, Storage Administration Guide, 12.5. Linux RAID Subsystem
          https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/raid-subsys.html
          "...  Fedora 14 uses mdraid with external metadata to access ISW /
          IMSM (Intel firmware RAID) sets.  mdraid sets are configured and
          controlled through the mdadm utility."
      
      [3] RHEL 6, Storage Administration Guide, 17.3. Linux RAID Subsystem
          https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/raid-subsys
          "mdraid also supports other metadata formats, known as external
          metadata.  Red Hat Enterprise Linux 6 uses mdraid with external
          metadata to access ISW / IMSM (Intel firmware RAID) sets.  mdraid
          sets are configured and controlled through the mdadm utility."
      
      [4] SUSE Linux Enterprise Server 12 Release Notes, 7.2.3 Driver for IMSM
          and DDF
          https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/#fate-316007
          "For IMSM and DDF RAIDs the mdadm driver is used unconditionally."
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
    • Recognise ATARAID members (#75) · aea6200d
      Mike Fleetwood authored and Curtis Gedak committed
      PATCHSET OVERVIEW
      
      A user had a Firmware / BIOS / ATARAID array of 2 devices configured as
      a RAID 0 (stripe) set.  On top of that was a GPT with the OS partitions.
      GParted displays the following errors on initial load and subsequent
      refresh:
      
              Libparted Error
          (-) Invalid argument during seek for read on /dev/sda
                                [ Retry ] [ Cancel ] [ Ignore ]
      
              Libparted Error
          (-) The backup GPT table is corrupt, but the
              primary appears OK, so that will be used.
                                    [  Ok  ] [ Cancel ]
      
      This is an Intel Software RAID array, which stores its metadata at
      the end of each member device, so the first 128 KiB stripe of the set
      is stored in the first 128 KiB of the first member device /dev/sda,
      which includes the GPT for the whole RAID 0 device.  Hence when
      libparted reads member device /dev/sda it finds a GPT describing a
      block device twice its size, and seeking to read the backup GPT from
      the end of that larger device, past the end of /dev/sda, results in
      the above errors.
      
      A more dangerous scenario occurs when using 2 devices configured in
      an Intel Software RAID 1 (mirrored) set with GPT on top.  On refresh
      GParted displays this error for both members, /dev/sda and /dev/sdb:
      
              Libparted Warning
          /!\ Not all of the space available to /dev/sda appears to be used,
              you can fix the GPT to use all of the space (an extra 9554
              blocks) or continue with the current setting?
                                                        [  Fix  ] [ Ignore ]
      
      Selecting [Fix] gets libparted to re-write the backup GPT to the end of
      the member device, overwriting the ISW metadata!  Do that twice and both
      copies of the metadata are gone!
      
      Worked example of this more dangerous mirrored set case.  Initial setup:
      
          # dmraid -s
          *** Group superset isw_caffbiaegi
          --> Subset
          name   : isw_caffbiaegi_MyMirror
          size   : 16768000
          stride : 128
          type   : mirror
          status : ok
          subsets: 0
          devs   : 2
          spares : 0
      
          # dmraid -r
          /dev/sda: isw, "isw_caffbiaegi", GROUP, ok, 16777214 sectors, data@ 0
          /dev/sdb: isw, "isw_caffbiaegi", GROUP, ok, 16777214 sectors, data@ 0
      
          # wipefs /dev/sda
          offset               type
          ---------------------------------------------
          0x200                gpt   [partition table]
          0x1fffffc00          isw_raid_member   [raid]
      
      Run GParted and click [Fix] on /dev/sda.  Now the first member has gone:
      
          # dmraid -s
          *** Group superset isw_caffbiaegi
          --> *Inconsistent* Subset
          name   : isw_caffbiaegi_MyMirror
          size   : 16768000
          stride : 128
          type   : mirror
          status : inconsistent
          subsets: 0
          devs   : 1
          spares : 0
      
          # dmraid -r
          /dev/sdb: isw, "isw_caffbiaegi", GROUP, ok, 16777214 sectors, data@ 0
      
          # wipefs /dev/sda
          offset               type
          ---------------------------------------------
          0x200                gpt   [partition table]
      
      Click [Fix] on /dev/sdb.  Now all members of the array are gone:
      
          # dmraid -s
          no raid disks
      
          # dmraid -r
          no raid disks
      
          # wipefs /dev/sdb
          offset               type
          ---------------------------------------------
          0x200                gpt   [partition table]
      
      So GParted must not run libparted partition table scanning on the member
      devices in ATARAID arrays.  Only on the array device itself.
      
      In terms of the UI, GParted must show disks which are ATARAID members
      as whole disk devices with ATARAID member content, and detect the
      array busy status to prevent active members from being overwritten
      while in use.
      
      THIS COMMIT
      
      Recognise ATARAID member devices and display them in GParted as whole
      device "ataraid" file systems.  Because they are recognised as whole
      device content ("ataraid" file systems), this alone stops GParted
      running the libparted partition table scanning and avoids the above
      errors.
      
      The list of dmraid supported formats is matched by the signatures
      recognised by blkid:
      
          $ dmraid -l
          asr     : Adaptec HostRAID ASR (0,1,10)
          ddf1    : SNIA DDF1 (0,1,4,5,linear)
          hpt37x  : Highpoint HPT37X (S,0,1,10,01)
          hpt45x  : Highpoint HPT45X (S,0,1,10)
          isw     : Intel Software RAID (0,1,5,01)
          jmicron : JMicron ATARAID (S,0,1)
          lsi     : LSI Logic MegaRAID (0,1,10)
          nvidia  : NVidia RAID (S,0,1,10,5)
          pdc     : Promise FastTrack (S,0,1,10)
          sil     : Silicon Image(tm) Medley(tm) (0,1,10)
          via     : VIA Software RAID (S,0,1,10)
          dos     : DOS partitions on SW RAIDs
      
          $ fgrep -h _raid_member util-linux/libblkid/src/superblocks/*.c
                  .name           = "adaptec_raid_member",
                  .name           = "ddf_raid_member",
                  .name           = "hpt45x_raid_member",
                  .name           = "hpt37x_raid_member",
                  .name           = "isw_raid_member",
                  .name           = "jmicron_raid_member",
                  .name           = "linux_raid_member",
                  .name           = "lsi_mega_raid_member",
                  .name           = "nvidia_raid_member",
                  .name           = "promise_fasttrack_raid_member",
                  .name           = "silicon_medley_raid_member",
                  .name           = "via_raid_member",
      
      As they are all types of Firmware / BIOS / ATARAID arrays, report all
      members as a single "ataraid" file system type.  (Except for
      "linux_raid_member" in the above blkid source listing which is Linux
      Software RAID).
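
      A minimal sketch of that mapping (is_ataraid_member() is a made-up
      name; the strings are the blkid signatures listed above, minus
      "linux_raid_member"):

          #include <string>

          // Report every firmware / BIOS / ATARAID member signature as one
          // "ataraid" type; Linux Software RAID is handled separately.
          static bool is_ataraid_member( const std::string & blkid_type )
          {
              static const char * ataraid_types[] = {
                  "adaptec_raid_member",
                  "ddf_raid_member",
                  "hpt45x_raid_member",
                  "hpt37x_raid_member",
                  "isw_raid_member",
                  "jmicron_raid_member",
                  "lsi_mega_raid_member",
                  "nvidia_raid_member",
                  "promise_fasttrack_raid_member",
                  "silicon_medley_raid_member",
                  "via_raid_member"
              };
              const unsigned int count = sizeof( ataraid_types ) /
                                         sizeof( ataraid_types[0] );
              for ( unsigned int i = 0 ; i < count ; i ++ )
                  if ( blkid_type == ataraid_types[i] )
                      return true;
              return false;
          }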
      
      Closes #75 - Errors with GPT on RAID 0 ATARAID array
  11. 01 Dec, 2019 2 commits
  12. 28 Nov, 2019 1 commit
  13. 14 Nov, 2019 4 commits
    • Add missing includes into jfs.cc · af60f91f
      Mike Fleetwood authored and Curtis Gedak committed
    • Remove unallocated space comment from HACKING file (!50) · 4b8d4be7
      Mike Fleetwood authored and Curtis Gedak committed
      The HACKING file should be hints for making changes to the code base
      and associated processes.  An overview of how GParted handled
      unallocated space was not that.  Also, now that the size of a JFS is
      accurately calculated, using JFS as an example of a file system with
      intrinsic unallocated space is no longer valid.  Therefore remove the
      comment from the HACKING file.  Instead add the original commit
      message as an extended comment to method
      calc_significant_unallocated_sectors().
      
      Closes !50 - Calculate JFS size accurately
    • Calculate mounted JFS size accurately (!50) · 2c0572e2
      Mike Fleetwood authored and Curtis Gedak committed
      With the same minimum sized 16 MiB JFS used in the previous commit, now
      mounted, GParted once again reports 1.20 MiB of unallocated space.  This
      is because the kernel JFS driver is also just reporting the size of the
      Aggregate Disk Map (dmap) as the size of the file system [1].
      
      Fix by reading the on disk JFS superblock to calculate the size of
      the file system, but query the free space from the kernel using
      statvfs().  The free space of a mounted JFS has to be queried from
      the kernel because the on disk dmap is not updated immediately, so it
      doesn't reflect recently used or freed disk space.
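
      Querying the free space of a mounted file system from the kernel is
      a plain statvfs() call; a minimal sketch (the mount point matches
      the example below):

          #include <sys/statvfs.h>
          #include <stdio.h>

          int main()
          {
              struct statvfs buf;
              // Free space in bytes = free blocks * fragment size.
              if ( statvfs( "/mnt/1", &buf ) == 0 )
                  printf( "free bytes: %llu\n",
                          (unsigned long long)buf.f_bfree * buf.f_frsize );
              return 0;
          }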
      
      For example, start with the 16 MiB JFS empty and mounted.
      
          # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
          [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1
          # df -k /mnt/1
          Filesystem     1K-blocks  Used Available Use% Mounted on
          /dev/sdb1          15152   136     15016   1% /mnt/1
      
      Write 10 MiB of data to it:
      
          # dd if=/dev/zero bs=1M count=10 of=/mnt/1/file_10M
          10+0 records in
          10+0 records out
          10485760 bytes (10 MB, 10 MiB) copied, 0.0415676 s, 252 MB/s
      
      Query the file system free space from the kernel and by reading the on
      disk dmap figure:
      
          # df -k /mnt/1
          Filesystem     1K-blocks  Used Available Use% Mounted on
          /dev/sdb1          15152 10376      4776  69% /mnt/1
          # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
          [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1
      
          # sync
          # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
          [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1
      
          # umount /mnt/1
          # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
          [2] dn_nfree:           0x000000004aa   [10] dn_agwidth:        1
      
      The kernel reports the updated usage straight away, but the on disk dmap
      record doesn't get updated even by sync, only after unmounting.
      
      This is the same fix as was previously done for EXT2/3/4 [2].
      
      [1] Linux jfs_statfs() function
          https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/jfs/super.c?h=v3.10#n142
      
      [2] 38280190
          Read file system size for mounted ext2/3/4 from superblock (#683255)
      
      Closes !50 - Calculate JFS size accurately
    • Calculate unmounted JFS size accurately (!50) · e55d10b9
      Mike Fleetwood authored and Curtis Gedak committed
      Create the smallest possible JFS (16 MiB) and GParted will report
      1.2 MiB of unallocated space.  This is because the size of the
      Aggregate Disk Map (dmap) was used as the size of the file system.
      However reading the source code of mkfs.jfs shows that it separately
      accounts for the size of the Log (Journal) and the FSCK Working
      Space.  The size of a JFS is the sum of these 3 components.
      
      Using the minimum 16 MiB JFS as an example:
      
          # jfs_debugfs /dev/sdb1
          jfs_debugfs version 1.1.15, 04-Mar-2011
      
          Aggregate Block Size: 4096
      
          > superblock
          [1] s_magic:            'JFS1'          [15] s_ait2.addr1:      0x00
          [2] s_version:          1               [16] s_ait2.addr2:      0x00000018
          [3] s_size:     0x0000000000007660           s_ait2.address:    24
          [4] s_bsize:            4096            [17] s_logdev:          0x00000000
          [5] s_l2bsize:          12              [18] s_logserial:       0x00000000
          [6] s_l2bfactor:        3               [19] s_logpxd.len:      256
          [7] s_pbsize:           512             [20] s_logpxd.addr1:    0x00
          [8] s_l2pbsize:         9               [21] s_logpxd.addr2:    0x00000f00
          [9] pad:                Not Displayed        s_logpxd.address:  3840
          [10] s_agsize:          0x00002000      [22] s_fsckpxd.len:     52
          [11] s_flag:            0x10200900      [23] s_fsckpxd.addr1:   0x00
                                  JFS_LINUX       [24] s_fsckpxd.addr2:   0x00000ecc
                  JFS_COMMIT      JFS_GROUPCOMMIT      s_fsckpxd.address: 3788
                                  JFS_INLINELOG   [25] s_time.tv_sec:     0x5dbbdfa0
                                                  [26] s_time.tv_nsec:    0x00000000
                                                  [27] s_fpack:           'small_jfs'
          [12] s_state:           0x00000000
                       FM_CLEAN
          [13] s_compress:        0
          [14] s_ait2.len:        4
      
          display_super: [m]odify or e[x]it: x
          > dmap
      
          Block allocation map control page at block 16
      
          [1] dn_mapsize:         0x00000000ecc   [9] dn_agheigth:        0
          [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1
          [3] dn_l2nbperpage:     0               [11] dn_agstart:        341
          [4] dn_numag:           1               [12] dn_agl2size:       13
          [5] dn_maxlevel:        0               [13] dn_agfree:         type 'f'
          [6] dn_maxag:           0               [14] dn_agsize:         8192
          [7] dn_agpref:          0               [15] pad:               Not Displayed
          [8] dn_aglevel:         0
          display_dbmap: [m]odify, [f]ree count, [t]ree, e[x]it > x
          > quit
      
      Values of interest:
          s_size        - Aggregate size in device (s_pbsize) blocks
          s_bsize       - Aggregate block (aka file system allocation) size in
                          bytes
          s_pbsize      - Physical (device) block size in bytes
          s_logpxd.len  - Log (Journal) size in Aggregate (s_bsize) blocks
          s_fsckpxd.len - FSCK Working Space in Aggregate (s_bsize) blocks
          dn_nfree      - Number of free (s_bsize) blocks in Aggregate
      
      Calculation:
          file system size = s_size * s_pbsize
                           + s_logpxd.len * s_bsize
                           + s_fsckpxd.len * s_bsize
                           = 30304 * 512
                           + 256 * 4096
                           + 52 * 4096
                           =  16777216
                              (Exactly 16 MiB.  The size of the partition.)
          free space = dn_nfree * s_bsize
                     = 3754 * 4096
                     = 15376384
      
      Rewrite JFS usage querying code to use this updated calculation.
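
      In code the updated calculation amounts to the following sketch,
      with the values hard-coded from the worked example above (the real
      implementation reads them from the on disk superblock and dmap):

          #include <stdint.h>
          #include <stdio.h>

          int main()
          {
              uint64_t s_size        = 0x7660;  // Aggregate size in s_pbsize blocks
              uint64_t s_bsize       = 4096;    // Aggregate block size in bytes
              uint64_t s_pbsize      = 512;     // Physical block size in bytes
              uint64_t s_logpxd_len  = 256;     // Journal size in s_bsize blocks
              uint64_t s_fsckpxd_len = 52;      // FSCK space in s_bsize blocks
              uint64_t dn_nfree      = 0xeaa;   // Free s_bsize blocks

              uint64_t fs_size = s_size * s_pbsize
                               + s_logpxd_len * s_bsize
                               + s_fsckpxd_len * s_bsize;  // = 16777216
              uint64_t fs_free = dn_nfree * s_bsize;       // = 15376384
              printf( "size=%llu free=%llu\n",
                      (unsigned long long)fs_size,
                      (unsigned long long)fs_free );
              return 0;
          }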
      
      [1] JFS Overview / How the Journaled File System cuts system restart
          times to the quick
          http://jfs.sourceforge.net/project/pub/jfs.pdf
      [2] JFS Layout / How the Journaled File systems handles the on-disk
          layout
          http://jfs.sourceforge.net/project/pub/jfslayout.pdf
      [3] mkfs.jfs source code
          http://jfs.sourceforge.net/project/pub/jfsutils-1.1.15.tar.gz
          mkfs/mkfs.c
          Selected lines from mkfs/mkfs.c
              create_aggregate(..., number_of_blocks, ..., logsize, ...)
                  number_of_blocks -= fsck_wspace_length;
                  aggr_superblock.s_size = number_of_blocks * (aggr_block_size / phys_block_size);
                  aggr_superblock.s_bsize = aggr_block_size;
                  aggr_superblock.s_pbsize = phys_block_size;
                  PXDlength(&aggr_superblock.s_logpxd, logsize);
                  PXDlength(&aggr_superblock.s_fsckpxd, fsck_wspace_length);
              main()
                  number_of_bytes = bytes_on_device;
                  number_of_blocks = number_of_bytes / agg_block_size;
                  logsize = logsize_in_bytes / aggr_block_size;
                  number_of_blocks -= logsize;
                  create_aggregate(..., number_of_blocks, ..., logsize, ...);
      
      Closes !50 - Calculate JFS size accurately
  14. 09 Nov, 2019 1 commit