Compare commits

...

88 Commits

Author SHA1 Message Date
aszlig
0a18f59532 nixpart: Update to latest master version
This only renames the "new_uuid" keyword argument to be just "uuid", as
changed in the previous nixpkgs commit on blivet.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-28 05:45:42 +01:00
aszlig
9482c1d0d9 blivet: Update patch for UUIDs to latest version
This version largely differs from the previous version in that we now
set the UUID via the "uuid" keyword argument rather than introducing a
new "new_uuid" kwarg.

We now need to reorder the uuids.patch and the ntfs-formattable.patch,
because the latter got merged into the upstream 2.1-devel branch and the
uuids.patch has been rebased against the newest HEAD of 2.1-devel.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-28 05:42:09 +01:00
aszlig
6c9a0e0324 blivet: Verify PEP8 compliance in checkPhase
This is also part of blivet's "make check", so I've included it for
completeness, almost verbatim, because blivet does not comply with a few
points of PEP8, such as line length.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-24 08:05:45 +01:00
aszlig
7b4c696352 blivet: Run pocketlint tests in checkPhase
So far we had disabled the tests while referring to the NixOS VM test
instead. However, it's desirable to run as many tests as we can, so
let's run the pocketlint tests in checkPhase instead of skipping them
altogether.

In my case this is very useful because it would have caught a few errors
during development of the UUIDs pull request:

https://github.com/rhinstaller/blivet/pull/537

But even if we're not directly developing for the upstream project, this
also catches Nix-related errors, such as references to pyanaconda
which might still exist or reappear after an update.

I'm using a list to accumulate find arguments because I wanted to avoid
endless repetitions of -o -path xyz -prune.
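The accumulation described above can be sketched in Python (the function name and excluded paths are illustrative, not taken from the actual derivation, which does this in shell):

```python
def build_find_args(exclude_paths):
    """Build an argument list for find(1), pruning each excluded path,
    instead of repeating "-o -path xyz -prune" by hand."""
    args = ["find", "."]
    for path in exclude_paths:
        # Each excluded path contributes one prune clause plus an -o
        # so the next clause (or the final action) can follow.
        args += ["-path", "./" + path, "-prune", "-o"]
    return args + ["-name", "*.py", "-print"]
```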

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-24 07:36:38 +01:00
aszlig
c44441698d blivet: Update patch for setting UUIDs
This just contains one additional commit which fixes various pylint
errors.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-24 07:33:25 +01:00
aszlig
69149f3122 blivet: Remove all imports of pyanaconda
We already had stubs for some pyanaconda imports, but some
functionality like enable_installer_mode() inherently depends on it, so
let's remove enable_installer_mode().

Another occurrence of a pyanaconda import is in storage_initialize():

  from pyanaconda.flags import flags as anaconda_flags
  flags.update_from_anaconda_flags(anaconda_flags)

This is an installer-specific function which should also be quite tied
to pyanaconda, but instead of removing this function altogether, we just
remove the import, because it only appends certain flags from the
pyanaconda module.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-24 07:21:26 +01:00
aszlig
528c6ac8ea nixos/tests/storage/matchers: Remove labels
We no longer need to use labels now that every file system gets
assigned a UUID in the latest nixpart.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-15 03:59:13 +01:00
aszlig
74a45d7142 nixpart: Update to latest master version
Switches to using a dictionary for devspecs and supports setting UUIDs
for every device specification (currently only sets RFC4122-style
UUIDs).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-15 03:59:10 +01:00
aszlig
e2a24488b9 blivet: Add patch to set NTFS formattable
Even though the ntfs3g utilities are available inside our test
environment, the format didn't get advertised as formattable because the
_formattable attribute wasn't set to True.

Submitted upstream at:

https://github.com/rhinstaller/blivet/pull/536

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-15 03:48:31 +01:00
aszlig
135f831370 nixos/tests/blivet: Add mtools and ntfs3g
These tools are needed in order to run tests for NTFS and for setting
the serial of a FAT file system after it has been created.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-15 03:46:40 +01:00
aszlig
3ed76021f8 blivet: Add patch for setting UUIDs
I'm heading for a hybrid approach (using UUIDs and partition layout
holes) in nixpart for achieving storage tree determinism, so we need to
have support for setting UUIDs. Blivet doesn't yet support this, so
I've implemented it.

Upstream pull request:

https://github.com/rhinstaller/blivet/pull/537

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-15 00:33:15 +01:00
aszlig
c97a18a64a nixos/storage: Don't put whole config in devspec
While it may be handy to put the whole configuration of the
corresponding device specification into the values of the options
referring to them, this unfortunately blows up the size of the JSON
output we pass to nixpart.

This is unnecessary because we're only interested in the UUID.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-15 00:01:38 +01:00
aszlig
a8952b1ae4 nixos/storage: Integrate storage UUIDs in fs/swaps
This implements the deterministically generated UUIDs to be used while
mounting file systems, but only if there is no label set already. So the
user still has a way to set labels (which are also applied by nixpart)
and use them accordingly, even though the UUIDs should be more
distinctive.
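The precedence described here (a user-set label wins, the generated UUID is the fallback) can be sketched as follows; the field names are hypothetical, not the module's actual option names:

```python
def fs_device(devspec):
    """Pick the device path for mounting: prefer a user-set label,
    fall back to the deterministically generated UUID."""
    if devspec.get("label"):
        return "/dev/disk/by-label/" + devspec["label"]
    return "/dev/disk/by-uuid/" + devspec["uuid"]
```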

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-06 20:04:16 +01:00
aszlig
2b0095599c nixos/storage/lib: Propagate devspec's config
This is handy if we want to look up configuration options for a specific
device specification, so with only the internal representation of a
devspec we can simply say devspec.uuid to get a generated UUID.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-06 19:40:08 +01:00
aszlig
8e861d2eeb nixos/storage: Make devspec an attrset internally
So far we passed the device specification as-is to nixpart, but from
within the module system it's quite tricky to validate or look up such a
string, because we need to parse it every time we need to look up a
configuration value in "storage.*".

Now a device specification is an attribute set consisting of a `name'
and a `type' attribute. We also have a new applyTypeContainer attribute
we need to pass to mkDeviceSpecOption so that we can properly convert
things such as "listOf devspecType" into a list of valid internal
representations of device specifications.
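A rough Python equivalent of that conversion, for illustration only (the real code is Nix inside the module system):

```python
import re

# "<type>.<name>": type is a plain word, name is everything after the
# first dot (names themselves may contain dots).
DEVSPEC_RE = re.compile(r"^([a-z]+)\.(.+)$")

def parse_devspec(spec):
    """Turn a "<type>.<name>" string into the internal attrset-style
    representation with `name' and `type' attributes."""
    m = DEVSPEC_RE.match(spec)
    if m is None:
        raise ValueError("invalid device specification: " + repr(spec))
    return {"type": m.group(1), "name": m.group(2)}
```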

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-06 18:13:07 +01:00
aszlig
17d464b9f7 nixos/storage: Switch to a new mkDeviceSpecOption
Having just a single type for a device specification doesn't work out
well if we want to have an apply function, which we do want, because it
makes more sense if we want to resolve such a device specification
without using builtins.match all over the place.

It also improves the readability of the option descriptions a lot,
because every such option now not only has a description of what a
device specification is but also lists the valid types for the device
specification.

Another advantage is that instead of something like the following:

Type: list of device specification of <type>.<name>s

The type description is now just:

Type: list of device specifications

We're also heading for more consistency, speaking of "device
specification", or "devspec" for short. Say we have something like
"storage.foo.bar", "foo.bar" is the "device specification" and "foo" is
the "device specification type" and "bar" is the "device specification
name".

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-06 17:27:27 +01:00
aszlig
c4698167e1 nixos/storage: Generate UUID for each device spec
We want to have deterministic UUIDs for every device specification in
order to avoid the need to manually set labels all over the place.

Of course, we could internally set labels instead of precomputing UUIDs,
but labels have different length restrictions for every file system (for
example XFS has a maximum of 12 bytes, for ext4 it's 16 bytes). In
addition to that, doing so would take away the ability for people to
set their own labels at runtime.

The UUIDs generated here are based on version 5:

https://tools.ietf.org/html/rfc4122#section-4.1.3

Our variant deviates from this a bit in that we use string concatenation
to build up the input for the SHA1 hash instead of binary data. The
results however are pretty much the same, and in our case the most
important aspect is determinism rather than having a truly unique value
across the whole planet.
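A minimal Python sketch of such a version-5-style UUID, hashing the plain string concatenation of namespace and name (as opposed to the binary encoding RFC 4122 prescribes); this mirrors the idea, not the module's exact implementation:

```python
import hashlib
import uuid

def det_uuid(namespace, name):
    """Deterministic version-5-style UUID from SHA1 over the
    concatenated strings; only the version/variant bits follow the RFC."""
    digest = bytearray(hashlib.sha1((namespace + name).encode()).digest()[:16])
    digest[6] = (digest[6] & 0x0F) | 0x50  # set version to 5
    digest[8] = (digest[8] & 0x3F) | 0x80  # set RFC 4122 variant
    return str(uuid.UUID(bytes=bytes(digest)))
```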

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-06 16:10:46 +01:00
aszlig
c0c04ca1d3 nixos/tests/storage: Pass system to storage eval
This fixes the following test failure on i686-linux:

https://headcounter.org/hydra/build/1562265/nixlog/13/raw

The reason we get an exec format error here is that we evaluate the
storage spec using the host system while the rest is evaluated using the
system attribute from the test's args.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 19:55:37 +01:00
aszlig
26c6ce6d52 nixpart: Update to latest master version
Adds support for mounting of file systems, which means that now the
"btrfs", "ext" and "matchers" storage tests are succeeding.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 10:02:19 +01:00
aszlig
5ccda7bde6 nixos/tests/storage/matchers: Assign labels
We're basically destroying the initial information that's relevant for
the matchers to actually match the corresponding devices, so we need
those labels to find the newly created dummy ext4 file systems again.

This also makes the definitions for fileSystems less redundant, because
we now generate them using listToAttrs and genList.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 09:31:19 +01:00
aszlig
1531442588 nixos/tests/storage: Don't always check /mnt
So far we checked whether /mnt is a valid mountpoint during invocation
of remountAndCheck. Now that we have the "matchers" sub test, we no
longer have anything mounted directly in /mnt, so it doesn't make sense
to check it unconditionally.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 09:30:49 +01:00
aszlig
acc1c0b3e4 nixos/tests/storage: Show stdout of nixpart -m
We want to know the messages printed to stdout regardless of whether
nixpart -m has failed or not, primarily because it makes debugging
easier (just adding "print(something)" should suffice).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 09:24:42 +01:00
aszlig
9dca14e438 nixos/storage/disk: Fix wording of allowIncomplete
Just remove the redundant "array", because the A in RAID already stands
for "array".

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 03:26:12 +01:00
aszlig
96bb3b1ae7 nixos/tests/storage: Add subtest for matchers
We want to make sure that the options defined in disk.${name}.match are
working. So we define a bunch of disks with all currently available
matching methods and check afterwards if the devices get mounted.

Of course the "check afterwards" part is solely theoretical because this
is not yet supported in nixpart.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 03:05:09 +01:00
aszlig
02765b407f nixos/storage/disk: Allow only one match method
Having multiple matchers is a bit tricky if we don't know how they
should be combined. For example if we have a match on a label and a
device name and both produce valid matches, which one should we choose?

So let's restrict the use of device matchers to allow only one method
right now. If we later figure out a better way how to combine these
matchers, we can still lift this restriction easily.
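The restriction could be enforced roughly like this (the attribute names are illustrative):

```python
def check_single_matcher(match):
    """Allow at most one defined matching method per disk; return the
    name of the method in use, or None if no matcher is set."""
    defined = sorted(k for k, v in match.items() if v is not None)
    if len(defined) > 1:
        raise ValueError("only one match method allowed, got: "
                         + ", ".join(defined))
    return defined[0] if defined else None
```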

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-05 00:32:19 +01:00
aszlig
4fbea84433 nixpart: Update to latest master version
Adds support for JSON and drops XML support, thus bringing nixpart on
par with our current storage module.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 10:40:20 +01:00
aszlig
cd8a05c291 nixos/tests/storage: Switch to using --json
The latest version of nixpart no longer needs XML, nor do we need to
gather the various option definitions from within the NixOS test; we
simply use config.system.build.nixpart-spec, which is the resulting JSON
file we need to pass to nixpart.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 05:52:12 +01:00
aszlig
5a3b198da8 blivet: Simplify the no-hawkey.patch
Hawkey is normally used to get package versions, and we had used
nix-store -qR to query the full dependency graph.

However, for just checking the versions of programs we have in $PATH it
should suffice to just traverse $PATH and run a regex on the package
name and version without walking the whole dependency graph.

I've used the latter only because we might have wrapped programs, but
after running the tests it turns out that we really don't need such an
overhead.

Also, it's quite a nuisance if we want to run tests in an isolated
environment where we don't have access to the store database (which
nix-store -q is trying to open).
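To illustrate the simplified approach: Nix store paths have the shape /nix/store/&lt;hash&gt;-&lt;name&gt;-&lt;version&gt;, so a regex over the resolved entries of $PATH is enough to recover name and version. This is a sketch of the idea, not the actual patch:

```python
import re

# Store paths look like /nix/store/<32-char hash>-<name>-<version>,
# where the version starts with a digit.
STORE_RE = re.compile(r"^/nix/store/[0-9a-z]{32}-(.+)-([0-9][^-/]*)$")

def parse_store_path(path):
    """Return (name, version) for a store path, or None if the path
    doesn't match the expected pattern."""
    m = STORE_RE.match(path)
    return (m.group(1), m.group(2)) if m else None
```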

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 05:47:40 +01:00
aszlig
109f782cdf nixos/storage: Check assertions when building JSON
We want the evaluation to fail if one or more assertions arise in the
storage module. However, we don't want to raise unrelated assertions,
because those are clearly the job of system.build.toplevel and are only
relevant for a full system.

However, we still propagate the assertions down to config.assertions, so
that whenever a full system is built, the storage assertions get thrown
as well (if there are any).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 02:50:09 +01:00
aszlig
dbfb050cbd nixos/storage: Provide system.build.nixpart-spec
This is to make it easier to evaluate the config with only the storage
options returned as JSON, along with all the stuff already built like
for example match.script options for disks.

If we'd only nix-instantiate the config we'd only get the .drv for the
script which we'd need to realize in another step.

In addition to that, it also makes it easier to do more extensive
validation on the storage options and we also have a way to even return
an entirely different JSON structure from what we have set in the options,
even though we don't intend to do that.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 02:41:53 +01:00
aszlig
0fb9661969 nixos/storage/disk: Add apply function to script
Having this as a ready-to-instantiate derivation is better than doing
it within nixpart, because we can make sure that all dependencies we
need for that script are in place.

Note that I'm using ${pkgs.bash}/bin/bash instead of
${pkgs.stdenv.shell}, so that it's guaranteed that the documentation is
right even if stdenv.shell should change someday.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 02:19:52 +01:00
aszlig
d5bcbcd874 nixos/storage/disk: Make .match a submodule
This has the advantage of making it easier to set a default value and
also makes documentation a bit more "in place", so that we don't need to
write documentation about matches in storage.disk but in
storage.disk.*.match instead.

The default value is now match.name = diskName, where diskName is the
name defined by storage.disk.NAME.

Of course, we still have the limitation that we can't set multiple
matchers of the same type, but we can implement that easily by adding
another option "matchers" or "matches" that takes a list of matcher
submodules.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 01:54:51 +01:00
aszlig
04dad9e5d2 nixos/storage: Add documentation to storage.disk
We only had short descriptions for the options defined via deviceTypes
and thus we now also have a doc attribute that specifies a longer
description of the option.

This is needed in order for the user to know what's the default matching
method in case no particular matcher is specified.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 00:46:44 +01:00
aszlig
0b656c0c55 nixos/storage: Add options for matching disks
This is preliminary because this needs to be properly type-constrained
and with a fallback to the device name.

Also I'm not yet sure whether we should create match.script via the
NixOS module system or within nixpart.

But before deciding on this, the current implementation at least serves
the purpose of documentation. Even though quite a bit is subject to
change, the documentation on the individual matchers will still largely
apply.

However, what is going to change is how we're going to handle the
"match" option. Do we want it to be an additional submodule or do we
want it to be like it is now?

With the current state however it's a bit ugly, because on the one hand
we allow multiple matchers, but on the other hand we don't allow the
same matcher to apply more than once.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-04 00:46:39 +01:00
aszlig
cf912a178a storage: Move types and sizeUnits into new lib.nix
This should leave the default.nix with only option declarations and
without our custom types. The move of sizeUnits to lib.nix is currently
a bit of a workaround, but we're going to untangle that later.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-03 00:09:55 +01:00
aszlig
567ce6863b nixos/storage: Move module into its own directory
We're going to break down various parts of the module into smaller
files, so that the main default.nix doesn't get cluttered up by
implementation details.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-02 23:52:56 +01:00
aszlig
ddc083d872 blivet: Include patch fixing tests
So it turned out that the test failures we get with tmpfs are actually
an upstream problem, because upstream doesn't seem to run tests that
need to be run as root. Here is a paste from an earlier run posted by
@vojtechtrefny:

https://paste.fedoraproject.org/518572/33694971/

The patch I'm using here is from @vojtechtrefny as well (pull request
rhinstaller/blivet#532) and should not only fix the tmpfs tests but a
few other issues.

After running the test suite with this patch applied the tests are now
succeeding:

https://headcounter.org/hydra/log/ncy4wdpnhzww5yfqv9p8l9cl97dp3cac-vm-test-run-blivet.drv

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-02 17:19:22 +01:00
aszlig
1c746cc039 nixos/tests/blivet: Add support for HFS+
This is needed for the macefi and HFS tests, and I'm adding it solely
for the sake of completeness so that we have the largest test coverage
possible.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-02 17:02:24 +01:00
aszlig
c32d4097fe nixos/doc: Add stub chapter for storage config
Currently this only contains a small warning about the options being
experimental; it is going to be written as we go along refactoring the
storage module.

Note that I've put it to be included *before* file-systems.xml because
the storage configuration also includes setting various fileSystems.*
options.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2017-01-02 05:28:51 +01:00
aszlig
6217686df5 nixos/storage: Gracefully handle storage.btrfs
This is an exception to the container types (isContainer in
deviceTypes) in that we *only* allow fsType to be "btrfs" for btrfs
subvolumes.

If this is set to something other than "btrfs", throw an assertion error
printing the conflicting options.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-31 13:51:03 +01:00
aszlig
0fdc325fe6 nixos/storage: Validate device specification names
Whenever a device specification is cross-referenced we need to check
whether a definition for it exists. So for example if we have:

storage.mdraid.raid.devices = [ "partition.raid1" "partition.raid2" ];

We need to make sure here that storage.partition.raid1 and
storage.partition.raid2 are actually defined.

Of course we could check this within nixpart as well, but we want to
avoid such errors at run time.
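In Python terms, the check amounts to something like this, with the storage options modeled as a plain nested dict (purely illustrative):

```python
def check_devspec_refs(storage, refs):
    """Ensure every referenced "<type>.<name>" devspec is defined in
    the storage attrset; raise on the first dangling reference."""
    for ref in refs:
        dtype, sep, name = ref.partition(".")
        if not sep or name not in storage.get(dtype, {}):
            raise ValueError("undefined device specification: " + ref)
```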

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-31 12:25:49 +01:00
aszlig
00e5ecf968 nixos/storage: Add sizes to sizeUnit descriptions
These are the sizes relative to the corresponding smaller units.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-31 11:42:30 +01:00
aszlig
cd71d31fc2 nixos: Move {fileSystems,swapDevices}.storage
I initially had these options in the storage module before actually
adding them to <nixpkgs>. Now it's time to put them back into the
storage module so that we have everything that's related to the module
in one place, so that we can do even more comprehensive type checking.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-31 11:42:26 +01:00
aszlig
25fd47c167 nixos/storage: Flesh out checking of device specs
Every device specification is in the form "<type>.<name>" and so far the
type for referencing a specific device has been a plain types.str.

Now we're not only checking whether the device specification is a string
but also whether its syntax is correct and the type actually exists and
is valid for a particular option.

We now have a deviceTypes attribute set which is our main definition for
all available device specifications and it also categorizes them with
attributes like "resizable" or "orderable" which add the corresponding
options to the option set of the device specification submodule.

What's still missing are assertions checking whether the name
references a device which actually has been defined.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-31 11:42:23 +01:00
aszlig
51f85bd251 nixos/storage: Set clear if initlabel is true
This just implements the functionality that's already documented in the
description of initlabel.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-30 17:05:04 +01:00
aszlig
d39dd4039d nixpart: Update to latest master version
This incorporates the changes revolving around the size option.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 15:26:24 +01:00
aszlig
372fa21422 nixos/tests/storage: Use MiB for test sizes
The default size unit that's printed by the blivet device tree
representation is MiB, GiB and so on. This makes it more obvious whether
the correct size was used for partitioning without the need to convert
between mebibytes and megabytes.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 15:13:13 +01:00
aszlig
8c9d3192f0 nixos/storage: Use bytes for plain int sizes
It makes more sense to default to plain bytes, the lowest unit, which
also follows the principle of least astonishment: people who didn't
read the description of the option would usually assume that the amount
is in bytes rather than some arbitrary unit.

However, in terms of specifying sizes for partitioning, MiB or MB would
make more sense because it's highly unlikely that people want to have a
partition that's only a few bytes large.

Nevertheless, having MiB vs. MB is probably also confusing, because
it's not clear whether people would assume the default is based on
units of 1024 or units of 1000.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 15:08:04 +01:00
aszlig
8ec55bf596 nixos/tests/storage/ext: Fix check of boot sector
This was a typo I made in the first implementation; we really want to
check for the existence of an MBR on /dev/vdb instead of /dev/vdb4.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 14:44:35 +01:00
aszlig
37ca379834 nixos/tests/storage: Always show stdout of nixpart
Right now we have a print() at the end of the realize() function within
nixpart, which is going to print the device tree to stdout.

While I could print it to stderr instead, it nevertheless makes sense to
always show all the results from nixpart, regardless of whether it has
failed or not.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 14:42:12 +01:00
aszlig
bdaf7adcbc nixos/storage: Don't use commonOptions for btrfs
BTRFS volumes span a range of physical devices, and thus the
size option really doesn't apply here. Neither does it make sense to
have ordering.

In the future however we might want to allow setting a quota for a
particular volume.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 14:39:56 +01:00
aszlig
a5fd32ed75 nixos/tests/storage: Fix size/grow definitions
Use the new way to specify sizes as implemented in the previous commit.

Right now we only specify megabytes in the tests, so the tests do not
serve as very good examples of different size specifications, which we
need to change soon.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 14:34:43 +01:00
aszlig
b73449b002 nixos/storage: Refactor size/grow options
The grow option now is no longer necessary, because the same effect can
be achieved by setting size to "fill". This also means that setting the
size option is now mandatory, thus it doesn't have a default value.

Instead of allowing a string for specifying size units we now use
attribute sets to do so, for example:

storage.partition.foo.size.mb = 123;

This would result in the "foo" partition being created with a size of
123 MB.

Of course it's possible to specify several units, for example:

storage.partition.foo.size = { mb = 123; kb = 456; b = 789; };

Now the type checking is also improved, so it actually shows more
information about which value is incorrectly set and why.
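Interpreting such a size attrset boils down to summing unit multiples, roughly as below; the unit table and the "fill" handling are assumptions based on this and the surrounding commit messages, not nixpart's actual code:

```python
# Hypothetical unit table; mirrors the attrset style from the examples.
SIZE_UNITS = {"b": 1, "kb": 1000, "mb": 1000 ** 2, "gb": 1000 ** 3,
              "kib": 1024, "mib": 1024 ** 2, "gib": 1024 ** 3}

def size_to_bytes(size):
    """Convert a size attrset like {"mb": 123, "kb": 456} to bytes.
    "fill" (which replaced the old grow option) yields None as a
    sentinel for "use the remaining space"."""
    if size == "fill":
        return None
    return sum(SIZE_UNITS[unit] * amount for unit, amount in size.items())
```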

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-28 13:12:08 +01:00
aszlig
81152a51a4 nixpart: Update to latest master version
This is mainly for testing purposes during the WIP branch, but it should
actually do partitioning for some of the NixOS VM tests (particularly
the .btrfs test).

I've tagged this specifically as unstable-1.0.0 to make sure no one is
seriously going to use it yet (except for playing around of course).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 20:37:48 +01:00
aszlig
fa05131461 nixos/tests/storage: Rename --from-xml to --xml
I've changed this in aszlig/nixpart@7529c47b05.

So let's fix it here :-)

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 20:20:07 +01:00
aszlig
850466140c nixos/tests/storage: Fix exporting storage.xml
Using copyFileFromHost() doesn't work if the file contains single
quotes, because they're not escaped properly.

So let's move to a more robust way to provide storage.xml to the
guests (via environment.etc). Apart from that escaping issue, we really
don't need anything like copyFileFromHost() anymore, because every
subtest now resides in its own derivation.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 20:16:21 +01:00
aszlig
d6f428c0d5 nixos/storage: Add defaults for clear/initlabel
I haven't yet stumbled on this because I have always set these options
within test configurations.

So this now allows setting empty disk options, which is fine (after all,
we just need to reference them using the storage option in fileSystems
and swapDevices).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 17:49:37 +01:00
aszlig
85e72e3abd nixos/tests/storage: Split into sub-derivations
Running a specific test case is a bit icky if you need to comment out
the parts you don't want to run or wait for tests to succeed that you're
not even interested in.

This splits the subtests into their own derivations that can simply be
referenced using -A of nix-build/nix-instantiate.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 13:39:07 +01:00
aszlig
e17bb6195d nixpart: Move out of python-packages.nix
The reason it was in python-packages.nix is that we needed a way to
run this with different Python versions, and so that the project's API
could be used by another program or library.

Neither is intended so far, and even if we go that route, we can still
move it back into python-packages.nix.

This now should make it easier to override the arguments of the package
and also should be easier to install inside a user env via "nix-env -iA
nixpart" instead of "nix-env -iA python3Packages.nixpart".

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 12:58:12 +01:00
aszlig
ccbee460da nixos/tests/blivet: Fix typo in comment
It's the volume_key _binary_ not the volume_key "something" :-)

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 09:39:11 +01:00
aszlig
e919a460a7 blivet: 0.67 -> 2.1.7
This is basically a bump from the stone age to the current version, so
the changelog would be a bit long to summarize here, hence here is the
URL:

https://github.com/rhinstaller/blivet/blob/blivet-2.1.7/python-blivet.spec#L80-L1278

A few of the direct dependencies of blivet are now direct dependencies
of libblockdev, which itself is a replacement for pyblock written in C
and uses gobject-introspection to resolve the C symbols.

We also patch out hawkey and replace it with a wrapper around nix-store,
because on NixOS we can't use libsolv to resolve dependencies.

Right now, I'm also disabling the EDD tests, because I think they
shouldn't work within our VM test environments, but I need to dig a bit
more into that.

In the end we now still have 12 failing test cases which we need to
resolve:

https://headcounter.org/hydra/log/1rbpjrrjvc95ldcmmrqbangh5ssizygr-vm-test-run-blivet.drv

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-27 09:19:12 +01:00
aszlig
108ba655a2 nixos/tests/blivet: Refactor test runner
First of all, this gets rid of all the copying of the test sources by
providing them via a separate output directly from the corresponding
packages.

This also simplifies the way we retrieve environment variables needed
for running the tests. Previously we used one derivation for every
environment variable, the latter being defined in an attrset so we can
make customizations.

We no longer need these customizations, especially because libblockdev
and blivet are both rhinstaller projects and pretty much have the same
testing setup.

So now we gather these variables in one derivation and also do not fetch
LD_LIBRARY_PATH anymore, because all of the library path references are
built into the corresponding libraries.

I'm also no longer using "with pkgs.lib;" for the whole expression to
make sure we can catch eval errors very early on (my Vim config does
"nix-instantiate --parse" when writing the file).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-26 18:21:32 +01:00
aszlig
d99ceaeb9e python/pyudev: 0.16.1 -> 0.21.0
Upstream changes are a bit longer, see:

https://github.com/pyudev/pyudev/blob/v0.21.0/CHANGES.rst

While doCheck is implicitly true, the tests aren't actually hooked into
setup.py but need to be run via py.test.

I'm deliberately not running them, because pyudev's tests assume they're on
a full-featured (GNU/)Linux system and thus the majority of the tests
fail.

Tested against Python 2.7, 3.4, 3.5 and 3.6.

Note that Python 3.3 doesn't work because the package requires the enum
module, which is only available since Python 3.4, so I've disabled
support for Python 3.3 for pyudev.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-25 06:33:38 +01:00
aszlig
8fcd2ed526 nixos/tests/blivet: Add libblockdev subtest
This includes a lot of cruft that has accumulated during my previous
tries to get the tests working. After several tries (several revisions
of libblockdev and blivet) now the libblockdev tests succeed so we can
finally move over to get blivet working.

Right now the blivet subtest is commented out entirely because it will
fail to evaluate. First we need to refactor blivet and its tests and
then we need to refactor the whole NixOS test with both subtests so
they're as DRY as possible.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-25 05:34:59 +01:00
aszlig
96f98dd47d parted: Add patch to correctly get sector size
Once again a patch from Fedora:

http://pkgs.fedoraproject.org/cgit/rpms/parted.git/tree/0031-Use-BLKSSZGET-to-get-device-sector-size-in-_device_p.patch

Submitted upstream at:

http://lists.alioth.debian.org/pipermail/parted-devel/2016-March/004817.html

This fixes the remaining failing tests for libblockdev.

Tested by building against i686-linux and x86_64-linux.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-25 05:27:21 +01:00
aszlig
f26b8d8f70 libblockdev: Skip tests using fake paths/utils
These tests are irrelevant on Nix(OS) because we ship libblockdev with
all the binary paths directly built in (bin-paths.patch and "binPaths"
attribute), so there is no way to actually fake these utilities except
by rebuilding the package.

Apart from that the tests try to recompile libblockdev on-the-fly to
check whether plugin loading works correctly. This is also a non-issue
with Nix, because *all* plugins are always available.

So the tests.patch skips all of these tests involving these non-issues.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-25 04:58:16 +01:00
aszlig
74adc1ee5f libblockdev: Use lvm2 with thin provisioning
While it probably doesn't make sense to include lvm2 with thin
provisioning enabled in the initrd, it certainly makes sense for
libblockdev and in turn for nixpart.

It remains to be seen whether we'd need to add this to a generated NixOS
configuration once someone actually wants to use it with rootfs volumes,
but for now this is enough to let libblockdev's LVM tests pass.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-25 04:52:28 +01:00
aszlig
3e79944c9f lvm2: Add an option to enable thin provisioning
This is going to be used for libblockdev. I'm a bit hesitant to enable
this by default though, to make sure we don't unnecessarily increase the
closure size of lvm2 (especially if it comes to the initrd, this could
be quite dangerous).

Tested against the tests.installer.lvm NixOS VM test even though it
shouldn't affect the result.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-25 02:38:35 +01:00
aszlig
fc76501a66 lvm2: Set a default fallback profile dir
For example the following fails on NixOS:

lvcreate --profile thin-performance ...

It tries to look for a thin-performance.profile file in
/etc/lvm/profile, which by default doesn't exist.

So in order to make those profiles work we now set a default profile
directory which is used if lvm doesn't have a configuration file.

Tested against the tests.installer.lvm NixOS VM test.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @7c6f434c
2016-12-25 01:52:54 +01:00
aszlig
02dbe226ee lvm2: Clean up package expression
First of all the purity.patch is no longer referenced, so instead of
keeping it rotting around in the source tree, let's remove it. We can
still bring it back if we really need it.

Next, let's move the version attribute to the derivation attributes, because
this is what we adopted as a standard for other packages. Plus I
couldn't find any package within the source tree where this is
referenced.

The systemd generator and unit files are now installed via the make
files provided by the upstream package. This also means that
"blk_availability_systemd_red_hat.service" is now named
"blk-availability.service" and resides in $out/lib/systemd/system
instead of $out/etc/systemd/system. Again, I haven't found anything that
references either of these paths.

We now also install the standard lvm config files in $out/etc/lvm,
although lvm does not search these paths by default. We might want to
fix this later.

Tested against the tests.installer.lvm NixOS VM test.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @7c6f434c
2016-12-25 00:16:26 +01:00
aszlig
5762513836 lvm2: 2.02.140 -> 2.02.168
The main reason I'm updating this is that the libblockdev tests
assume that lvm has proper default cache backend support, which was
only added in recent versions.

Upstream changelog can be found at:

https://git.fedorahosted.org/cgit/lvm2.git/tree/WHATS_NEW?h=v2_02_168

Tested against the NixOS installer test (tests.installer.lvm).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @7c6f434c
2016-12-24 22:38:20 +01:00
aszlig
633f8c3f44 libblockdev: Propagate six, pygobject3 and GI
The six library is a dependency of libblockdev, so add it to the
propagatedBuildInputs.

In addition, pygobject3 and gobjectIntrospection need to be propagated
as well if we want to use this library from within Python, which is its
primary use-case.

If we don't want this propagation, we could add a flag "withPython" or
something similar so that the Python-specific parts are bound to a
condition. Right now however, the only package which is going to use
libblockdev is blivet.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 21:20:22 +01:00
aszlig
4d35d75fee parted: Add patch to fix FAT16 resize
Patch is originally from:

https://bug735669.bugzilla-attachments.gnome.org/attachment.cgi?id=289405

It's part of the following bug report:

https://bugzilla.gnome.org/show_bug.cgi?id=735669

I stumbled on this while running the libblockdev tests. Unfortunately we
haven't seen a new release of parted yet, so I'll add this patch for
now.

Tested by building against i686-linux and x86_64-linux.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 21:16:57 +01:00
aszlig
b3b03e57b1 libblockdev: init at 2.1
Needed for the current version of blivet.

This library does a lot of calls to external binaries, so we need to
patch in a lot of paths (bin-paths.patch).

The checkPhase currently only checks whether the paths to these binaries
are correct but doesn't run the real tests, which require root
permissions.

We also depend only on Python 3.x, because Python 2.x support seems
broken at the moment and it really doesn't make sense to support it via
patching on our side while it's eventually becoming obsolete.

Other than the default output there is also a "tests" output which is
going to be used by the upcoming NixOS VM test to run the libblockdev
tests within a VM.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 21:16:54 +01:00
aszlig
d910d2932a volume_key: init at 0.3.9
One of the requirements of libblockdev.

I'm using "python" without specifying an explicit Python version, so
that it's easy to override the Python version via something like this:

  volume_key.override { inherit (python3Packages) python; }

Tested by building against i686-linux and x86_64-linux with Python 2.7
and Python 3.5.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 17:17:44 +01:00
aszlig
63c521dfc0 parted: Add pkgconfig file for libparted-fs-resize
The patch is from Fedora:

http://pkgs.fedoraproject.org/cgit/rpms/parted.git/tree/0025-Add-libparted-fs-resize.pc.patch

It adds the pkgconfig file via configure.ac and Makefile.am, so we need
to use autoreconfHook and pkgconfig as well. The latter is needed so
that PKG_CHECK_MODULES macros are properly replaced in the resulting
configure script.

Built with success on i686-linux and x86_64-linux.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 15:26:32 +01:00
aszlig
f59b1f6dd0 libbytesize: init at 0.8
This is one of libblockdev's implicit dependencies.

In order to successfully build the docs, we needed to pass
XML_CATALOG_FILES so that xsltproc (called by gtkdoc-mkhtml) is able to
find them.

The package only works with Python 3, so I'm explicitly depending on
python3Packages.

Tested by building against i686-linux and x86_64-linux, which is why I
set platforms to only Linux (don't have anything else to test on).

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 13:49:08 +01:00
aszlig
76ee7db7db python/pocketlint: init at 0.13
This is a package that is required for running common tests against
projects from RedHat/Fedora, for example libbytesize.

I'm removing nix_run_setup.py before the check phase because that file
triggers a few pylint warnings and in turn causes the tests to fail.

Note that I don't use rm -f here, to make sure the build fails once we
no longer need the nix_run_setup.py file and we can remove the reference
from the pocketlint expression as well.

Tested by building against Python 3.3, 3.4, 3.5 and 3.6.

Building against Python 3.6 failed because of a test failure in pylint
itself, so it's only a transient failure.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 13:32:55 +01:00
aszlig
e4a5fa0c53 mpathconf: init at 0.4.9.83
This is basically multipath-tools patched into oblivion so that only a
"mpathconf" script remains.

Although the patches touch a lot more than just mpathconf and its
manpage, we use filterdiff to keep only the hunks that apply to either
mpathconf or mpathconf.[0-9]*, so that we don't need the original
source of multipath-tools.

The reason I'm packaging this is that it's needed for libblockdev to
configure multipath devices.

As the description says, it's for editing /etc/multipath.conf, so in
the long run let's see whether we actually need this or whether we can
patch libblockdev so it doesn't rely on a global multipath.conf at
runtime.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:52:03 +01:00
aszlig
ecb98b9d4a libselinux: Make it easier to build against Python 3
If we now use something like this:

pkgs.libselinux.override {
  enablePython = true;
  python = pkgs.python3;
}

... we will no longer get the following error:

selinuxswig_wrap.c:143:21: fatal error: Python.h: No such file or directory

This happens because the Makefile is using the Python interpreter to get
the right library dir for Python:

PYTHONLIBDIR ?= $(shell $(PYTHON) -c "
  from distutils.sysconfig import *;
  print(get_python_lib(1))
")
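
The same query can be reproduced outside the Makefile. A minimal sketch using the stdlib sysconfig module (the modern replacement for distutils.sysconfig, which newer Python releases no longer ship):

```python
# Rough equivalent of the Makefile's PYTHONLIBDIR query, using sysconfig
# instead of distutils.sysconfig (the latter is removed in Python >= 3.12).
import sysconfig

libdir = sysconfig.get_path("platlib")
print(libdir)
```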

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:51:59 +01:00
aszlig
d4f6c45f9d tests/storage: Don't import <nixpkgs>
Let's use relative paths instead, because the version in <nixpkgs>
isn't necessarily the same as the current nixpkgs tree.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:51:53 +01:00
aszlig
d98b8468ea nixos/tests/storage: Pass storage config as XML
We're internally calling nix-instantiate to get the required options
from the configuration, so let's pass it through an option that skips
this step.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:51:43 +01:00
aszlig
4a1ac11910 nixos/test/storage: Fix reference to kickstart
We have already renamed the function to nixpart(), so let's make sure
we rename the reference accordingly.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:51:30 +01:00
aszlig
c4b8a02e41 nixos/tests: Enable "partition" as "storage"
Renames the test to more closely match the NixOS module attribute and
puts it into release.nix. Of course, those tests still fail, because
nixpart is still WIP and I haven't pushed the first version yet.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:51:15 +01:00
aszlig
ea8ae6a822 nixos/test/partition: Rewrite for nixpart 1.0
Currently, this is still WIP and subject to change, but it helps to see
whether our storage configuration options actually work out the way we
want.

Still needs a lot of cleanup, especially regarding the -m option, where
I'm not sure whether we should do it with nixpart or write our own
lightweight solution to be built into NixOS.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:51:05 +01:00
aszlig
9efd3013c2 nixos/storage: Fix missing volgroupType stub
All of the types are just stubs right now, but I actually forgot
volgroupType, as I haven't tested LVM in nixpart so far.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:50:52 +01:00
aszlig
f0f7869c74 nixos: Add storage module for nixpart
This is not the final version, because I'm not yet sure whether we want
BTRFS as a special option here. Also, we don't check types properly yet.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:50:41 +01:00
aszlig
0f6f3a961e nixos: Add storage opt to fileSystems/swapDevices
References a partition, disk, volume or whatever you like instead of
using a device path or label. Creating those storage devices is done by
nixpart and we can infer the right labels and/or poths from the device
tree.

I've added those hooks here because duplicating things such as fsType,
label, options or mountPoint in the storage configuration looks kinda
pointless to me.

Signed-off-by: aszlig <aszlig@redmoonstudios.org>
2016-12-24 10:50:23 +01:00
33 changed files with 4225 additions and 456 deletions


@@ -19,6 +19,7 @@ effect after you run <command>nixos-rebuild</command>.</para>
<xi:include href="config-syntax.xml" />
<xi:include href="package-mgmt.xml" />
<xi:include href="user-mgmt.xml" />
<xi:include href="storage.xml" />
<xi:include href="file-systems.xml" />
<xi:include href="x-windows.xml" />
<xi:include href="networking.xml" />


@@ -0,0 +1,12 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="ch-storage">
<title>Storage configuration</title>
<warning><para>The <option>storage.*</option> options are experimental
and are subject to change without a deprecation phase.</para></warning>
</chapter>


@@ -101,8 +101,9 @@ in
The swap devices and swap files. These must have been
initialised using <command>mkswap</command>. Each element
should be an attribute set specifying either the path of the
swap device or file (<literal>device</literal>) or the label
of the swap device (<literal>label</literal>, see
swap device or file (<literal>device</literal>), the device
from the storage configuration (<option>storage.*</option>) or
the label of the swap device (<literal>label</literal>, see
<command>mkswap -L</command>). Using a label is
recommended.
'';
@@ -164,7 +165,7 @@ in
restartIfChanged = false;
};
in listToAttrs (map createSwapDevice (filter (sw: sw.size != null || sw.randomEncryption) config.swapDevices));
in listToAttrs (map createSwapDevice (filter (sw: sw.device == null && (sw.size != null || sw.randomEncryption)) config.swapDevices));
};


@@ -612,6 +612,7 @@
./tasks/network-interfaces-systemd.nix
./tasks/network-interfaces-scripted.nix
./tasks/scsi-link-power-management.nix
./tasks/storage
./tasks/swraid.nix
./tasks/trackpoint.nix
./testing/service-runner.nix


@@ -159,10 +159,10 @@ in
(the mount options passed to <command>mount</command> using the
<option>-o</option> flag; defaults to <literal>[ "defaults" ]</literal>).
Instead of specifying <literal>device</literal>, you can also
Instead of specifying <literal>device</literal>, you can also either
specify a volume label (<literal>label</literal>) for file
systems that support it, such as ext2/ext3 (see <command>mke2fs
-L</command>).
-L</command>) or reference a device from <option>storage.*</option>.
'';
};


@@ -0,0 +1,322 @@
{ config, pkgs, lib, ... }:
let
inherit (lib) mkOption types;
storageLib = import ./lib.nix { inherit lib; cfg = config.storage; };
containerTypes = let
filterFun = lib.const (attrs: attrs.isContainer or false);
in lib.attrNames (lib.filterAttrs filterFun deviceTypes);
resizableOptions = deviceSpec: {
options.size = mkOption {
type = storageLib.types.size;
example = { gib = 1; mb = 234; };
apply = s: if lib.isInt s then { b = s; } else s;
description = ''
Size of the ${deviceSpec.description} either as an integer in bytes, an
attribute set of size units or the special string
<literal>fill</literal> which uses the remaining size of the target
device.
Allowed size units are:
<variablelist>
${lib.concatStrings (lib.mapAttrsToList (size: desc: ''
<varlistentry>
<term><option>${size}</option></term>
<listitem><para>${desc}</para></listitem>
</varlistentry>
'') storageLib.sizeUnits)}
</variablelist>
'';
};
};
genericOptions = deviceSpec: { name, ... }: {
options.uuid = mkOption {
internal = true;
description = ''
The UUID of this device specification used for device formats, such as
file systems and other containers.
This is a generated value and shouldn't be set outside of this module.
It identifies the same device from within nixpart and from within a
NixOS system for mounting.
By default, every file system gets a random UUID, but we need to have
this deterministic so that we always get the same UUID for the same
device specification. So we hash the full device specification (eg.
<literal>partition.foo</literal>) along with a
<literal>nixpart</literal> namespace with sha1 and truncate it to 128
bits, similar to version 5 of the UUID specification:
<link xlink:href="https://tools.ietf.org/html/rfc4122#section-4.1.3"/>
Note that instead of a binary namespace ID, we simply use string
concatenation in the form of
<literal>[namespace]:[spectype].[specname]</literal>, so for example the
device specification of <literal>partition.foo</literal> gets a hash
from <literal>nixpart:partition.foo</literal>.
'';
};
config.uuid = let
inherit (builtins) hashString substring;
baseHash = hashString "sha1" "nixpart:${deviceSpec.name}.${name}";
splitted = [
(substring 0 8 baseHash)
(substring 8 4 baseHash)
(substring 12 4 baseHash)
(substring 16 4 baseHash)
(substring 20 12 baseHash)
];
in lib.concatStringsSep "-" splitted;
};
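
The hashing scheme documented in the uuid option above can be sketched in a few lines of Python (illustrative only, not code from this module; nixpart_uuid is a hypothetical name):

```python
import hashlib

def nixpart_uuid(spec):
    # sha1 over "nixpart:<type>.<name>", truncated to 128 bits and
    # grouped as 8-4-4-4-12, mirroring the config.uuid logic above.
    h = hashlib.sha1(("nixpart:" + spec).encode()).hexdigest()
    return "-".join([h[0:8], h[8:12], h[12:16], h[16:20], h[20:32]])

print(nixpart_uuid("partition.foo"))
```

The same specification always yields the same UUID, which is the whole point: the device can be found again from within the installed NixOS system.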
orderableOptions = deviceSpec: {
options.before = storageLib.mkDeviceSpecOption {
typeContainer = types.listOf;
applyTypeContainer = map;
validDeviceTypes = [ deviceSpec.name ];
default = [];
description = ''
List of ${deviceSpec.description}s that will be created after this
${deviceSpec.description}.
'';
};
options.after = storageLib.mkDeviceSpecOption {
typeContainer = types.listOf;
applyTypeContainer = map;
validDeviceTypes = [ deviceSpec.name ];
default = [];
description = ''
List of ${deviceSpec.description}s that will be created prior to this
${deviceSpec.description}.
'';
};
};
partitionOptions.options = {
targetDevice = storageLib.mkDeviceSpecOption {
validDeviceTypes = containerTypes;
description = ''
The target device of this partition.
'';
};
};
mdraidOptions.options = {
level = mkOption {
type = types.int;
default = 1;
description = ''
RAID level, default is 1 for mirroring.
'';
};
devices = storageLib.mkDeviceSpecOption {
typeContainer = types.listOf;
applyTypeContainer = map;
validDeviceTypes = containerTypes;
description = ''
List of devices that will be part of this array.
'';
};
};
volgroupOptions.options = {
devices = storageLib.mkDeviceSpecOption {
typeContainer = types.listOf;
applyTypeContainer = map;
validDeviceTypes = containerTypes;
description = ''
List of devices that will be part of this volume group.
'';
};
};
logvolOptions.options = {
group = storageLib.mkDeviceSpecOption {
validDeviceTypes = [ "volgroup" ];
description = ''
The volume group this volume should be part of.
'';
};
};
btrfsOptions.options = {
devices = storageLib.mkDeviceSpecOption {
typeContainer = types.listOf;
applyTypeContainer = map;
validDeviceTypes = containerTypes;
description = ''
List of devices that will be part of this BTRFS volume.
'';
};
data = mkOption {
type = types.nullOr types.int;
default = null;
description = ''
RAID level to use for filesystem data.
'';
};
metadata = mkOption {
type = types.nullOr types.int;
default = null;
description = ''
RAID level to use for filesystem metadata.
'';
};
};
deviceTypes = {
disk = {
description = "disk";
isContainer = true;
options = import ./disk.nix { inherit pkgs; };
};
partition = {
description = "disk partition";
isContainer = true;
orderable = true;
resizable = true;
options = partitionOptions;
};
mdraid = {
description = "MD RAID device";
isContainer = true;
orderable = true;
options = mdraidOptions;
};
volgroup = {
description = "LVM volume group";
options = volgroupOptions;
};
logvol = {
description = "LVM logical volume";
isContainer = true;
orderable = true;
resizable = true;
options = logvolOptions;
};
btrfs = {
description = "BTRFS volume";
options = btrfsOptions;
};
};
# Return true if an option is referencing a btrfs storage specification.
isBtrfs = storage: lib.isString storage && lib.hasPrefix "btrfs." storage;
assertions = let
# Make sure that whenever a fsType is set to something different than
# "btrfs" while using a "btrfs" device spec type we throw an assertion
# error.
btrfsAssertions = lib.mapAttrsToList (fs: cfg: {
assertion = if isBtrfs cfg.storage then cfg.fsType == "btrfs" else true;
message = "The option `fileSystems.${fs}.fsType' is `${cfg.fsType}' but"
+ " \"btrfs\" is expected because `fileSystems.${fs}.storage'"
+ " is set to `${cfg.storage}'.";
}) config.fileSystems;
# Only allow one match method to be set for a disk and throw an assertion
# if either no match methods or too many (more than one) are defined.
matcherAssertions = lib.mapAttrsToList (disk: cfg: let
inherit (lib) attrNames filterAttrs;
isMatcher = name: name != "_module" && name != "allowIncomplete";
filterMatcher = name: val: isMatcher name && val != null;
defined = attrNames (filterAttrs filterMatcher cfg.match);
amount = lib.length defined;
optStr = "`storage.disk.${disk}'";
noneMsg = "No match methods have been defined for ${optStr}.";
manyMsg = "The disk ${optStr} has more than one match method"
+ " defined: ${lib.concatStringsSep ", " defined}";
in {
assertion = amount == 1;
message = if amount < 1 then noneMsg else manyMsg;
}) config.storage.disk;
in btrfsAssertions ++ matcherAssertions;
in
{
options.storage = lib.mapAttrs (devType: attrs: mkOption {
type = let
deviceSpec = attrs // { name = devType; };
orderable = attrs.orderable or false;
resizable = attrs.resizable or false;
in types.attrsOf (types.submodule {
imports = [ attrs.options (genericOptions deviceSpec) ]
++ lib.optional orderable (orderableOptions deviceSpec)
++ lib.optional resizable (resizableOptions deviceSpec);
});
default = {};
description = "Storage configuration for a ${attrs.description}."
+ lib.optionalString (attrs ? doc) "\n\n${attrs.doc}";
}) deviceTypes;
options.fileSystems = mkOption {
type = types.loaOf (types.submodule ({ config, ... }: {
options.storage = storageLib.mkDeviceSpecOption {
validDeviceTypes = containerTypes ++ [ "btrfs" ];
typeContainer = types.nullOr;
applyTypeContainer = fun: val: if val == null then null else fun val;
default = null;
example = "partition.root";
description = ''
Storage device from <option>storage.*</option> to use for
this file system.
'';
};
config = lib.mkMerge [
# If a fileSystems submodule references "btrfs." via the storage option,
# set the default value for fsType to "btrfs".
(lib.mkIf (isBtrfs config.storage) {
fsType = lib.mkDefault "btrfs";
})
# If no label is set, reference the device via the generated UUID in
# genericOptions.uuid.
(lib.mkIf (config.storage != null && config.label == null) {
device = "/dev/disk/by-uuid/${config.storage.uuid}";
})
];
}));
};
options.swapDevices = mkOption {
type = types.listOf (types.submodule ({ config, options, ... }: {
options.storage = storageLib.mkDeviceSpecOption {
validDeviceTypes = containerTypes;
typeContainer = types.nullOr;
applyTypeContainer = fun: val: if val == null then null else fun val;
default = null;
example = "partition.swap";
description = ''
Storage device from <option>storage.*</option> to use for
this swap device.
'';
};
# If no label is set, reference the device via the generated UUID in
# genericOptions.uuid.
config = lib.mkIf (config.storage != null && !options.label.isDefined) {
device = "/dev/disk/by-uuid/${config.storage.uuid}";
};
}));
};
config = {
inherit assertions;
system.build.nixpart-spec = let
# Only check assertions for this module.
failed = map (x: x.message) (lib.filter (x: !x.assertion) assertions);
failedStr = lib.concatMapStringsSep "\n" (x: "- ${x}") failed;
in pkgs.writeText "nixpart.json" (if failed == [] then builtins.toJSON {
inherit (config) fileSystems swapDevices storage;
} else throw "\nFailed assertions:\n${failedStr}");
};
}


@@ -0,0 +1,151 @@
{ pkgs }:
{ name, lib, config, ... }:
let
inherit (lib) types mkOption;
matchers = {
id.desc = "device ID";
id.devPath = "/dev/disk/by-id";
id.example = "ata-XY33445566AB_C123DE4F";
label.desc = "filesystem label";
label.devPath = "/dev/disk/by-label";
label.example = "nixos";
name.desc = "device name";
name.devPath = "/dev";
name.example = "sda";
path.desc = "path";
path.example = "/srv/disk.img";
sysfsPath.desc = "sysfs path";
sysfsPath.example = "/sys/devices/pci0000:00/0000:00:00.0/ata1/host0/"
+ "target0:0:0/0:0:0:0/block/vda";
uuid.desc = "device UUID";
uuid.devPath = "/dev/disk/by-uuid";
uuid.example = "12345678-90ab-cdef-1234-567890abcdef";
};
mkMatcherOption = { desc, devPath ? null, example }: let
devPathDesc = " (commonly found in <filename>${devPath}/*</filename>)";
maybeDevPath = lib.optionalString (devPath != null) devPathDesc;
in mkOption {
type = types.nullOr types.str;
default = null;
inherit example;
description = "Match based on a ${desc}${maybeDevPath}.";
};
matcherOptions.options = {
physicalPos = mkOption {
type = types.nullOr (types.addCheck types.int (p: p > 0));
default = null;
example = 1;
description = ''
Match physical devices based on their position in the kernel's device
enumeration. Virtual devices such as <literal>/dev/loop0</literal> are
excluded from this.
The position is 1-indexed, thus the first device found is position
<literal>1</literal>.
'';
};
script = mkOption {
type = types.nullOr types.lines;
default = null;
example = ''
# Match on the first path that includes the disk specification name.
for i in /dev/*''${disk}*; do echo "$i"; done
# Match on Nth device found in /dev/sd*, where N is the integer within
# the disk's specification name.
ls -1 /dev/sd* | tail -n+''${disk//[^0-9]}
'';
apply = script: let
scriptFile = pkgs.writeScript "match-${name}.sh" ''
#!${pkgs.bash}/bin/bash -e
set -o pipefail
shopt -s nullglob
export disk="$1" PATH=${lib.escapeShellArg (lib.makeBinPath [
pkgs.coreutils pkgs.gnused pkgs.utillinux pkgs.bash
])}
${script}
'';
in if script == null then null else scriptFile;
description = ''
Match based on the shell script lines set here.
The script is expected to echo the full path of the matching device to
stdout. Only the first line is accepted and consecutive lines are
ignored.
Within the script's scope there is a <varname>$disk</varname> variable
which is the name of the disk specification. For example if the disk to
be matched is defined as <option>storage.disk.foo.*</option> the
<varname>$disk</varname> variable would be set to
<literal>foo</literal>.
In addition the script is run within bash and has
<application>coreutils</application>, <application>GNU
sed</application> and <application>util-linux</application> in
<envar>PATH</envar>, everything else needs to be explicitly referenced
using absolute paths.
Within that shell the <literal>pipefail</literal> and
<literal>nullglob</literal> options are set in addition to
<literal>set -e</literal>, so any errors will cause the match to fail.
'';
};
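
The first-line-wins semantics described in the option can be sketched like this (a hypothetical harness for illustration, not the code nixpart actually uses):

```python
import subprocess

def run_matcher(script_path, disk_name):
    # Run the generated match script with the disk spec name as $1 and
    # keep only the first line of stdout, as described above; check=True
    # makes a non-zero exit (set -e / pipefail) fail the match.
    result = subprocess.run([script_path, disk_name],
                            capture_output=True, text=True, check=True)
    lines = result.stdout.splitlines()
    return lines[0] if lines else None
```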
allowIncomplete = mkOption {
type = types.bool;
default = false;
description = ''
Allow matching an incomplete device, for example a degraded RAID.
'';
};
} // lib.mapAttrs (lib.const mkMatcherOption) matchers;
in {
options = {
clear = mkOption {
type = types.bool;
default = false;
description = ''
Clear the partition table of this device.
'';
};
initlabel = mkOption {
type = types.bool;
default = false;
description = ''
Create a new disk label for this device (implies
<option>clear</option>).
'';
};
match = mkOption {
type = types.submodule matcherOptions;
default.name = name;
description = ''
Define how to match the given device.
If no <option>match.*</option> options are set,
<option>match.name</option> is used with the attribute name set as the
matching value in <option>storage.disk.$name</option>.
So a definition like <literal>storage.disk.sda = {}</literal> matches
<literal>/dev/sda</literal>.
'';
};
};
config = lib.mkIf config.initlabel {
clear = true;
};
}


@@ -0,0 +1,166 @@
{ lib ? import ../../../../lib
# This is the value of config.storage and it's needed to look up valid device
# specifications.
, cfg ? {}
}:
let
# A map of valid size units (attribute name) to their descriptions.
sizeUnits = {
b = "byte";
kib = "kibibyte (1024 bytes)";
mib = "mebibyte (1024 kibibytes)";
gib = "gibibyte (1024 mebibytes)";
tib = "tebibyte (1024 gibibytes)";
pib = "pebibyte (1024 tebibytes)";
eib = "exbibyte (1024 pebibytes)";
zib = "zebibyte (1024 exbibytes)";
yib = "yobibyte (1024 zebibytes)";
kb = "kilobyte (1000 bytes)";
mb = "megabyte (1000 kilobytes)";
gb = "gigabyte (1000 megabytes)";
tb = "terabyte (1000 gigabytes)";
pb = "petabyte (1000 terabytes)";
eb = "exabyte (1000 petabytes)";
zb = "zettabyte (1000 exabytes)";
yb = "yottabyte (1000 zettabytes)";
};
/* Return a string enumerating the list of `valids' in a way to be more
* friendly to human readers.
*
* For example if the list is [ "a" "b" "c" ] the result is:
*
* "one of `a', `b' or `c'"
*
* If `valids' contains only two elements, like [ "a" "b" ] the result is:
*
* "either `a' or `b'"
*
* If `valids' is a singleton list, like [ "lonely" ] the result is:
*
* "`lonely'"
*
* Note that `valids' is expected to be non-empty; no extra checking is
* done to show a reasonable error message if it is empty.
*/
oneOf = valids: let
inherit (lib) head init last;
quote = name: "`${name}'";
len = builtins.length valids;
two = "either ${quote (head valids)} or ${quote (last valids)}";
multi = "one of " + lib.concatMapStringsSep ", " quote (init valids)
+ " or ${quote (last valids)}";
in if len > 2 then multi else if len == 2 then two else quote (head valids);
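
The behaviour documented in the comment above translates directly; a quick Python rendition for illustration:

```python
def one_of(valids):
    # Humane enumeration of valid values, mirroring the Nix helper above:
    # one element -> quoted as-is, two -> "either x or y", more -> "one of ...".
    quoted = ["`%s'" % v for v in valids]
    if len(quoted) == 1:
        return quoted[0]
    if len(quoted) == 2:
        return "either %s or %s" % tuple(quoted)
    return "one of " + ", ".join(quoted[:-1]) + " or " + quoted[-1]

print(one_of(["a", "b", "c"]))  # one of `a', `b' or `c'
```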
/* Make sure that the size units defined in a size type are correct.
*
* We can have simple `{ kb = 123; }' size units but also multiple size units,
* like this:
*
* { b = 100; kb = 200; mib = 300; yib = 400; }
*
* This function returns true or false depending on whether the unit size type
* is correct or not. If it's incorrect, builtins.trace is used to print a
* more helpful error message than the generic one we get from the NixOS
* module system.
*/
assertUnits = attrs: let
quoteUnit = unit: "`${unit}'";
unitList = lib.attrNames sizeUnits;
validStr = lib.concatMapStringsSep ", " quoteUnit (lib.init unitList)
+ " or ${quoteUnit (lib.last unitList)}";
errSize = unit: "Size for ${quoteUnit unit} has to be an integer.";
errUnit = unit: "Unit ${quoteUnit unit} is not valid, "
+ "it has to be one of ${validStr}.";
errEmpty = "Size units attribute set cannot be empty.";
assertSize = unit: size: lib.optional (!lib.isInt size) (errSize unit);
assertUnit = unit: size: if sizeUnits ? ${unit} then assertSize unit size
else lib.singleton (errUnit unit);
assertions = if attrs == {} then lib.singleton errEmpty
else lib.flatten (lib.mapAttrsToList assertUnit attrs);
strAssertions = lib.concatStringsSep "\n" (assertions);
in if assertions == [] then true else builtins.trace strAssertions false;
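
A resolver for such size attrsets might look like the sketch below. Note that assertUnits above only validates; to_bytes is a hypothetical helper that additionally resolves the units to a byte count (unit table abridged):

```python
# A few binary and decimal units from sizeUnits above (abridged).
SIZE_UNITS = {
    "b": 1,
    "kib": 1024, "mib": 1024 ** 2, "gib": 1024 ** 3,
    "kb": 1000, "mb": 1000 ** 2, "gb": 1000 ** 3,
}

def to_bytes(size):
    # Validate like assertUnits, then resolve e.g. {"gib": 1, "mb": 234}
    # into a single byte count.
    if not size:
        raise ValueError("Size units attribute set cannot be empty.")
    total = 0
    for unit, value in size.items():
        if unit not in SIZE_UNITS:
            raise ValueError("Unit `%s' is not valid." % unit)
        if not isinstance(value, int):
            raise ValueError("Size for `%s' has to be an integer." % unit)
        total += value * SIZE_UNITS[unit]
    return total
```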
/* Decode a device specification string like "partition.foo" into an attribute
* set consisting of the attributes `type' ("partition" here) and `name'
* ("foo" here).
*/
decodeSpec = spec: let
typeAndName = builtins.match "([a-z]+)\\.([a-zA-Z0-9_-]+)" spec;
in if typeAndName == null then null else rec {
type = lib.head typeAndName;
name = lib.last typeAndName;
# The generated UUID for the storage spec. Note that we don't need to check
# whether ${type} and ${name} exist, because they're already checked in
# assertSpec.
inherit (cfg.${type}.${name}) uuid;
};
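
The same decoding in Python, using the regex from decodeSpec (sketch only; the uuid lookup against cfg is omitted):

```python
import re

# Same pattern as the builtins.match call above; builtins.match is
# anchored, so fullmatch is the Python equivalent.
SPEC_RE = re.compile(r"([a-z]+)\.([a-zA-Z0-9_-]+)")

def decode_spec(spec):
    # "partition.foo" -> {"type": "partition", "name": "foo"},
    # None when the spec doesn't match.
    m = SPEC_RE.fullmatch(spec)
    return None if m is None else {"type": m.group(1), "name": m.group(2)}
```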
/* Validate the device specification and return true if it's valid or false if
* it's not.
*
* As with assertUnits, builtins.trace is used to print an additional error
* message.
*/
assertSpec = validTypes: spec: let
syntaxErrorMsg =
"Device specification \"${spec}\" needs to be in the form " +
"`<type>.<name>', where `name' may only contain letters (lower and " +
"upper case), numbers, underscores (_) and dashes (-)";
invalidTypeMsg =
"Device type `${type}' is invalid and needs to be ${oneOf validTypes}.";
invalidNameMsg =
"Device `${type}.${name}' does not exist in `config.storage.*'.";
syntaxError = builtins.trace syntaxErrorMsg false;
decoded = decodeSpec spec;
inherit (decoded) type name;
assertName = if (cfg.${type} or {}) ? ${name} then true
else builtins.trace invalidNameMsg false;
assertType = if lib.elem type validTypes then assertName
else builtins.trace invalidTypeMsg false;
in if decoded == null then syntaxError else assertType;
deviceSpecType = validTypes: lib.mkOptionType {
name = "deviceSpec";
description = "device specification";
check = spec: lib.isString spec && assertSpec validTypes spec;
merge = lib.mergeEqualOption;
};
in {
inherit sizeUnits;
mkDeviceSpecOption = attrs: lib.mkOption ({
type = let
# This is a list of valid device types in a device specification, such as
# "partition", "disk" and so on.
validDeviceTypes = attrs.validDeviceTypes or [];
# The outer type wrapping the internal deviceSpecType, so it's possible to
# wrap the deviceSpecType in any other container type in lib.types.
typeContainer = attrs.typeContainer or lib.id;
in typeContainer (deviceSpecType validDeviceTypes);
# `applyTypeContainer' is a function that's used to unpack the individual
# deviceSpecType from the typeContainer. So for example if typeContainer is
# `listOf', the applyTypeContainer function is "map".
apply = (attrs.applyTypeContainer or lib.id) decodeSpec;
description = attrs.description + ''
The device specification has to be in the form
<literal>&lt;type&gt;.&lt;name&gt;</literal> where <literal>type</literal>
is ${oneOf (attrs.validDeviceTypes or [])} and <literal>name</literal> is
the name in <option>storage.sometype.name</option>.
'';
} // removeAttrs attrs [
"validDeviceTypes" "typeContainer" "applyTypeContainer" "description"
]);
types = {
size = lib.mkOptionType {
name = "size";
description = "\"fill\", integer in bytes or attrset of unit -> size";
check = s: s == "fill" || lib.isInt s || (lib.isAttrs s && assertUnits s);
merge = lib.mergeEqualOption;
};
};
}


@@ -216,7 +216,7 @@ in rec {
# nix-build tests/login.nix -A result.
tests.avahi = callTest tests/avahi.nix {};
tests.bittorrent = callTest tests/bittorrent.nix {};
- tests.blivet = callTest tests/blivet.nix {};
+ tests.blivet = callSubTests tests/blivet.nix {};
tests.boot = callSubTests tests/boot.nix {};
tests.boot-stage1 = callTest tests/boot-stage1.nix {};
tests.cadvisor = hydraJob (import tests/cadvisor.nix { system = "x86_64-linux"; });
@@ -295,6 +295,7 @@ in rec {
tests.sddm = callTest tests/sddm.nix {};
tests.simple = callTest tests/simple.nix {};
tests.smokeping = callTest tests/smokeping.nix {};
+ tests.storage = callSubTests tests/storage.nix {};
tests.taskserver = callTest tests/taskserver.nix {};
tests.tomcat = callTest tests/tomcat.nix {};
tests.udisks2 = callTest tests/udisks2.nix {};


@@ -1,87 +1,113 @@
import ./make-test.nix ({ pkgs, ... }: with pkgs.python2Packages; rec {
name = "blivet";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ aszlig ];
{ system ? builtins.currentSystem, debug ? false }:
let
inherit (import ../lib/testing.nix { inherit system; }) pkgs makeTest;
inherit (pkgs) lib;
mkCommonTest = name: attrs@{ package, ... }: let
pythonTestRunner = pkgs.writeText "run-python-tests.py" ''
import sys
import logging
from unittest import TestLoader
from unittest.runner import TextTestRunner
${attrs.extraCode or ""}
testdir = '${package.tests}/tests/'
sys.path.insert(0, '${package.tests}')
runner = TextTestRunner(verbosity=2, failfast=False, buffer=False)
result = runner.run(TestLoader().discover(testdir, pattern='*_test.py'))
sys.exit(not result.wasSuccessful())
'';
testRunner = let
testVars = [ "PYTHONPATH" "GI_TYPELIB_PATH" ];
varsRe = lib.concatStringsSep "|" testVars;
testEnv = pkgs.runCommand "test-env" {
buildInputs = lib.singleton package ++ (attrs.extraDeps or []);
} "set | sed -nr -e '/^(${varsRe})/s/^/export /p' > \"$out\"";
interpreter = pkgs.python3Packages.python.interpreter;
in pkgs.writeScript "run-tests.sh" ''
#!${pkgs.stdenv.shell} -e
# Use the host's temporary directory, because we have a tmpfs within
# the VM and we don't want to increase the memory size of the VM for
# no reason.
mkdir -p /scratch/tmp
TMPDIR=/scratch/tmp
export TMPDIR
# Tests are put into the current working directory, so let's make sure
# we are actually using the additional disk instead of the tmpfs.
cd /scratch
source ${lib.escapeShellArg testEnv}
exec ${lib.escapeShellArg pkgs.python3Packages.python.interpreter} \
${lib.escapeShellArg pythonTestRunner}
'';
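The testEnv derivation captures PYTHONPATH and GI_TYPELIB_PATH by filtering the output of `set` through sed and prefixing the surviving lines with `export`. The same filter, as a Python sketch:

```python
import re

def export_lines(shell_set_output, var_names):
    """Keep lines starting with one of var_names and prefix them with
    'export ', mirroring: sed -nr -e '/^(VAR1|VAR2)/s/^/export /p'."""
    pattern = re.compile("^(%s)" % "|".join(map(re.escape, var_names)))
    return [
        "export " + line
        for line in shell_set_output.splitlines()
        if pattern.match(line)  # re.match, like sed's /^.../, anchors at start
    ]
```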
in makeTest {
inherit name;
machine = {
boot.kernelModules = [
"dm-bufio"
"dm-cache"
"dm-cache-cleaner"
"dm-cache-mq"
"dm-mirror"
"dm-multipath"
"dm-persistent-data"
"dm-raid"
"dm-snapshot"
"dm-thin-pool"
"hfsplus"
"zram"
];
boot.supportedFilesystems = [ "btrfs" "jfs" "reiserfs" "xfs" ];
virtualisation.memorySize = 768;
virtualisation.emptyDiskImages = [ 10240 ];
environment.systemPackages = [ pkgs.hfsprogs pkgs.mtools pkgs.ntfs3g ]
++ (attrs.extraSystemDeps or []);
fileSystems."/scratch" = {
device = "/dev/vdb";
fsType = "ext4";
autoFormat = true;
};
};
testScript = ''
$machine->waitForUnit("multi-user.target");
$machine->succeed("${testRunner}");
'';
meta.maintainers = [ pkgs.stdenv.lib.maintainers.aszlig ];
};
machine = {
environment.systemPackages = [ pkgs.python blivet mock ];
boot.supportedFilesystems = [ "btrfs" "jfs" "reiserfs" "xfs" ];
virtualisation.memorySize = 768;
};
debugBlivet = false;
debugProgramCalls = false;
pythonTestRunner = pkgs.writeText "run-blivet-tests.py" ''
import sys
import logging
from unittest import TestLoader
from unittest.runner import TextTestRunner
${pkgs.lib.optionalString debugProgramCalls ''
in {
blivet = mkCommonTest "blivet" rec {
package = pkgs.python3Packages.blivet;
extraDeps = [ pkgs.python3Packages.mock ];
extraCode = lib.optionalString debug ''
blivet_program_log = logging.getLogger("program")
blivet_program_log.setLevel(logging.DEBUG)
blivet_program_log.addHandler(logging.StreamHandler(sys.stderr))
''}
${pkgs.lib.optionalString debugBlivet ''
blivet_log = logging.getLogger("blivet")
blivet_log.setLevel(logging.DEBUG)
blivet_log.addHandler(logging.StreamHandler(sys.stderr))
''}
'';
};
runner = TextTestRunner(verbosity=2, failfast=False, buffer=False)
result = runner.run(TestLoader().discover('tests/', pattern='*_test.py'))
sys.exit(not result.wasSuccessful())
'';
blivetTest = pkgs.writeScript "blivet-test.sh" ''
#!${pkgs.stdenv.shell} -e
# Use the host's temporary directory, because we have a tmpfs within the VM
# and we don't want to increase the memory size of the VM for no reason.
mkdir -p /tmp/xchg/bigtmp
TMPDIR=/tmp/xchg/bigtmp
export TMPDIR
cp -Rd "${blivet.src}/tests" .
# Skip SELinux tests
rm -f tests/formats_test/selinux_test.py
# Race conditions in growing/shrinking during resync
rm -f tests/devicelibs_test/mdraid_*
# Deactivate small BTRFS device test, because it fails with newer btrfsprogs
sed -i -e '/^class *BTRFSAsRootTestCase3(/,/^[^ ]/ {
/^class *BTRFSAsRootTestCase3(/d
/^$/d
/^ /d
}' tests/devicelibs_test/btrfs_test.py
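The sed range expression above deletes the class header plus every following blank or indented line — i.e. the entire class body — stopping at the next top-level line. A Python sketch of that deletion, simplified to assume a single space after the `class` keyword:

```python
def drop_class(source, class_name):
    """Drop a top-level class plus its blank/indented body lines, mirroring
    the sed range /^class NAME(/,/^[^ ]/ used above."""
    out, skipping = [], False
    for line in source.splitlines():
        if line.startswith("class " + class_name + "("):
            skipping = True      # header of the doomed class
            continue
        if skipping and (line == "" or line.startswith(" ")):
            continue             # blank or indented line inside the class
        skipping = False         # first top-level line after the class
        out.append(line)
    return "\n".join(out)
```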
# How on earth can these tests ever work even upstream? O_o
sed -i -e '/def testDiskChunk[12]/,/^ *[^ ]/{n; s/^ */&return # /}' \
tests/partitioning_test.py
# fix hardcoded temporary directory
sed -i \
-e '1i import tempfile' \
-e 's|_STORE_FILE_PATH = .*|_STORE_FILE_PATH = tempfile.gettempdir()|' \
-e 's|DEFAULT_STORE_SIZE = .*|DEFAULT_STORE_SIZE = 409600|' \
tests/loopbackedtestcase.py
PYTHONPATH=".:$(< "${pkgs.stdenv.mkDerivation {
name = "blivet-pythonpath";
buildInputs = [ blivet mock ];
buildCommand = "echo \"$PYTHONPATH\" > \"$out\"";
}}")" python "${pythonTestRunner}"
'';
testScript = ''
$machine->waitForUnit("multi-user.target");
$machine->succeed("${blivetTest}");
$machine->execute("rm -rf /tmp/xchg/bigtmp");
'';
})
libblockdev = mkCommonTest "libblockdev" rec {
package = pkgs.libblockdev;
extraSystemDeps = [
# For generating test escrow certificates
pkgs.nssTools
# While libblockdev is linked against volume_key, the
# tests require the 'volume_key' binary to be in PATH.
pkgs.volume_key
];
};
}


@@ -1,247 +0,0 @@
import ./make-test.nix ({ pkgs, ... }:
with pkgs.lib;
let
ksExt = pkgs.writeText "ks-ext4" ''
clearpart --all --initlabel --drives=vdb
part /boot --recommended --label=boot --fstype=ext2 --ondisk=vdb
part swap --recommended --label=swap --fstype=swap --ondisk=vdb
part /nix --size=500 --label=nix --fstype=ext3 --ondisk=vdb
part / --recommended --label=root --fstype=ext4 --ondisk=vdb
'';
ksBtrfs = pkgs.writeText "ks-btrfs" ''
clearpart --all --initlabel --drives=vdb,vdc
part swap1 --recommended --label=swap1 --fstype=swap --ondisk=vdb
part swap2 --recommended --label=swap2 --fstype=swap --ondisk=vdc
part btrfs.1 --grow --ondisk=vdb
part btrfs.2 --grow --ondisk=vdc
btrfs / --data=0 --metadata=1 --label=root btrfs.1 btrfs.2
'';
ksF2fs = pkgs.writeText "ks-f2fs" ''
clearpart --all --initlabel --drives=vdb
part swap --recommended --label=swap --fstype=swap --ondisk=vdb
part /boot --recommended --label=boot --fstype=f2fs --ondisk=vdb
part / --recommended --label=root --fstype=f2fs --ondisk=vdb
'';
ksRaid = pkgs.writeText "ks-raid" ''
clearpart --all --initlabel --drives=vdb,vdc
part raid.01 --size=200 --ondisk=vdb
part raid.02 --size=200 --ondisk=vdc
part swap1 --size=500 --label=swap1 --fstype=swap --ondisk=vdb
part swap2 --size=500 --label=swap2 --fstype=swap --ondisk=vdc
part raid.11 --grow --ondisk=vdb
part raid.12 --grow --ondisk=vdc
raid /boot --level=1 --fstype=ext3 --device=md0 raid.01 raid.02
raid / --level=1 --fstype=xfs --device=md1 raid.11 raid.12
'';
ksRaidLvmCrypt = pkgs.writeText "ks-lvm-crypt" ''
clearpart --all --initlabel --drives=vdb,vdc
part raid.1 --grow --ondisk=vdb
part raid.2 --grow --ondisk=vdc
raid pv.0 --level=1 --encrypted --passphrase=x --device=md0 raid.1 raid.2
volgroup nixos pv.0
logvol /boot --size=200 --fstype=ext3 --name=boot --vgname=nixos
logvol swap --size=500 --fstype=swap --name=swap --vgname=nixos
logvol / --size=1000 --grow --fstype=ext4 --name=root --vgname=nixos
'';
in {
name = "partition";
machine = { config, pkgs, ... }: {
environment.systemPackages = [
pkgs.pythonPackages.nixpart0
pkgs.file pkgs.btrfs-progs pkgs.xfsprogs pkgs.lvm2
];
virtualisation.emptyDiskImages = [ 4096 4096 ];
};
testScript = ''
my $diskStart;
my @mtab;
sub getMtab {
my $mounts = $machine->succeed("cat /proc/mounts");
chomp $mounts;
return map [split], split /\n/, $mounts;
}
sub parttest {
my ($desc, $code) = @_;
$machine->start;
$machine->waitForUnit("default.target");
# Gather mounts and superblock
@mtab = getMtab;
$diskStart = $machine->succeed("dd if=/dev/vda bs=512 count=1");
subtest($desc, $code);
$machine->shutdown;
}
sub ensureSanity {
# Check whether the filesystem in /dev/vda is still intact
my $newDiskStart = $machine->succeed("dd if=/dev/vda bs=512 count=1");
if ($diskStart ne $newDiskStart) {
$machine->log("Something went wrong, the partitioner wrote " .
"something into the first 512 bytes of /dev/vda!");
die;
}
# Check whether nixpart has unmounted anything
my @currentMtab = getMtab;
for my $mount (@mtab) {
my $path = $mount->[1];
unless (grep { $_->[1] eq $path } @currentMtab) {
$machine->log("The partitioner seems to have unmounted $path.");
die;
}
}
}
sub checkMount {
my $mounts = $machine->succeed("cat /proc/mounts");
}
sub kickstart {
$machine->copyFileFromHost($_[0], "/kickstart");
$machine->succeed("nixpart -v /kickstart");
ensureSanity;
}
sub ensurePartition {
my ($name, $match) = @_;
my $path = $name =~ /^\// ? $name : "/dev/disk/by-label/$name";
my $out = $machine->succeed("file -Ls $path");
my @matches = grep(/^$path: .*$match/i, $out);
if (!@matches) {
$machine->log("Partition on $path was expected to have a " .
"file system that matches $match, but instead has: $out");
die;
}
}
sub ensureNoPartition {
$machine->succeed("test ! -e /dev/$_[0]");
}
sub ensureMountPoint {
$machine->succeed("mountpoint $_[0]");
}
sub remountAndCheck {
$machine->nest("Remounting partitions:", sub {
# XXX: "findmnt -ARunl -oTARGET /mnt" seems to NOT print all mounts!
my $getmounts_cmd = "cat /proc/mounts | cut -d' ' -f2 | grep '^/mnt'";
# Insert canaries first
my $canaries = $machine->succeed($getmounts_cmd . " | while read p;" .
" do touch \"\$p/canary\";" .
" echo \"\$p/canary\"; done");
# Now unmount manually
$machine->succeed($getmounts_cmd . " | tac | xargs -r umount");
# /mnt should be empty or non-existing
my $found = $machine->succeed("find /mnt -mindepth 1");
chomp $found;
if ($found) {
$machine->log("Cruft found in /mnt:\n$found");
die;
}
# Try to remount with nixpart
$machine->succeed("nixpart -vm /kickstart");
ensureMountPoint("/mnt");
# Check if our beloved canaries are dead
chomp $canaries;
$machine->nest("Checking canaries:", sub {
for my $canary (split /\n/, $canaries) {
$machine->succeed("test -e '$canary'");
}
});
});
}
parttest "ext2, ext3 and ext4 filesystems", sub {
kickstart("${ksExt}");
ensurePartition("boot", "ext2");
ensurePartition("swap", "swap");
ensurePartition("nix", "ext3");
ensurePartition("root", "ext4");
ensurePartition("/dev/vdb4", "boot sector");
ensureNoPartition("vdb6");
ensureNoPartition("vdc1");
remountAndCheck;
ensureMountPoint("/mnt/boot");
ensureMountPoint("/mnt/nix");
};
parttest "btrfs filesystem", sub {
$machine->succeed("modprobe btrfs");
kickstart("${ksBtrfs}");
ensurePartition("swap1", "swap");
ensurePartition("swap2", "swap");
ensurePartition("/dev/vdb2", "btrfs");
ensurePartition("/dev/vdc2", "btrfs");
ensureNoPartition("vdb3");
ensureNoPartition("vdc3");
remountAndCheck;
};
parttest "f2fs filesystem", sub {
$machine->succeed("modprobe f2fs");
kickstart("${ksF2fs}");
ensurePartition("swap", "swap");
ensurePartition("boot", "f2fs");
ensurePartition("root", "f2fs");
remountAndCheck;
ensureMountPoint("/mnt/boot", "f2fs");
};
parttest "RAID1 with XFS", sub {
kickstart("${ksRaid}");
ensurePartition("swap1", "swap");
ensurePartition("swap2", "swap");
ensurePartition("/dev/md0", "ext3");
ensurePartition("/dev/md1", "xfs");
ensureNoPartition("vdb4");
ensureNoPartition("vdc4");
ensureNoPartition("md2");
remountAndCheck;
ensureMountPoint("/mnt/boot");
};
parttest "RAID1 with LUKS and LVM", sub {
kickstart("${ksRaidLvmCrypt}");
ensurePartition("/dev/vdb1", "data");
ensureNoPartition("vdb2");
ensurePartition("/dev/vdc1", "data");
ensureNoPartition("vdc2");
ensurePartition("/dev/md0", "luks");
ensureNoPartition("md1");
ensurePartition("/dev/nixos/boot", "ext3");
ensurePartition("/dev/nixos/swap", "swap");
ensurePartition("/dev/nixos/root", "ext4");
remountAndCheck;
ensureMountPoint("/mnt/boot");
};
'';
})
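The getMtab/ensureSanity pair in the Perl test script amounts to snapshotting /proc/mounts before partitioning and diffing the mount targets afterwards; sketched in Python:

```python
def get_mtab(mounts_text):
    """Split /proc/mounts into per-mount field lists,
    like the Perl: map [split], split /\n/, $mounts."""
    return [line.split() for line in mounts_text.strip().splitlines()]

def unmounted_paths(before, after):
    """Mount points present before but missing after (field 1 is the
    mount target), i.e. what ensureSanity treats as an error."""
    current = {mount[1] for mount in after}
    return [mount[1] for mount in before if mount[1] not in current]
```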

nixos/tests/storage.nix Normal file

@@ -0,0 +1,488 @@
{ system ? builtins.currentSystem }:
let
inherit (import ../lib/testing.nix { inherit system; }) pkgs makeTest;
mkStorageTest = name: attrs: makeTest {
name = "storage-${name}";
machine = { lib, config, pkgs, ... }: {
imports = lib.singleton (attrs.extraMachineConfig or {});
environment.systemPackages = [
pkgs.nixpart pkgs.file pkgs.btrfs-progs pkgs.xfsprogs pkgs.lvm2
];
virtualisation.emptyDiskImages =
lib.genList (lib.const 4096) (attrs.diskImages or 2);
environment.etc."nixpart.json".source = (import ../lib/eval-config.nix {
modules = pkgs.lib.singleton attrs.config;
inherit system;
}).config.system.build.nixpart-spec;
};
testScript = ''
my $diskStart;
my @mtab;
sub getMtab {
my $mounts = $machine->succeed("cat /proc/mounts");
chomp $mounts;
return map [split], split /\n/, $mounts;
}
sub ensureSanity {
# Check whether the filesystem in /dev/vda is still intact
my $newDiskStart = $machine->succeed("dd if=/dev/vda bs=512 count=1");
if ($diskStart ne $newDiskStart) {
$machine->log("Something went wrong, the partitioner wrote " .
"something into the first 512 bytes of /dev/vda!");
die;
}
# Check whether nixpart has unmounted anything
my @currentMtab = getMtab;
for my $mount (@mtab) {
my $path = $mount->[1];
unless (grep { $_->[1] eq $path } @currentMtab) {
$machine->log("The partitioner seems to have unmounted $path.");
die;
}
}
}
sub nixpart {
$machine->succeed('nixpart -v --json /etc/nixpart.json >&2');
ensureSanity;
}
sub ensurePartition {
my ($name, $match) = @_;
my $path = $name =~ /^\// ? $name : "/dev/disk/by-label/$name";
my $out = $machine->succeed("file -Ls $path");
my @matches = grep(/^$path: .*$match/i, $out);
if (!@matches) {
$machine->log("Partition on $path was expected to have a " .
"file system that matches $match, but instead " .
"has: $out");
die;
}
}
sub ensureNoPartition {
$machine->succeed("test ! -e /dev/$_[0]");
}
sub ensureMountPoint {
$machine->succeed("mountpoint $_[0]");
}
sub remountAndCheck {
$machine->nest("Remounting partitions:", sub {
# XXX: "findmnt -ARunl -oTARGET /mnt" seems to NOT print all mounts!
my $getmountsCmd = "cat /proc/mounts | cut -d' ' -f2 | grep '^/mnt'";
# Insert canaries first
my $canaries = $machine->succeed($getmountsCmd . " | while read p;" .
" do touch \"\$p/canary\";" .
" echo \"\$p/canary\"; done");
# Now unmount manually
$machine->succeed($getmountsCmd . " | tac | xargs -r umount");
# /mnt should be empty or non-existing
my $found = $machine->succeed("find /mnt -mindepth 1");
chomp $found;
if ($found) {
$machine->log("Cruft found in /mnt:\n$found");
die;
}
# Try to remount with nixpart
$machine->succeed('nixpart -vm --json /etc/nixpart.json >&2');
# Check if our beloved canaries are dead
chomp $canaries;
$machine->nest("Checking canaries:", sub {
for my $canary (split /\n/, $canaries) {
$machine->succeed("test -e '$canary'");
}
});
});
}
$machine->waitForUnit("multi-user.target");
# Gather mounts and superblock
@mtab = getMtab;
$diskStart = $machine->succeed("dd if=/dev/vda bs=512 count=1");
$machine->execute("mkdir /mnt");
${pkgs.lib.optionalString (attrs ? prepare) ''
$machine->nest("Preparing disks:", sub {
$machine->succeed("${pkgs.writeScript "prepare.sh" attrs.prepare}");
});
''}
${attrs.testScript}
'';
};
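ensureSanity's superblock check reads the first 512 bytes of /dev/vda before and after partitioning and insists they are identical. The same check sketched in Python (it works on any file, not just block devices):

```python
def first_sector(path, size=512):
    """Read the first `size` bytes, like `dd if=... bs=512 count=1`."""
    with open(path, "rb") as dev:
        return dev.read(size)

def sector_unchanged(path, snapshot):
    """True if the first sector still matches an earlier snapshot."""
    return first_sector(path, len(snapshot)) == snapshot
```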
in pkgs.lib.mapAttrs mkStorageTest {
ext = {
config = {
storage = {
disk.vdb.clear = true;
disk.vdb.initlabel = true;
partition.boot.size.mib = 100;
partition.boot.targetDevice = "disk.vdb";
partition.swap.size.mib = 500;
partition.swap.targetDevice = "disk.vdb";
partition.nix.size.mib = 500;
partition.nix.targetDevice = "disk.vdb";
partition.root.size = "fill";
partition.root.targetDevice = "disk.vdb";
};
fileSystems."/boot" = {
label = "boot";
fsType = "ext2";
storage = "partition.boot";
};
fileSystems."/nix" = {
label = "nix";
fsType = "ext3";
storage = "partition.nix";
};
fileSystems."/" = {
label = "root";
fsType = "ext4";
storage = "partition.root";
};
swapDevices = [
{ label = "swap"; storage = "partition.swap"; }
];
};
testScript = ''
nixpart;
ensurePartition("boot", "ext2");
ensurePartition("swap", "swap");
ensurePartition("nix", "ext3");
ensurePartition("root", "ext4");
ensurePartition("/dev/vdb", "boot sector");
ensureNoPartition("vdb6");
ensureNoPartition("vdc1");
remountAndCheck;
ensureMountPoint("/mnt");
ensureMountPoint("/mnt/boot");
ensureMountPoint("/mnt/nix");
'';
};
btrfs = {
config = {
storage = {
disk.vdb.clear = true;
disk.vdb.initlabel = true;
partition.swap1.size.mib = 500;
partition.swap1.targetDevice = "disk.vdb";
partition.btrfs1.size = "fill";
partition.btrfs1.targetDevice = "disk.vdb";
disk.vdc.clear = true;
disk.vdc.initlabel = true;
partition.swap2.size.mib = 500;
partition.swap2.targetDevice = "disk.vdc";
partition.btrfs2.size = "fill";
partition.btrfs2.targetDevice = "disk.vdc";
btrfs.root.data = 0;
btrfs.root.metadata = 1;
btrfs.root.devices = [ "partition.btrfs1" "partition.btrfs2" ];
};
fileSystems."/" = {
label = "root";
storage = "btrfs.root";
};
swapDevices = [
{ label = "swap1"; storage = "partition.swap1"; }
{ label = "swap2"; storage = "partition.swap2"; }
];
};
testScript = ''
$machine->succeed("modprobe btrfs");
nixpart;
ensurePartition("swap1", "swap");
ensurePartition("swap2", "swap");
ensurePartition("/dev/vdb2", "btrfs");
ensurePartition("/dev/vdc2", "btrfs");
ensureNoPartition("vdb3");
ensureNoPartition("vdc3");
remountAndCheck;
ensureMountPoint("/mnt");
'';
};
f2fs = {
config = {
storage = {
disk.vdb.clear = true;
disk.vdb.initlabel = true;
partition.swap.size.mib = 500;
partition.swap.targetDevice = "disk.vdb";
partition.boot.size.mib = 100;
partition.boot.targetDevice = "disk.vdb";
partition.root.size = "fill";
partition.root.targetDevice = "disk.vdb";
};
fileSystems."/boot" = {
label = "boot";
fsType = "f2fs";
storage = "partition.boot";
};
fileSystems."/" = {
label = "root";
fsType = "f2fs";
storage = "partition.root";
};
swapDevices = [
{ label = "swap"; storage = "partition.swap"; }
];
};
testScript = ''
$machine->succeed("modprobe f2fs");
nixpart;
ensurePartition("swap", "swap");
ensurePartition("boot", "f2fs");
ensurePartition("root", "f2fs");
remountAndCheck;
ensureMountPoint("/mnt");
ensureMountPoint("/mnt/boot", "f2fs");
'';
};
mdraid = {
config = {
storage = {
disk.vdb.clear = true;
disk.vdb.initlabel = true;
partition.raid01.size.mib = 200;
partition.raid01.targetDevice = "disk.vdb";
partition.swap1.size.mib = 500;
partition.swap1.targetDevice = "disk.vdb";
partition.raid11.size = "fill";
partition.raid11.targetDevice = "disk.vdb";
disk.vdc.clear = true;
disk.vdc.initlabel = true;
partition.raid02.size.mib = 200;
partition.raid02.targetDevice = "disk.vdc";
partition.swap2.size.mib = 500;
partition.swap2.targetDevice = "disk.vdc";
partition.raid12.size = "fill";
partition.raid12.targetDevice = "disk.vdc";
mdraid.boot.level = 1;
mdraid.boot.devices = [ "partition.raid01" "partition.raid02" ];
mdraid.root.level = 1;
mdraid.root.devices = [ "partition.raid11" "partition.raid12" ];
};
fileSystems."/boot" = {
label = "boot";
fsType = "ext3";
storage = "mdraid.boot";
};
fileSystems."/" = {
label = "root";
fsType = "xfs";
storage = "mdraid.root";
};
swapDevices = [
{ label = "swap1"; storage = "partition.swap1"; }
{ label = "swap2"; storage = "partition.swap2"; }
];
};
testScript = ''
nixpart;
ensurePartition("swap1", "swap");
ensurePartition("swap2", "swap");
ensurePartition("/dev/md0", "ext3");
ensurePartition("/dev/md1", "xfs");
ensureNoPartition("vdb4");
ensureNoPartition("vdc4");
ensureNoPartition("md2");
remountAndCheck;
ensureMountPoint("/mnt");
ensureMountPoint("/mnt/boot");
'';
};
raidLvmCrypt = {
config = {
storage = {
disk.vdb.clear = true;
disk.vdb.initlabel = true;
partition.raid1.size = "fill";
partition.raid1.targetDevice = "disk.vdb";
disk.vdc.clear = true;
disk.vdc.initlabel = true;
partition.raid2.size = "fill";
partition.raid2.targetDevice = "disk.vdc";
mdraid.raid.level = 1;
mdraid.raid.devices = [ "partition.raid1" "partition.raid2" ];
/* TODO!
luks.volroot.passphrase = "x";
luks.volroot.targetDevice = "mdraid.raid";
*/
volgroup.nixos.devices = [ "luks.volroot" ];
logvol.boot.size.mib = 200;
logvol.boot.group = "volgroup.nixos";
logvol.swap.size.mib = 500;
logvol.swap.group = "volgroup.nixos";
logvol.root.size = "fill";
logvol.root.group = "volgroup.nixos";
};
fileSystems."/boot" = {
label = "boot";
fsType = "ext3";
storage = "logvol.boot";
};
fileSystems."/" = {
label = "root";
fsType = "ext4";
storage = "logvol.root";
};
swapDevices = [
{ label = "swap"; storage = "logvol.swap"; }
];
};
testScript = ''
nixpart;
ensurePartition("/dev/vdb1", "data");
ensureNoPartition("vdb2");
ensurePartition("/dev/vdc1", "data");
ensureNoPartition("vdc2");
ensurePartition("/dev/md0", "luks");
ensureNoPartition("md1");
ensurePartition("/dev/nixos/boot", "ext3");
ensurePartition("/dev/nixos/swap", "swap");
ensurePartition("/dev/nixos/root", "ext4");
remountAndCheck;
ensureMountPoint("/mnt");
ensureMountPoint("/mnt/boot");
'';
};
matchers = let
# Match by sysfs path:
match5 = "/sys/devices/pci0000:00/0000:00:0e.0/virtio11/block/vdf";
# Match by UUID:
match6 = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee";
# Do slightly more complicated matching based on the number occurring
# within the disk name ("match7"): walk the available devices from
# /dev/vda to /dev/vdN until we reach the number from the disk name,
# whilst skipping /dev/vda.
match7 = ''
num="''${disk//[^0-9]}"
for i in /dev/vd?; do
num=$((num - 1))
if [ $num -lt 0 ]; then echo "$i"; break; fi
done
'';
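To make the match7 walk concrete: with `num` extracted from the disk name, the loop decrements once per device and emits the device that drives the counter below zero, which skips /dev/vda by construction. A Python rendering of the same shell logic:

```python
def match_by_number(disk_name, devices):
    """Pick a device from a sorted list using the digits in the disk name,
    skipping the first device -- mirrors the match7 shell snippet."""
    num = int("".join(ch for ch in disk_name if ch.isdigit()))
    for dev in sorted(devices):
        num -= 1              # one decrement per device, /dev/vda included
        if num < 0:
            return dev        # first device to push the counter below zero
    return None
```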
in {
diskImages = 8;
prepare = ''
mkfs.xfs -L match2 /dev/vdc
mkfs.xfs -m uuid=${match6} /dev/vdg
'';
extraMachineConfig = {
# This is in order to fake a disk ID for match1 (/dev/vdb).
services.udev.extraRules = ''
KERNEL=="vdb", SUBSYSTEM=="block", SYMLINK+="disk/by-id/match1"
'';
};
config = {
storage = {
disk.match1.match.id = "match1"; # vdb
disk.match2.match.label = "match2"; # vdc
disk.match3.match.name = "vdd"; # vdd
disk.match4.match.path = "/dev/vde"; # vde
disk.match5.match.sysfsPath = match5; # vdf
disk.match6.match.uuid = match6; # vdg
disk.match7.match.script = match7; # vdh
disk.match8.match.physicalPos = 9; # vdi
};
# This basically creates a bunch of filesystem option definitions for
# match1 to match8.
fileSystems = builtins.listToAttrs (builtins.genList (idx: let
name = "match${toString (idx + 1)}";
in {
name = "/${name}";
value.storage = "disk.${name}";
value.fsType = "ext4";
}) 8);
};
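The genList/listToAttrs idiom that fans out eight near-identical fileSystems entries corresponds to a plain dict comprehension; a Python sketch of the mapping it produces:

```python
def match_filesystems(count=8):
    """Build the /matchN -> {storage, fsType} mapping generated by
    builtins.listToAttrs (builtins.genList ...) in the Nix config."""
    return {
        "/match%d" % (i + 1): {
            "storage": "disk.match%d" % (i + 1),
            "fsType": "ext4",
        }
        for i in range(count)
    }
```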
testScript = ''
nixpart;
ensurePartition("/dev/vdb", "ext4");
ensurePartition("/dev/vdc", "ext4");
ensurePartition("/dev/vdd", "ext4");
ensurePartition("/dev/vde", "ext4");
ensurePartition("/dev/vdf", "ext4");
ensurePartition("/dev/vdg", "ext4");
ensurePartition("/dev/vdh", "ext4");
ensurePartition("/dev/vdi", "ext4");
remountAndCheck;
ensureMountPoint("/mnt/match1");
ensureMountPoint("/mnt/match2");
ensureMountPoint("/mnt/match3");
ensureMountPoint("/mnt/match4");
ensureMountPoint("/mnt/match5");
ensureMountPoint("/mnt/match6");
ensureMountPoint("/mnt/match7");
ensureMountPoint("/mnt/match8");
'';
};
}


@@ -0,0 +1,871 @@
diff --git a/src/plugins/btrfs.c b/src/plugins/btrfs.c
index 21a6236..8f68002 100644
--- a/src/plugins/btrfs.c
+++ b/src/plugins/btrfs.c
@@ -100,7 +100,7 @@ void bd_btrfs_filesystem_info_free (BDBtrfsFilesystemInfo *info) {
*/
gboolean bd_btrfs_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("btrfs", BTRFS_MIN_VERSION, NULL, "[Bb]trfs.* v([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("btrfs", BTRFS_BIN_PATH, BTRFS_MIN_VERSION, NULL, "[Bb]trfs.* v([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the BTRFS plugin: %s" , error->message);
@@ -255,7 +255,7 @@ gboolean bd_btrfs_create_volume (const gchar **devices, const gchar *label, cons
num_args += 2;
argv = g_new0 (const gchar*, num_args + 2);
- argv[0] = "mkfs.btrfs";
+ argv[0] = MKFS_BTRFS_BIN_PATH;
if (label) {
argv[next_arg] = "--label";
next_arg++;
@@ -295,7 +295,7 @@ gboolean bd_btrfs_create_volume (const gchar **devices, const gchar *label, cons
* Returns: whether the @device was successfully added to the @mountpoint btrfs volume or not
*/
gboolean bd_btrfs_add_device (const gchar *mountpoint, const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *argv[6] = {"btrfs", "device", "add", device, mountpoint, NULL};
+ const gchar *argv[6] = {BTRFS_BIN_PATH, "device", "add", device, mountpoint, NULL};
return bd_utils_exec_and_report_error (argv, extra, error);
}
@@ -310,7 +310,7 @@ gboolean bd_btrfs_add_device (const gchar *mountpoint, const gchar *device, cons
* Returns: whether the @device was successfully removed from the @mountpoint btrfs volume or not
*/
gboolean bd_btrfs_remove_device (const gchar *mountpoint, const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *argv[6] = {"btrfs", "device", "delete", device, mountpoint, NULL};
+ const gchar *argv[6] = {BTRFS_BIN_PATH, "device", "delete", device, mountpoint, NULL};
return bd_utils_exec_and_report_error (argv, extra, error);
}
@@ -327,7 +327,7 @@ gboolean bd_btrfs_remove_device (const gchar *mountpoint, const gchar *device, c
gboolean bd_btrfs_create_subvolume (const gchar *mountpoint, const gchar *name, const BDExtraArg **extra, GError **error) {
gchar *path = NULL;
gboolean success = FALSE;
- const gchar *argv[5] = {"btrfs", "subvol", "create", NULL, NULL};
+ const gchar *argv[5] = {BTRFS_BIN_PATH, "subvol", "create", NULL, NULL};
if (g_str_has_suffix (mountpoint, "/"))
path = g_strdup_printf ("%s%s", mountpoint, name);
@@ -354,7 +354,7 @@ gboolean bd_btrfs_create_subvolume (const gchar *mountpoint, const gchar *name,
gboolean bd_btrfs_delete_subvolume (const gchar *mountpoint, const gchar *name, const BDExtraArg **extra, GError **error) {
gchar *path = NULL;
gboolean success = FALSE;
- const gchar *argv[5] = {"btrfs", "subvol", "delete", NULL, NULL};
+ const gchar *argv[5] = {BTRFS_BIN_PATH, "subvol", "delete", NULL, NULL};
if (g_str_has_suffix (mountpoint, "/"))
path = g_strdup_printf ("%s%s", mountpoint, name);
@@ -383,7 +383,7 @@ guint64 bd_btrfs_get_default_subvolume_id (const gchar *mountpoint, GError **err
gchar *output = NULL;
gchar *match = NULL;
guint64 ret = 0;
- const gchar *argv[5] = {"btrfs", "subvol", "get-default", mountpoint, NULL};
+ const gchar *argv[5] = {BTRFS_BIN_PATH, "subvol", "get-default", mountpoint, NULL};
regex = g_regex_new ("ID (\\d+) .*", 0, 0, error);
if (!regex) {
@@ -430,7 +430,7 @@ guint64 bd_btrfs_get_default_subvolume_id (const gchar *mountpoint, GError **err
* to @subvol_id or not
*/
gboolean bd_btrfs_set_default_subvolume (const gchar *mountpoint, guint64 subvol_id, const BDExtraArg **extra, GError **error) {
- const gchar *argv[6] = {"btrfs", "subvol", "set-default", NULL, mountpoint, NULL};
+ const gchar *argv[6] = {BTRFS_BIN_PATH, "subvol", "set-default", NULL, mountpoint, NULL};
gboolean ret = FALSE;
argv[3] = g_strdup_printf ("%"G_GUINT64_FORMAT, subvol_id);
@@ -452,7 +452,7 @@ gboolean bd_btrfs_set_default_subvolume (const gchar *mountpoint, guint64 subvol
* Returns: whether the @dest snapshot of @source was successfully created or not
*/
gboolean bd_btrfs_create_snapshot (const gchar *source, const gchar *dest, gboolean ro, const BDExtraArg **extra, GError **error) {
- const gchar *argv[7] = {"btrfs", "subvol", "snapshot", NULL, NULL, NULL, NULL};
+ const gchar *argv[7] = {BTRFS_BIN_PATH, "subvol", "snapshot", NULL, NULL, NULL, NULL};
guint next_arg = 3;
if (ro) {
@@ -475,7 +475,7 @@ gboolean bd_btrfs_create_snapshot (const gchar *source, const gchar *dest, gbool
* containing @device or %NULL in case of error
*/
BDBtrfsDeviceInfo** bd_btrfs_list_devices (const gchar *device, GError **error) {
- const gchar *argv[5] = {"btrfs", "filesystem", "show", device, NULL};
+ const gchar *argv[5] = {BTRFS_BIN_PATH, "filesystem", "show", device, NULL};
gchar *output = NULL;
gboolean success = FALSE;
gchar **lines = NULL;
@@ -547,7 +547,7 @@ BDBtrfsDeviceInfo** bd_btrfs_list_devices (const gchar *device, GError **error)
* list before its parent (sub)volume.
*/
BDBtrfsSubvolumeInfo** bd_btrfs_list_subvolumes (const gchar *mountpoint, gboolean snapshots_only, GError **error) {
- const gchar *argv[7] = {"btrfs", "subvol", "list", "-p", NULL, NULL, NULL};
+ const gchar *argv[7] = {BTRFS_BIN_PATH, "subvol", "list", "-p", NULL, NULL, NULL};
gchar *output = NULL;
gboolean success = FALSE;
gchar **lines = NULL;
@@ -660,7 +660,7 @@ BDBtrfsSubvolumeInfo** bd_btrfs_list_subvolumes (const gchar *mountpoint, gboole
* Returns: information about the @device's volume's filesystem or %NULL in case of error
*/
BDBtrfsFilesystemInfo* bd_btrfs_filesystem_info (const gchar *device, GError **error) {
- const gchar *argv[5] = {"btrfs", "filesystem", "show", device, NULL};
+ const gchar *argv[5] = {BTRFS_BIN_PATH, "filesystem", "show", device, NULL};
gchar *output = NULL;
gboolean success = FALSE;
gchar const * const pattern = "Label:\\s+(none|'(?P<label>.+)')\\s+" \
@@ -728,7 +728,7 @@ gboolean bd_btrfs_mkfs (const gchar **devices, const gchar *label, const gchar *
* or not
*/
gboolean bd_btrfs_resize (const gchar *mountpoint, guint64 size, const BDExtraArg **extra, GError **error) {
- const gchar *argv[6] = {"btrfs", "filesystem", "resize", NULL, mountpoint, NULL};
+ const gchar *argv[6] = {BTRFS_BIN_PATH, "filesystem", "resize", NULL, mountpoint, NULL};
gboolean ret = FALSE;
argv[3] = g_strdup_printf ("%"G_GUINT64_FORMAT, size);
@@ -748,7 +748,7 @@ gboolean bd_btrfs_resize (const gchar *mountpoint, guint64 size, const BDExtraAr
* Returns: whether the filesystem was successfully checked or not
*/
gboolean bd_btrfs_check (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *argv[4] = {"btrfs", "check", device, NULL};
+ const gchar *argv[4] = {BTRFS_BIN_PATH, "check", device, NULL};
return bd_utils_exec_and_report_error (argv, extra, error);
}
@@ -763,7 +763,7 @@ gboolean bd_btrfs_check (const gchar *device, const BDExtraArg **extra, GError *
* Returns: whether the filesystem was successfully checked and repaired or not
*/
gboolean bd_btrfs_repair (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *argv[5] = {"btrfs", "check", "--repair", device, NULL};
+ const gchar *argv[5] = {BTRFS_BIN_PATH, "check", "--repair", device, NULL};
return bd_utils_exec_and_report_error (argv, extra, error);
}
@@ -778,7 +778,7 @@ gboolean bd_btrfs_repair (const gchar *device, const BDExtraArg **extra, GError
* to @label or not
*/
gboolean bd_btrfs_change_label (const gchar *mountpoint, const gchar *label, GError **error) {
- const gchar *argv[6] = {"btrfs", "filesystem", "label", mountpoint, label, NULL};
+ const gchar *argv[6] = {BTRFS_BIN_PATH, "filesystem", "label", mountpoint, label, NULL};
return bd_utils_exec_and_report_error (argv, NULL, error);
}
diff --git a/src/plugins/dm.c b/src/plugins/dm.c
index 8151681..10ff0c5 100644
--- a/src/plugins/dm.c
+++ b/src/plugins/dm.c
@@ -68,7 +68,7 @@ static void discard_dm_log (int level __attribute__((unused)), const char *file
*/
gboolean bd_dm_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("dmsetup", DM_MIN_VERSION, NULL, "Library version:\\s+([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("dmsetup", DMSETUP_BIN_PATH, DM_MIN_VERSION, NULL, "Library version:\\s+([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the DM plugin: %s" , error->message);
@@ -116,7 +116,7 @@ void bd_dm_close () {
*/
gboolean bd_dm_create_linear (const gchar *map_name, const gchar *device, guint64 length, const gchar *uuid, GError **error) {
gboolean success = FALSE;
- const gchar *argv[9] = {"dmsetup", "create", map_name, "--table", NULL, NULL, NULL, NULL, NULL};
+ const gchar *argv[9] = {DMSETUP_BIN_PATH, "create", map_name, "--table", NULL, NULL, NULL, NULL, NULL};
gchar *table = g_strdup_printf ("0 %"G_GUINT64_FORMAT" linear %s 0", length, device);
argv[4] = table;
@@ -142,7 +142,7 @@ gboolean bd_dm_create_linear (const gchar *map_name, const gchar *device, guint6
* Returns: whether the @map_name map was successfully removed or not
*/
gboolean bd_dm_remove (const gchar *map_name, GError **error) {
- const gchar *argv[4] = {"dmsetup", "remove", map_name, NULL};
+ const gchar *argv[4] = {DMSETUP_BIN_PATH, "remove", map_name, NULL};
return bd_utils_exec_and_report_error (argv, NULL, error);
}
diff --git a/src/plugins/fs.c b/src/plugins/fs.c
index a10642e..cd51876 100644
--- a/src/plugins/fs.c
+++ b/src/plugins/fs.c
@@ -141,7 +141,7 @@ void bd_fs_vfat_info_free (BDFSVfatInfo *data) {
gboolean bd_fs_check_deps () {
GError *error = NULL;
gboolean check_ret = TRUE;
- gboolean ret = bd_utils_check_util_version ("mkfs.ext4", NULL, "", NULL, &error);
+ gboolean ret = bd_utils_check_util_version ("mkfs.ext4", MKFS_EXT4_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
@@ -149,84 +149,84 @@ gboolean bd_fs_check_deps () {
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("e2fsck", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("e2fsck", E2FSCK_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("tune2fs", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("tune2fs", TUNE2FS_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("dumpe2fs", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("dumpe2fs", DUMPE2FS_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("resize2fs", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("resize2fs", RESIZE2FS_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("mkfs.xfs", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("mkfs.xfs", MKFS_XFS_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("xfs_db", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("xfs_db", XFS_DB_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("xfs_repair", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("xfs_repair", XFS_REPAIR_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("xfs_admin", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("xfs_admin", XFS_ADMIN_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("xfs_growfs", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("xfs_growfs", XFS_GROWFS_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("mkfs.vfat", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("mkfs.vfat", MKFS_VFAT_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("fatlabel", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("fatlabel", FATLABEL_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("fsck.vfat", NULL, "", NULL, &error);
+ ret = bd_utils_check_util_version ("fsck.vfat", FSCK_VFAT_BIN_PATH, NULL, "", NULL, &error);
if (!ret && error) {
g_warning("Cannot load the FS plugin: %s" , error->message);
g_clear_error (&error);
@@ -568,7 +568,7 @@ static gboolean wipe_fs (const gchar *device, const gchar *fs_type, gboolean wip
* Returns: whether a new ext4 fs was successfully created on @device or not
*/
gboolean bd_fs_ext4_mkfs (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *args[3] = {"mkfs.ext4", device, NULL};
+ const gchar *args[3] = {MKFS_EXT4_BIN_PATH, device, NULL};
return bd_utils_exec_and_report_error (args, extra, error);
}
@@ -598,7 +598,7 @@ gboolean bd_fs_ext4_check (const gchar *device, const BDExtraArg **extra, GError
/* Force checking even if the file system seems clean. AND
* Open the filesystem read-only, and assume an answer of no to all
* questions. */
- const gchar *args[5] = {"e2fsck", "-f", "-n", device, NULL};
+ const gchar *args[5] = {E2FSCK_BIN_PATH, "-f", "-n", device, NULL};
gint status = 0;
gboolean ret = FALSE;
@@ -625,7 +625,7 @@ gboolean bd_fs_ext4_repair (const gchar *device, gboolean unsafe, const BDExtraA
/* Force checking even if the file system seems clean. AND
* Automatically repair what can be safely repaired. OR
* Assume an answer of `yes' to all questions. */
- const gchar *args[5] = {"e2fsck", "-f", unsafe ? "-y" : "-p", device, NULL};
+ const gchar *args[5] = {E2FSCK_BIN_PATH, "-f", unsafe ? "-y" : "-p", device, NULL};
return bd_utils_exec_and_report_error (args, extra, error);
}
@@ -640,7 +640,7 @@ gboolean bd_fs_ext4_repair (const gchar *device, gboolean unsafe, const BDExtraA
* successfully set or not
*/
gboolean bd_fs_ext4_set_label (const gchar *device, const gchar *label, GError **error) {
- const gchar *args[5] = {"tune2fs", "-L", label, device, NULL};
+ const gchar *args[5] = {TUNE2FS_BIN_PATH, "-L", label, device, NULL};
return bd_utils_exec_and_report_error (args, NULL, error);
}
@@ -721,7 +721,7 @@ static BDFSExt4Info* get_ext4_info_from_table (GHashTable *table, gboolean free_
* %NULL in case of error
*/
BDFSExt4Info* bd_fs_ext4_get_info (const gchar *device, GError **error) {
- const gchar *args[4] = {"dumpe2fs", "-h", device, NULL};
+ const gchar *args[4] = {DUMPE2FS_BIN_PATH, "-h", device, NULL};
gboolean success = FALSE;
gchar *output = NULL;
GHashTable *table = NULL;
@@ -765,7 +765,7 @@ BDFSExt4Info* bd_fs_ext4_get_info (const gchar *device, GError **error) {
* Returns: whether the file system on @device was successfully resized or not
*/
gboolean bd_fs_ext4_resize (const gchar *device, guint64 new_size, const BDExtraArg **extra, GError **error) {
- const gchar *args[4] = {"resize2fs", device, NULL, NULL};
+ const gchar *args[4] = {RESIZE2FS_BIN_PATH, device, NULL, NULL};
gboolean ret = FALSE;
if (new_size != 0)
@@ -787,7 +787,7 @@ gboolean bd_fs_ext4_resize (const gchar *device, guint64 new_size, const BDExtra
* Returns: whether a new xfs fs was successfully created on @device or not
*/
gboolean bd_fs_xfs_mkfs (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *args[3] = {"mkfs.xfs", device, NULL};
+ const gchar *args[3] = {MKFS_XFS_BIN_PATH, device, NULL};
return bd_utils_exec_and_report_error (args, extra, error);
}
@@ -815,7 +815,7 @@ gboolean bd_fs_xfs_wipe (const gchar *device, GError **error) {
* everything is okay and there are just some pending/in-progress writes
*/
gboolean bd_fs_xfs_check (const gchar *device, GError **error) {
- const gchar *args[6] = {"xfs_db", "-r", "-c", "check", device, NULL};
+ const gchar *args[6] = {XFS_DB_BIN_PATH, "-r", "-c", "check", device, NULL};
gboolean ret = FALSE;
ret = bd_utils_exec_and_report_error (args, NULL, error);
@@ -837,7 +837,7 @@ gboolean bd_fs_xfs_check (const gchar *device, GError **error) {
* (if needed) or not (error is set in that case)
*/
gboolean bd_fs_xfs_repair (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *args[3] = {"xfs_repair", device, NULL};
+ const gchar *args[3] = {XFS_REPAIR_BIN_PATH, device, NULL};
return bd_utils_exec_and_report_error (args, extra, error);
}
@@ -852,7 +852,7 @@ gboolean bd_fs_xfs_repair (const gchar *device, const BDExtraArg **extra, GError
* successfully set or not
*/
gboolean bd_fs_xfs_set_label (const gchar *device, const gchar *label, GError **error) {
- const gchar *args[5] = {"xfs_admin", "-L", label, device, NULL};
+ const gchar *args[5] = {XFS_ADMIN_BIN_PATH, "-L", label, device, NULL};
if (!label || (strncmp (label, "", 1) == 0))
args[2] = "--";
@@ -868,7 +868,7 @@ gboolean bd_fs_xfs_set_label (const gchar *device, const gchar *label, GError **
* %NULL in case of error
*/
BDFSXfsInfo* bd_fs_xfs_get_info (const gchar *device, GError **error) {
- const gchar *args[4] = {"xfs_admin", "-lu", device, NULL};
+ const gchar *args[4] = {XFS_ADMIN_BIN_PATH, "-lu", device, NULL};
gboolean success = FALSE;
gchar *output = NULL;
BDFSXfsInfo *ret = NULL;
@@ -906,7 +906,7 @@ BDFSXfsInfo* bd_fs_xfs_get_info (const gchar *device, GError **error) {
}
g_strfreev (lines);
- args[0] = "xfs_info";
+ args[0] = XFS_INFO_BIN_PATH;
args[1] = device;
args[2] = NULL;
success = bd_utils_exec_and_capture_output (args, NULL, &output, error);
@@ -976,7 +976,7 @@ BDFSXfsInfo* bd_fs_xfs_get_info (const gchar *device, GError **error) {
* Returns: whether the file system mounted on @mpoint was successfully resized or not
*/
gboolean bd_fs_xfs_resize (const gchar *mpoint, guint64 new_size, const BDExtraArg **extra, GError **error) {
- const gchar *args[5] = {"xfs_growfs", NULL, NULL, NULL, NULL};
+ const gchar *args[5] = {XFS_GROWFS_BIN_PATH, NULL, NULL, NULL, NULL};
gchar *size_str = NULL;
gboolean ret = FALSE;
@@ -1005,7 +1005,7 @@ gboolean bd_fs_xfs_resize (const gchar *mpoint, guint64 new_size, const BDExtraA
* Returns: whether a new vfat fs was successfully created on @device or not
*/
gboolean bd_fs_vfat_mkfs (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *args[3] = {"mkfs.vfat", device, NULL};
+ const gchar *args[3] = {MKFS_VFAT_BIN_PATH, device, NULL};
return bd_utils_exec_and_report_error (args, extra, error);
}
@@ -1032,7 +1032,7 @@ gboolean bd_fs_vfat_wipe (const gchar *device, GError **error) {
* Returns: whether an vfat file system on the @device is clean or not
*/
gboolean bd_fs_vfat_check (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *args[4] = {"fsck.vfat", "-n", device, NULL};
+ const gchar *args[4] = {FSCK_VFAT_BIN_PATH, "-n", device, NULL};
gint status = 0;
gboolean ret = FALSE;
@@ -1055,7 +1055,7 @@ gboolean bd_fs_vfat_check (const gchar *device, const BDExtraArg **extra, GError
* (if needed) or not (error is set in that case)
*/
gboolean bd_fs_vfat_repair (const gchar *device, const BDExtraArg **extra, GError **error) {
- const gchar *args[4] = {"fsck.vfat", "-a", device, NULL};
+ const gchar *args[4] = {FSCK_VFAT_BIN_PATH, "-a", device, NULL};
return bd_utils_exec_and_report_error (args, extra, error);
}
@@ -1070,7 +1070,7 @@ gboolean bd_fs_vfat_repair (const gchar *device, const BDExtraArg **extra, GErro
* successfully set or not
*/
gboolean bd_fs_vfat_set_label (const gchar *device, const gchar *label, GError **error) {
- const gchar *args[4] = {"fatlabel", device, label, NULL};
+ const gchar *args[4] = {FATLABEL_BIN_PATH, device, label, NULL};
return bd_utils_exec_and_report_error (args, NULL, error);
}
@@ -1084,7 +1084,7 @@ gboolean bd_fs_vfat_set_label (const gchar *device, const gchar *label, GError *
* %NULL in case of error
*/
BDFSVfatInfo* bd_fs_vfat_get_info (const gchar *device, GError **error) {
- const gchar *args[4] = {"fsck.vfat", "-nv", device, NULL};
+ const gchar *args[4] = {FSCK_VFAT_BIN_PATH, "-nv", device, NULL};
blkid_probe probe = NULL;
gint fd = 0;
gint status = 0;
diff --git a/src/plugins/kbd.c b/src/plugins/kbd.c
index 612160b..165c9a8 100644
--- a/src/plugins/kbd.c
+++ b/src/plugins/kbd.c
@@ -69,7 +69,7 @@ gboolean bd_kbd_check_deps () {
if (!ret)
return FALSE;
- ret = bd_utils_check_util_version ("make-bcache", NULL, NULL, NULL, &error);
+ ret = bd_utils_check_util_version ("make-bcache", MAKE_BCACHE_BIN_PATH, NULL, NULL, NULL, &error);
if (!ret && error) {
g_warning("Cannot load the kbd plugin: %s" , error->message);
g_clear_error (&error);
@@ -687,7 +687,7 @@ BDKBDZramStats* bd_kbd_zram_get_stats (const gchar *device, GError **error) {
* Returns: whether the bcache device was successfully created or not
*/
gboolean bd_kbd_bcache_create (const gchar *backing_device, const gchar *cache_device, const BDExtraArg **extra, const gchar **bcache_device, GError **error) {
- const gchar *argv[6] = {"make-bcache", "-B", backing_device, "-C", cache_device, NULL};
+ const gchar *argv[6] = {MAKE_BCACHE_BIN_PATH, "-B", backing_device, "-C", cache_device, NULL};
gboolean success = FALSE;
gchar *output = NULL;
gchar **lines = NULL;
diff --git a/src/plugins/loop.c b/src/plugins/loop.c
index 3509657..eb36c92 100644
--- a/src/plugins/loop.c
+++ b/src/plugins/loop.c
@@ -51,7 +51,7 @@ GQuark bd_loop_error_quark (void)
*/
gboolean bd_loop_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("losetup", LOSETUP_MIN_VERSION, NULL, "losetup from util-linux\\s+([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("losetup", LOSETUP_BIN_PATH, LOSETUP_MIN_VERSION, NULL, "losetup from util-linux\\s+([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the loop plugin: %s" , error->message);
@@ -173,7 +173,7 @@ gchar* bd_loop_get_loop_name (const gchar *file, GError **error __attribute__((u
*/
gboolean bd_loop_setup (const gchar *file, guint64 offset, guint64 size, gboolean read_only, gboolean part_scan, const gchar **loop_name, GError **error) {
/* losetup -f -o offset --sizelimit size -P -r file NULL */
- const gchar *args[10] = {"losetup", "-f", NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL};
+ const gchar *args[10] = {LOSETUP_BIN_PATH, "-f", NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL};
gint args_top = 2;
gboolean success = FALSE;
gchar *offset_str = NULL;
@@ -222,7 +222,7 @@ gboolean bd_loop_teardown (const gchar *loop, GError **error) {
gboolean success = FALSE;
gchar *dev_loop = NULL;
- const gchar *args[4] = {"losetup", "-d", NULL, NULL};
+ const gchar *args[4] = {LOSETUP_BIN_PATH, "-d", NULL, NULL};
if (g_str_has_prefix (loop, "/dev/"))
args[2] = loop;
diff --git a/src/plugins/lvm.c b/src/plugins/lvm.c
index 23002b1..a02e1ef 100644
--- a/src/plugins/lvm.c
+++ b/src/plugins/lvm.c
@@ -164,7 +164,7 @@ void bd_lvm_cache_stats_free (BDLVMCacheStats *data) {
*/
gboolean bd_lvm_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("lvm", LVM_MIN_VERSION, "version", "LVM version:\\s+([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("lvm", LVM_BIN_PATH, LVM_MIN_VERSION, "version", "LVM version:\\s+([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the LVM plugin: %s" , error->message);
@@ -208,7 +208,7 @@ static gboolean call_lvm_and_report_error (const gchar **args, const BDExtraArg
const gchar **argv = g_new0 (const gchar*, args_length + 3);
/* construct argv from args with "lvm" prepended */
- argv[0] = "lvm";
+ argv[0] = LVM_BIN_PATH;
for (i=0; i < args_length; i++)
argv[i+1] = args[i];
argv[args_length + 1] = global_config_str ? g_strdup_printf("--config=%s", global_config_str) : NULL;
@@ -234,7 +234,7 @@ static gboolean call_lvm_and_capture_output (const gchar **args, const BDExtraAr
const gchar **argv = g_new0 (const gchar*, args_length + 3);
/* construct argv from args with "lvm" prepended */
- argv[0] = "lvm";
+ argv[0] = LVM_BIN_PATH;
for (i=0; i < args_length; i++)
argv[i+1] = args[i];
argv[args_length + 1] = global_config_str ? g_strdup_printf("--config=%s", global_config_str) : NULL;
diff --git a/src/plugins/mdraid.c b/src/plugins/mdraid.c
index 74930f2..c4a13e8 100644
--- a/src/plugins/mdraid.c
+++ b/src/plugins/mdraid.c
@@ -136,7 +136,7 @@ void bd_md_detail_data_free (BDMDDetailData *data) {
*/
gboolean bd_md_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("mdadm", MDADM_MIN_VERSION, NULL, "mdadm - v([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("mdadm", MDADM_BIN_PATH, MDADM_MIN_VERSION, NULL, "mdadm - v([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the MDRAID plugin: %s" , error->message);
@@ -473,7 +473,7 @@ gboolean bd_md_create (const gchar *device_name, const gchar *level, const gchar
level_str = g_strdup_printf ("--level=%s", level);
rdevices_str = g_strdup_printf ("--raid-devices=%"G_GUINT64_FORMAT, (num_disks - spares));
- argv[argv_top++] = "mdadm";
+ argv[argv_top++] = MDADM_BIN_PATH;
argv[argv_top++] = "--create";
argv[argv_top++] = device_name;
argv[argv_top++] = "--run";
@@ -519,7 +519,7 @@ gboolean bd_md_create (const gchar *device_name, const gchar *level, const gchar
* Returns: whether the MD RAID metadata was successfully destroyed on @device or not
*/
gboolean bd_md_destroy (const gchar *device, GError **error) {
- const gchar *argv[] = {"mdadm", "--zero-superblock", device, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--zero-superblock", device, NULL};
return bd_utils_exec_and_report_error (argv, NULL, error);
}
@@ -532,7 +532,7 @@ gboolean bd_md_destroy (const gchar *device, GError **error) {
* Returns: whether the RAID device @device_name was successfully deactivated or not
*/
gboolean bd_md_deactivate (const gchar *device_name, GError **error) {
- const gchar *argv[] = {"mdadm", "--stop", device_name, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--stop", device_name, NULL};
gchar *dev_md_path = NULL;
gboolean ret = FALSE;
@@ -573,7 +573,7 @@ gboolean bd_md_activate (const gchar *device_name, const gchar **members, const
/* mdadm, --assemble, device_name/--scan, --run, --uuid=uuid, member1, member2,..., NULL*/
argv = g_new0 (const gchar*, num_members + 6);
- argv[argv_top++] = "mdadm";
+ argv[argv_top++] = MDADM_BIN_PATH;
argv[argv_top++] = "--assemble";
if (device_name)
argv[argv_top++] = device_name;
@@ -608,7 +608,7 @@ gboolean bd_md_activate (const gchar *device_name, const gchar **members, const
* Returns: whether the @raid_name was successfully started or not
*/
gboolean bd_md_run (const gchar *raid_name, GError **error) {
- const gchar *argv[] = {"mdadm", "--run", NULL, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--run", NULL, NULL};
gchar *raid_name_str = NULL;
gboolean ret = FALSE;
@@ -634,7 +634,7 @@ gboolean bd_md_run (const gchar *raid_name, GError **error) {
* Note: may start the MD RAID if it becomes ready by adding @device.
*/
gboolean bd_md_nominate (const gchar *device, GError **error) {
- const gchar *argv[] = {"mdadm", "--incremental", "--quiet", "--run", device, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--incremental", "--quiet", "--run", device, NULL};
return bd_utils_exec_and_report_error (argv, NULL, error);
}
@@ -650,7 +650,7 @@ gboolean bd_md_nominate (const gchar *device, GError **error) {
* Note: may start the MD RAID if it becomes ready by adding @device.
*/
gboolean bd_md_denominate (const gchar *device, GError **error) {
- const gchar *argv[] = {"mdadm", "--incremental", "--fail", device, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--incremental", "--fail", device, NULL};
/* XXX: stupid mdadm! --incremental --fail requires "sda1" instead of "/dev/sda1" */
if (g_str_has_prefix (device, "/dev/"))
@@ -680,7 +680,7 @@ gboolean bd_md_denominate (const gchar *device, GError **error) {
* decided by mdadm.
*/
gboolean bd_md_add (const gchar *raid_name, const gchar *device, guint64 raid_devs, const BDExtraArg **extra, GError **error) {
- const gchar *argv[7] = {"mdadm", NULL, NULL, NULL, NULL, NULL, NULL};
+ const gchar *argv[7] = {MDADM_BIN_PATH, NULL, NULL, NULL, NULL, NULL, NULL};
guint argv_top = 1;
gchar *raid_name_str = NULL;
gchar *raid_devs_str = NULL;
@@ -721,7 +721,7 @@ gboolean bd_md_add (const gchar *raid_name, const gchar *device, guint64 raid_de
* RAID or not.
*/
gboolean bd_md_remove (const gchar *raid_name, const gchar *device, gboolean fail, const BDExtraArg **extra, GError **error) {
- const gchar *argv[] = {"mdadm", raid_name, NULL, NULL, NULL, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, raid_name, NULL, NULL, NULL, NULL};
guint argv_top = 2;
gchar *raid_name_str = NULL;
gboolean ret = FALSE;
@@ -754,7 +754,7 @@ gboolean bd_md_remove (const gchar *raid_name, const gchar *device, gboolean fai
* Returns: information about the MD RAID extracted from the @device
*/
BDMDExamineData* bd_md_examine (const gchar *device, GError **error) {
- const gchar *argv[] = {"mdadm", "--examine", "-E", device, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--examine", "-E", device, NULL};
gchar *output = NULL;
gboolean success = FALSE;
GHashTable *table = NULL;
@@ -869,7 +869,7 @@ BDMDExamineData* bd_md_examine (const gchar *device, GError **error) {
* Returns: information about the MD RAID @raid_name
*/
BDMDDetailData* bd_md_detail (const gchar *raid_name, GError **error) {
- const gchar *argv[] = {"mdadm", "--detail", raid_name, NULL};
+ const gchar *argv[] = {MDADM_BIN_PATH, "--detail", raid_name, NULL};
gchar *output = NULL;
gboolean success = FALSE;
GHashTable *table = NULL;
diff --git a/src/plugins/mpath.c b/src/plugins/mpath.c
index 547bfc5..b71188b 100644
--- a/src/plugins/mpath.c
+++ b/src/plugins/mpath.c
@@ -52,7 +52,7 @@ GQuark bd_mpath_error_quark (void)
*/
gboolean bd_mpath_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("multipath", MULTIPATH_MIN_VERSION, NULL, "multipath-tools v([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("multipath", MULTIPATH_BIN_PATH, MULTIPATH_MIN_VERSION, NULL, "multipath-tools v([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the mpath plugin: %s" , error->message);
@@ -63,7 +63,7 @@ gboolean bd_mpath_check_deps () {
return FALSE;
/* mpathconf doesn't report its version */
- ret = bd_utils_check_util_version ("mpathconf", NULL, NULL, NULL, &error);
+ ret = bd_utils_check_util_version ("mpathconf", MPATHCONF_BIN_PATH, NULL, NULL, NULL, &error);
if (!ret && error) {
g_warning("Cannot load the mpath plugin: %s" , error->message);
g_clear_error (&error);
@@ -103,7 +103,7 @@ void bd_mpath_close () {
* Flushes all unused multipath device maps.
*/
gboolean bd_mpath_flush_mpaths (GError **error) {
- const gchar *argv[3] = {"multipath", "-F", NULL};
+ const gchar *argv[3] = {MULTIPATH_BIN_PATH, "-F", NULL};
gboolean success = FALSE;
gchar *output = NULL;
@@ -463,7 +463,7 @@ gchar** bd_mpath_get_mpath_members (GError **error) {
* Returns: if successfully set or not
*/
gboolean bd_mpath_set_friendly_names (gboolean enabled, GError **error) {
- const gchar *argv[8] = {"mpathconf", "--find_multipaths", "y", "--user_friendly_names", NULL, "--with_multipathd", "y", NULL};
+ const gchar *argv[8] = {MPATHCONF_BIN_PATH, "--find_multipaths", "y", "--user_friendly_names", NULL, "--with_multipathd", "y", NULL};
argv[4] = enabled ? "y" : "n";
return bd_utils_exec_and_report_error (argv, NULL, error);
diff --git a/src/plugins/part.c b/src/plugins/part.c
index ba93684..38f2ae9 100644
--- a/src/plugins/part.c
+++ b/src/plugins/part.c
@@ -132,14 +132,14 @@ gboolean bd_part_check_deps () {
GError *error = NULL;
gboolean check_ret = TRUE;
- gboolean ret = bd_utils_check_util_version ("sgdisk", "1.0.1", NULL, "GPT fdisk \\(sgdisk\\) version ([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("sgdisk", SGDISK_BIN_PATH, "1.0.1", NULL, "GPT fdisk \\(sgdisk\\) version ([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the part plugin: %s" , error->message);
g_clear_error (&error);
check_ret = FALSE;
}
- ret = bd_utils_check_util_version ("sfdisk", NULL, NULL, NULL, &error);
+ ret = bd_utils_check_util_version ("sfdisk", SFDISK_BIN_PATH, NULL, NULL, NULL, &error);
if (!ret && error) {
g_warning("Cannot load the part plugin: %s" , error->message);
g_clear_error (&error);
@@ -283,7 +283,7 @@ gboolean bd_part_create_table (const gchar *disk, BDPartTableType type, gboolean
}
static gchar* get_part_type_guid_and_gpt_flags (const gchar *device, int part_num, guint64 *flags, GError **error) {
- const gchar *args[4] = {"sgdisk", NULL, device, NULL};
+ const gchar *args[4] = {SGDISK_BIN_PATH, NULL, device, NULL};
gchar *output = NULL;
gchar **lines = NULL;
gchar **line_p = NULL;
@@ -982,7 +982,7 @@ gboolean bd_part_delete_part (const gchar *disk, const gchar *part, GError **err
}
static gboolean set_gpt_flag (const gchar *device, int part_num, BDPartFlag flag, gboolean state, GError **error) {
- const gchar *args[5] = {"sgdisk", "--attributes", NULL, device, NULL};
+ const gchar *args[5] = {SGDISK_BIN_PATH, "--attributes", NULL, device, NULL};
int bit_num = 0;
gboolean success = FALSE;
@@ -1003,7 +1003,7 @@ static gboolean set_gpt_flag (const gchar *device, int part_num, BDPartFlag flag
}
static gboolean set_gpt_flags (const gchar *device, int part_num, guint64 flags, GError **error) {
- const gchar *args[5] = {"sgdisk", "--attributes", NULL, device, NULL};
+ const gchar *args[5] = {SGDISK_BIN_PATH, "--attributes", NULL, device, NULL};
guint64 real_flags = 0;
gchar *mask_str = NULL;
gboolean success = FALSE;
@@ -1425,7 +1425,7 @@ gboolean bd_part_set_part_name (const gchar *disk, const gchar *part, const gcha
* Returns: whether the @type_guid type was successfully set for @part or not
*/
gboolean bd_part_set_part_type (const gchar *disk, const gchar *part, const gchar *type_guid, GError **error) {
- const gchar *args[5] = {"sgdisk", "--typecode", NULL, disk, NULL};
+ const gchar *args[5] = {SGDISK_BIN_PATH, "--typecode", NULL, disk, NULL};
const gchar *part_num_str = NULL;
gboolean success = FALSE;
guint64 progress_id = 0;
@@ -1475,7 +1475,7 @@ gboolean bd_part_set_part_type (const gchar *disk, const gchar *part, const gcha
* Returns: whether the @part_id type was successfully set for @part or not
*/
gboolean bd_part_set_part_id (const gchar *disk, const gchar *part, const gchar *part_id, GError **error) {
- const gchar *args[6] = {"sfdisk", "--part-type", disk, NULL, part_id, NULL};
+ const gchar *args[6] = {SFDISK_BIN_PATH, "--part-type", disk, NULL, part_id, NULL};
const gchar *part_num_str = NULL;
gboolean success = FALSE;
guint64 progress_id = 0;
@@ -1541,7 +1541,7 @@ gboolean bd_part_set_part_id (const gchar *disk, const gchar *part, const gchar
* Returns (transfer full): partition id type or %NULL in case of error
*/
gchar* bd_part_get_part_id (const gchar *disk, const gchar *part, GError **error) {
- const gchar *args[5] = {"sfdisk", "--part-type", disk, NULL, NULL};
+ const gchar *args[5] = {SFDISK_BIN_PATH, "--part-type", disk, NULL, NULL};
const gchar *part_num_str = NULL;
gchar *output = NULL;
gchar *ret = NULL;
diff --git a/src/plugins/swap.c b/src/plugins/swap.c
index c4ce972..84ce66e 100644
--- a/src/plugins/swap.c
+++ b/src/plugins/swap.c
@@ -50,7 +50,7 @@ GQuark bd_swap_error_quark (void)
*/
gboolean bd_swap_check_deps () {
GError *error = NULL;
- gboolean ret = bd_utils_check_util_version ("mkswap", MKSWAP_MIN_VERSION, NULL, "mkswap from util-linux ([\\d\\.]+)", &error);
+ gboolean ret = bd_utils_check_util_version ("mkswap", MKSWAP_BIN_PATH, MKSWAP_MIN_VERSION, NULL, "mkswap from util-linux ([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the swap plugin: %s" , error->message);
@@ -60,7 +60,7 @@ gboolean bd_swap_check_deps () {
if (!ret)
return FALSE;
- ret = bd_utils_check_util_version ("swapon", SWAPON_MIN_VERSION, NULL, "swapon from util-linux ([\\d\\.]+)", &error);
+ ret = bd_utils_check_util_version ("swapon", SWAPON_BIN_PATH, SWAPON_MIN_VERSION, NULL, "swapon from util-linux ([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the swap plugin: %s" , error->message);
g_clear_error (&error);
@@ -69,7 +69,7 @@ gboolean bd_swap_check_deps () {
if (!ret)
return FALSE;
- ret = bd_utils_check_util_version ("swapoff", SWAPOFF_MIN_VERSION, NULL, "swapoff from util-linux ([\\d\\.]+)", &error);
+ ret = bd_utils_check_util_version ("swapoff", SWAPOFF_BIN_PATH, SWAPOFF_MIN_VERSION, NULL, "swapoff from util-linux ([\\d\\.]+)", &error);
if (!ret && error) {
g_warning("Cannot load the swap plugin: %s" , error->message);
g_clear_error (&error);
@@ -116,7 +116,7 @@ gboolean bd_swap_mkswap (const gchar *device, const gchar *label, const BDExtraA
/* We use -f to force since mkswap tends to refuse creation on lvs with
a message about erasing bootbits sectors on whole disks. Bah. */
- const gchar *argv[6] = {"mkswap", "-f", NULL, NULL, NULL, NULL};
+ const gchar *argv[6] = {MKSWAP_BIN_PATH, "-f", NULL, NULL, NULL, NULL};
if (label) {
argv[next_arg] = "-L";
@@ -152,7 +152,7 @@ gboolean bd_swap_swapon (const gchar *device, gint priority, GError **error) {
guint64 progress_id = 0;
gchar *msg = NULL;
- const gchar *argv[5] = {"swapon", NULL, NULL, NULL, NULL};
+ const gchar *argv[5] = {SWAPON_BIN_PATH, NULL, NULL, NULL, NULL};
msg = g_strdup_printf ("Started 'swapon %s'", device);
progress_id = bd_utils_report_started (msg);
g_free (msg);
@@ -238,7 +238,7 @@ gboolean bd_swap_swapon (const gchar *device, gint priority, GError **error) {
* Returns: whether the swap device was successfully deactivated or not
*/
gboolean bd_swap_swapoff (const gchar *device, GError **error) {
- const gchar *argv[3] = {"swapoff", NULL, NULL};
+ const gchar *argv[3] = {SWAPOFF_BIN_PATH, NULL, NULL};
argv[1] = device;
return bd_utils_exec_and_report_error (argv, NULL, error);
diff --git a/src/utils/exec.c b/src/utils/exec.c
index c31aa80..26d559f 100644
--- a/src/utils/exec.c
+++ b/src/utils/exec.c
@@ -605,6 +605,7 @@ gint bd_utils_version_cmp (const gchar *ver_string1, const gchar *ver_string2, G
/**
* bd_utils_check_util_version:
* @util: name of the utility to check
+ * @util_path: full path to the executable of the utility
* @version: (allow-none): minimum required version of the utility or %NULL
* if no version is required
* @version_arg: (allow-none): argument to use with the @util to get version
@@ -616,23 +617,14 @@ gint bd_utils_version_cmp (const gchar *ver_string1, const gchar *ver_string2, G
* Returns: whether the @util is available in a version >= @version or not
* (@error is set in such case).
*/
-gboolean bd_utils_check_util_version (const gchar *util, const gchar *version, const gchar *version_arg, const gchar *version_regexp, GError **error) {
- gchar *util_path = NULL;
- const gchar *argv[] = {util, version_arg ? version_arg : "--version", NULL};
+gboolean bd_utils_check_util_version (const gchar *util, const gchar *util_path, const gchar *version, const gchar *version_arg, const gchar *version_regexp, GError **error) {
+ const gchar *argv[] = {util_path, version_arg ? version_arg : "--version", NULL};
gchar *output = NULL;
gboolean succ = FALSE;
GRegex *regex = NULL;
GMatchInfo *match_info = NULL;
gchar *version_str = NULL;
- util_path = g_find_program_in_path (util);
- if (!util_path) {
- g_set_error (error, BD_UTILS_EXEC_ERROR, BD_UTILS_EXEC_ERROR_UTIL_UNAVAILABLE,
- "The '%s' utility is not available", util);
- return FALSE;
- }
- g_free (util_path);
-
if (!version)
/* nothing more to do here */
return TRUE;
diff --git a/src/utils/exec.h b/src/utils/exec.h
index cccd463..f1ff16d 100644
--- a/src/utils/exec.h
+++ b/src/utils/exec.h
@@ -56,7 +56,7 @@ gboolean bd_utils_exec_and_capture_output (const gchar **argv, const BDExtraArg
gboolean bd_utils_exec_and_report_progress (const gchar **argv, const BDExtraArg **extra, BDUtilsProgExtract prog_extract, gint *proc_status, GError **error);
gboolean bd_utils_init_logging (BDUtilsLogFunc new_log_func, GError **error);
gint bd_utils_version_cmp (const gchar *ver_string1, const gchar *ver_string2, GError **error);
-gboolean bd_utils_check_util_version (const gchar *util, const gchar *version, const gchar *version_arg, const gchar *version_regexp, GError **error);
+gboolean bd_utils_check_util_version (const gchar *util, const gchar *util_path, const gchar *version, const gchar *version_arg, const gchar *version_regexp, GError **error);
gboolean bd_utils_init_prog_reporting (BDUtilsProgFunc new_prog_func, GError **error);
guint64 bd_utils_report_started (gchar *msg);
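
The patch above only swaps the bare utility names for compile-time `*_BIN_PATH` constants; the version check itself still parses the tool's `--version` output with a regex such as `swapon from util-linux ([\d\.]+)`. A minimal Python sketch of that extraction step (the sample output string is illustrative, not taken from a real run):

```python
import re

# The regex the swap plugin passes to bd_utils_check_util_version
# (see the patch above) to pull the version out of "swapon --version".
VERSION_RE = re.compile(r"swapon from util-linux ([\d\.]+)")

def extract_version(output):
    """Return the matched version string, or None when the output differs."""
    match = VERSION_RE.search(output)
    return match.group(1) if match else None
```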

@@ -0,0 +1,118 @@
{ stdenv, fetchFromGitHub, autoreconfHook, pkgconfig, python3Packages, glib
, gobjectIntrospection, cryptsetup, nss, nspr, devicemapper, udev, kmod, parted
# Dependencies NOT checked by the configure script:
, volume_key, dmraid, libuuid, gtkdoc, docbook_xsl, libxslt, libbytesize
# Dependencies needed for patching in executable paths:
, utillinux, btrfs-progs, lvm2, bcache-tools, mdadm, mpathconf, multipath-tools
, gptfdisk, e2fsprogs, dosfstools, xfsprogs
}:
let
# Paths to program binaries that we need to patch into the source:
binPaths = {
BTRFS = "${btrfs-progs}/bin/btrfs";
DMSETUP = "${devicemapper}/bin/dmsetup";
DUMPE2FS = "${e2fsprogs}/bin/dumpe2fs";
E2FSCK = "${e2fsprogs}/bin/e2fsck";
FATLABEL = "${dosfstools}/bin/fatlabel";
FSCK_VFAT = "${dosfstools}/bin/fsck.vfat";
LOSETUP = "${utillinux}/bin/losetup";
LVM = "${lvm2.override { enableThinProvisioning = true; }}/bin/lvm";
MAKE_BCACHE = "${bcache-tools}/bin/make-bcache";
MDADM = "${mdadm}/bin/mdadm";
MKFS_BTRFS = "${btrfs-progs}/bin/mkfs.btrfs";
MKFS_EXT4 = "${e2fsprogs}/bin/mkfs.ext4";
MKFS_VFAT = "${dosfstools}/bin/mkfs.vfat";
MKFS_XFS = "${xfsprogs}/bin/mkfs.xfs";
MKSWAP = "${utillinux}/bin/mkswap";
MPATHCONF = "${mpathconf}/bin/mpathconf";
MULTIPATH = "${multipath-tools}/bin/multipath";
RESIZE2FS = "${e2fsprogs}/bin/resize2fs";
SFDISK = "${utillinux}/bin/sfdisk";
SGDISK = "${gptfdisk}/bin/sgdisk";
SWAPOFF = "${utillinux}/bin/swapoff";
SWAPON = "${utillinux}/bin/swapon";
TUNE2FS = "${e2fsprogs}/bin/tune2fs";
XFS_ADMIN = "${xfsprogs}/bin/xfs_admin";
XFS_DB = "${xfsprogs}/bin/xfs_db";
XFS_GROWFS = "${xfsprogs}/bin/xfs_growfs";
XFS_INFO = "${xfsprogs}/bin/xfs_info";
XFS_REPAIR = "${xfsprogs}/bin/xfs_repair";
};
# Create a shell script fragment to check whether a particular path exists.
mkBinCheck = prog: path: let
inherit (stdenv.lib) escapeShellArg;
in ''
echo -n ${escapeShellArg "Checking if ${prog} (${path}) exists..."} >&2
if [ -e ${escapeShellArg path} ]; then
echo ' yes.' >&2
else
echo ' no!' >&2
exit 1
fi
'';
in stdenv.mkDerivation rec {
name = "libblockdev-${version}";
version = "2.1";
src = fetchFromGitHub {
owner = "rhinstaller";
repo = "libblockdev";
rev = "${name}-1";
sha256 = "17q07bvh61l0d9iq9y30fgsa4yigsxkp4b93c6dyb7p1nzmb2085";
};
patches = [ ./bin-paths.patch ./tests.patch ];
outputs = [ "out" "tests" ];
postPatch = ''
patchShebangs .
sed -i -e 's,/usr/include/volume_key,${volume_key}/include/volume_key,' \
src/plugins/Makefile.am
sed -i -e 's,^#define *DEFAULT_CONF_DIR_PATH *",&'"$out"',' \
src/lib/blockdev.c.in
sed -i -e 's/python3-pylint/pylint/' Makefile.am
'';
nativeBuildInputs = [
autoreconfHook pkgconfig gtkdoc docbook_xsl libxslt python3Packages.pylint
];
buildInputs = [
(volume_key.override { inherit (python3Packages) python; })
parted libbytesize python3Packages.python glib cryptsetup nss nspr
devicemapper udev kmod dmraid libuuid
];
propagatedBuildInputs = [
python3Packages.pygobject3 python3Packages.six gobjectIntrospection
];
NIX_CFLAGS_COMPILE = let
mkDefine = prog: path: "-D${prog}_BIN_PATH=\"${path}\"";
in stdenv.lib.mapAttrsToList mkDefine binPaths;
# Note that real tests are run as a subtest in nixos/tests/blivet.
doCheck = true;
checkPhase = let
checkList = stdenv.lib.mapAttrsToList mkBinCheck binPaths;
in stdenv.lib.concatStrings checkList;
postInstall = ''
sed -i -e 's,^ *sonames *= *,&'"$out/lib/"',' \
"$out/etc/libblockdev/conf.d/"*
mkdir "$tests"
cp -Rd tests "$tests/"
'';
meta = {
homepage = "https://github.com/rhinstaller/libblockdev";
description = "A library for low-level manipulation of block devices";
license = stdenv.lib.licenses.lgpl21Plus;
};
}
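
The `NIX_CFLAGS_COMPILE` attribute above turns every entry of `binPaths` into a preprocessor define that the patched C code (for example `SWAPON_BIN_PATH`) picks up. A rough Python equivalent of the `mkDefine` helper, shown with a made-up store path:

```python
def mk_define(prog, path):
    # One -D flag per utility; the quotes make the path a C string
    # literal, mirroring the Nix mkDefine helper above.
    return '-D%s_BIN_PATH="%s"' % (prog, path)
```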

@@ -0,0 +1,126 @@
diff --git a/tests/library_test.py b/tests/library_test.py
index 100889b..df94c07 100644
--- a/tests/library_test.py
+++ b/tests/library_test.py
@@ -15,15 +15,11 @@ class LibraryOpsTestCase(unittest.TestCase):
self.addCleanup(self._clean_up)
def _clean_up(self):
- # change the sources back and recompile
- os.system("sed -ri 's?1024;//test-change?BD_LVM_MAX_LV_SIZE;?' src/plugins/lvm.c > /dev/null")
- os.system("make -C src/plugins/ libbd_lvm.la &> /dev/null")
-
# try to get everything back to normal by (re)loading all plugins
BlockDev.reinit(None, True, None)
# recompiles the LVM plugin
- @unittest.skipIf("SKIP_SLOW" in os.environ, "skipping slow tests")
+ @unittest.skip("Do not assume that the source code is available")
def test_reload(self):
"""Verify that reloading plugins works as expected"""
@@ -55,7 +51,7 @@ class LibraryOpsTestCase(unittest.TestCase):
self.assertTrue(BlockDev.reinit(None, True, None))
# recompiles the LVM plugin
- @unittest.skipIf("SKIP_SLOW" in os.environ, "skipping slow tests")
+ @unittest.skip("Do not assume that the source code is available")
def test_force_plugin(self):
"""Verify that forcing plugin to be used works as expected"""
@@ -101,7 +97,7 @@ class LibraryOpsTestCase(unittest.TestCase):
self.assertEqual(BlockDev.lvm_get_max_lv_size(), orig_max_size)
# recompiles the LVM plugin
- @unittest.skipIf("SKIP_SLOW" in os.environ, "skipping slow tests")
+ @unittest.skip("Do not assume that the source code is available")
def test_plugin_priority(self):
"""Verify that preferring plugin to be used works as expected"""
@@ -164,7 +160,7 @@ class LibraryOpsTestCase(unittest.TestCase):
os.system ("rm -f src/plugins/.libs/libbd_lvm2.so")
# recompiles the LVM plugin
- @unittest.skipIf("SKIP_SLOW" in os.environ, "skipping slow tests")
+ @unittest.skip("Do not assume that the source code is available")
def test_plugin_fallback(self):
"""Verify that fallback when loading plugins works as expected"""
diff --git a/tests/overrides_hack.py b/tests/overrides_hack.py
index 0f10ee5..50a4dea 100644
--- a/tests/overrides_hack.py
+++ b/tests/overrides_hack.py
@@ -6,5 +6,6 @@ if not gi.overrides.__path__[0].endswith("src/python/gi/overrides"):
if path.endswith("src/python/gi/overrides"):
local_overrides = path
- gi.overrides.__path__.remove(local_overrides)
- gi.overrides.__path__.insert(0, local_overrides)
+ if local_overrides is not None:
+ gi.overrides.__path__.remove(local_overrides)
+ gi.overrides.__path__.insert(0, local_overrides)
diff --git a/tests/utils.py b/tests/utils.py
index d523fb9..644b39b 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -2,6 +2,7 @@ import os
import tempfile
from contextlib import contextmanager
from itertools import chain
+import unittest
from gi.repository import GLib
@@ -41,34 +42,12 @@ def udev_settle():
@contextmanager
def fake_utils(path="."):
- old_path = os.environ.get("PATH", "")
- if old_path:
- new_path = path + ":" + old_path
- else:
- new_path = path
- os.environ["PATH"] = new_path
-
- try:
- yield
- finally:
- os.environ["PATH"] = old_path
+ msg = "Nix package has executable files built-in,"
+ msg += " so the tests can't fake utilities!"
+ raise unittest.SkipTest(msg)
@contextmanager
def fake_path(path=None, keep_utils=None):
- keep_utils = keep_utils or []
- created_utils = set()
- if path:
- for util in keep_utils:
- util_path = GLib.find_program_in_path(util)
- if util_path:
- os.symlink(util_path, os.path.join(path, util))
- created_utils.add(util)
- old_path = os.environ.get("PATH", "")
- os.environ["PATH"] = path or ""
-
- try:
- yield
- finally:
- os.environ["PATH"] = old_path
- for util in created_utils:
- os.unlink(os.path.join(path, util))
+ msg = "Nix package has executable files built-in,"
+ msg += " so the tests can't fake utilities!"
+ raise unittest.SkipTest(msg)
diff --git a/tests/utils_test.py b/tests/utils_test.py
index 71ff94e..a25b9f3 100644
--- a/tests/utils_test.py
+++ b/tests/utils_test.py
@@ -87,6 +87,7 @@ class UtilsExecLoggingTest(unittest.TestCase):
self.assertEqual(BlockDev.utils_version_cmp("1.1.1", "1.1.1-1"), -1)
self.assertEqual(BlockDev.utils_version_cmp("1.1.2", "1.2"), -1)
+ @unittest.skip("Nix package has executables built-in!")
def test_util_version(self):
"""Verify that checking utility availability works as expected"""

@@ -0,0 +1,43 @@
{ stdenv, fetchFromGitHub, autoreconfHook, pkgconfig
, pcre, gmp, mpfr, gtkdoc, docbook_xsl
, python3Packages
# Only needed for tests:
, glibcLocales
}:
stdenv.mkDerivation rec {
name = "libbytesize-${version}";
version = "0.8";
src = fetchFromGitHub {
owner = "rhinstaller";
repo = "libbytesize";
rev = version;
sha256 = "1khdfbx316aq4sbw4g2dgzwqi1nbw76rbwjmjx6libzsm6ibbsww";
};
postPatch = ''
sed -i -e 's,libbytesize.so.1,'"$out/lib/"'&,' src/python/bytesize.py
'';
XML_CATALOG_FILES = "${docbook_xsl}/xml/xsl/docbook/catalog.xml";
buildInputs = [
python3Packages.python pcre gmp mpfr
# Only needed for tests:
python3Packages.pocketlint glibcLocales
];
nativeBuildInputs = [ autoreconfHook pkgconfig gtkdoc ];
propagatedBuildInputs = [ python3Packages.six ];
doInstallCheck = true;
installCheckTarget = "check";
preInstallCheck = "patchShebangs tests";
meta = {
homepage = "https://github.com/rhinstaller/libbytesize";
description = "Library for working with arbitrary big sizes in bytes";
license = stdenv.lib.licenses.lgpl21Plus;
platforms = stdenv.lib.platforms.linux;
};
}

@@ -0,0 +1,42 @@
{ stdenv, fetchurl, pkgconfig, python, utillinux, glib
, cryptsetup, nss, nspr, gnupg1orig, gpgme, gettext
# Test requirements:
, nssTools
}:
stdenv.mkDerivation rec {
name = "volume_key-${version}";
version = "0.3.9";
src = fetchurl {
url = "https://fedorahosted.org/releases/v/o/volume_key/${name}.tar.xz";
sha256 = "19hj0j8vdd0plp1wvw0yrb4i6j9y3lvp27hchp3cwspmkgz582j5";
};
postPatch = let
pkg = "python${stdenv.lib.optionalString (python.isPy3 or false) "3"}";
in ''
sed -i \
-e 's!/usr/include/python!${python}/include/python!' \
-e 's!-lpython[^ ]*!`pkg-config --libs ${pkg}`!' \
Makefile.*
sed -i -e '/^#include <config\.h>$/d' lib/libvolume_key.h
'';
buildInputs = [
pkgconfig python utillinux glib cryptsetup nss nspr gettext
# GnuPG 2 cannot receive passphrases from GPGME.
gnupg1orig (gpgme.override { useGnupg1 = true; })
# Test requirements:
nssTools
];
doCheck = true;
preCheck = "export HOME=\"$(mktemp -d)\"";
meta = {
homepage = "https://fedorahosted.org/volume_key/";
description = "Library for manipulating storage volume encryption keys";
license = stdenv.lib.licenses.gpl2;
};
}

@@ -1,42 +1,124 @@
{ stdenv, fetchFromGitHub, buildPythonPackage, pykickstart, pyparted, pyblock
, pyudev, six, libselinux, cryptsetup, multipath-tools, lsof, utillinux
{ stdenv, fetchFromGitHub, buildPythonPackage, python, isPy3k
, libselinux, libblockdev, libbytesize, pyparted, pyudev
, coreutils, utillinux, multipath-tools, lsof
# Test dependencies
, pocketlint, pep8, mock, writeScript
}:
let
pyenable = { enablePython = true; };
selinuxWithPython = libselinux.override pyenable;
cryptsetupWithPython = cryptsetup.override pyenable;
in buildPythonPackage rec {
buildPythonPackage rec {
name = "blivet-${version}";
version = "0.67";
version = "2.1.7";
src = fetchFromGitHub {
owner = "dwlehman";
owner = "rhinstaller";
repo = "blivet";
rev = name;
sha256 = "1gk94ghjrxfqnx53hph1j2s7qcv86fjz48is7l099q9c24rjv8ky";
sha256 = "1z2s8dxwk5qpra7a27sqxzfahk89x8hgq2ff5qk6ymb7qzdkc1yj";
};
postPatch = ''
sed -i \
-e 's|"multipath"|"${multipath-tools}/sbin/multipath"|' \
-e '/^def set_friendly_names/a \ return False' \
blivet/devicelibs/mpath.py
sed -i -e '/"wipefs"/ {
s|wipefs|${utillinux}/sbin/wipefs|
s/-f/--force/
}' blivet/formats/__init__.py
sed -i -e 's|"lsof"|"${lsof}/bin/lsof"|' blivet/formats/fs.py
sed -i -r -e 's|"(u?mount)"|"${utillinux}/bin/\1"|' blivet/util.py
'';
outputs = [ "out" "tests" ];
propagatedBuildInputs = [
pykickstart pyparted pyblock pyudev selinuxWithPython cryptsetupWithPython
six
# Only works with Python 3!
disabled = !isPy3k;
patches = [
./no-hawkey.patch ./test-fixes.patch ./ntfs-formattable.patch ./uuids.patch
];
# Tests are in nixos/tests/blivet.nix.
doCheck = false;
postPatch = ''
cat > blivet/kickstart_stubs.py <<EOF
AUTOPART_TYPE_PLAIN = 0
AUTOPART_TYPE_BTRFS = 1
AUTOPART_TYPE_LVM = 2
AUTOPART_TYPE_LVM_THINP = 3
CLEARPART_TYPE_LINUX = 0
CLEARPART_TYPE_ALL = 1
CLEARPART_TYPE_NONE = 2
CLEARPART_TYPE_LIST = 3
EOF
# enable_installer_mode() imports lots of pyanaconda modules
sed -i -e '/^def enable_installer_mode(/,/^[^ ]/ {
/^\( \|$\|def enable_installer_mode(\)/d
}' blivet/__init__.py
# Remove another import of pyanaconda
sed -i -e '/anaconda_flags/d' blivet/osinstall.py
# Remove test involving pykickstart
rm tests/blivet_test.py
# Remove EDD tests, because they don't work within VM tests
rm tests/devicelibs_test/edd_test.py
# Replace imports of pykickstart with the stubs
sed -i -e '
s/^from pykickstart\.constants import/from blivet.kickstart_stubs import/
' blivet/autopart.py blivet/blivet.py tests/clearpart_test.py
# patch in paths for multipath-tools
sed -i -re '
s!(run_program[^"]*")(multipath|kpartx)"!\1${multipath-tools}/bin/\2"!
' blivet/devices/disk.py blivet/devices/dm.py
# make "kpartx" and "multipath" always available
sed -i -re '
s/^(KPARTX|MULTIPATH)_APP *=.*/\1_APP = available_resource("\L\1\E")/
' blivet/tasks/availability.py
sed -i -e 's|"lsof"|"${lsof}/bin/lsof"|' blivet/formats/fs.py
sed -i -e 's|"wipefs"|"${utillinux}/bin/wipefs"|' \
blivet/formats/__init__.py tests/formats_test/methods_test.py
'';
checkInputs = [ pocketlint pep8 mock ];
propagatedBuildInputs = [
(libselinux.override { enablePython = true; inherit python; })
libblockdev libbytesize pyparted pyudev
];
postInstall = ''
mkdir "$tests"
cp -Rd tests "$tests/"
'';
# Real tests are in nixos/tests/blivet.nix, just run pylint and pep8 here.
checkPhase = let
inherit (stdenv.lib)
intersperse concatLists concatMapStringsSep escapeShellArg;
# Create find arguments that ignore a certain path.
mkIgnorePath = path: [ "-path" "./${path}" "-prune" ];
# Match files that have "python" in their shebangs
matchPythonScripts = writeScript "match-python-scripts.sh" ''
#!${stdenv.shell}
head -n1 "$1" | grep -q '^#!.*python'
'';
# A list of arguments passed to find(1) in order to accumulate all the
# files we want to test. This replicates the default behaviour of
# pocketlint, but with a few additional paths ignored.
findArgs = concatLists (intersperse ["-o"] (map mkIgnorePath [
# A set of paths we don't want to be tested.
"translation-canary" "build" "dist" "scripts" "nix_run_setup.py"
])) ++ [
"-o" "-name" "*.py" "-exec" "grep" "-qFx" "# pylint: skip-file" "{}" ";"
"-o" "-type" "f" "("
# These search for the files we actually *want* to match:
"-name" "*.py" "-o" "-exec" matchPythonScripts "{}" ";"
")"
# The actual test runner:
"-exec" python.interpreter "tests/pylint/runpylint.py"
# Pylint doesn't seem to pick up C modules:
"--extension-pkg-whitelist=_ped" "{}" "+"
];
in ''
find ${concatMapStringsSep " " escapeShellArg findArgs}
pep8 --ignore=E501,E402,E731 blivet/ tests/ examples/
'';
meta = with stdenv.lib; {
homepage = "https://fedoraproject.org/wiki/Blivet";

@@ -0,0 +1,64 @@
diff --git a/blivet/tasks/availability.py b/blivet/tasks/availability.py
index 7811be1..9e94cb2 100644
--- a/blivet/tasks/availability.py
+++ b/blivet/tasks/availability.py
@@ -19,9 +19,10 @@
#
# Red Hat Author(s): Anne Mulhern <amulhern@redhat.com>
+import re
+import os
import abc
from distutils.version import LooseVersion
-import hawkey
from six import add_metaclass
@@ -37,6 +38,8 @@ import logging
log = logging.getLogger("blivet")
CACHE_AVAILABILITY = True
+NIX_STOREPATH_RE = re.compile('^/nix/store/[a-z0-9]+-([^/]+?)-([^/]*)/.*$',
+ re.IGNORECASE)
class ExternalResource(object):
@@ -153,22 +156,22 @@ class PackageMethod(Method):
:rtype: LooseVersion
:raises AvailabilityError: on failure to obtain package version
"""
- sack = hawkey.Sack()
-
- try:
- sack.load_system_repo()
- except IOError as e:
- # hawkey has been observed allowing an IOError to propagate to
- # caller with message "Failed calculating RPMDB checksum."
- # See: https://bugzilla.redhat.com/show_bug.cgi?id=1223914
- raise AvailabilityError("Could not determine package version for %s: %s" % (self.package.package_name, e))
-
- query = hawkey.Query(sack).filter(name=self.package.package_name, latest=True)
- packages = query.run()
- if len(packages) != 1:
- raise AvailabilityError("Could not determine package version for %s: unable to obtain package information from repo" % self.package.package_name)
-
- return LooseVersion(packages[0].version)
+ bin_searchpath = os.getenv('PATH')
+ if bin_searchpath is None:
+ raise AvailabilityError("Could not determine package version for %s" % self.package.package_name)
+
+ for store_path in set(filter(
+ lambda p: p.startswith('/nix/store/'),
+ [os.path.realpath(p) for p in bin_searchpath.split(':')]
+ )):
+ match = NIX_STOREPATH_RE.match(store_path)
+ if match is None:
+ continue
+ if match.group(1) != self.package.package_name:
+ continue
+ return LooseVersion(match.group(2))
+
+ raise AvailabilityError("Could not determine package version for %s" % self.package.package_name)
def availability_errors(self, resource):
if self._availability_errors is not None and CACHE_AVAILABILITY:

@@ -0,0 +1,16 @@
Submitted upstream at:
https://github.com/rhinstaller/blivet/pull/536
diff --git a/blivet/formats/fs.py b/blivet/formats/fs.py
index 67517d6..9693840 100644
--- a/blivet/formats/fs.py
+++ b/blivet/formats/fs.py
@@ -1026,6 +1026,7 @@ class NTFS(FS):
_type = "ntfs"
_labelfs = fslabeling.NTFSLabeling()
_resizable = True
+ _formattable = True
_min_size = Size("1 MiB")
_max_size = Size("16 TiB")
_packages = ["ntfsprogs"]

@@ -0,0 +1,82 @@
From 5ce95c061a340d14addb88914b08e3be64b62248 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Mon, 2 Jan 2017 15:20:07 +0100
Subject: [PATCH 1/3] Fix task availability test
---
tests/devices_test/dependencies_test.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/tests/devices_test/dependencies_test.py b/tests/devices_test/dependencies_test.py
index 9f2d82e..bf264fe 100644
--- a/tests/devices_test/dependencies_test.py
+++ b/tests/devices_test/dependencies_test.py
@@ -63,6 +63,7 @@ def test_availability_mdraidplugin(self):
# dev is not among its unavailable dependencies
availability.BLOCKDEV_MDRAID_PLUGIN._method = availability.AvailableMethod
+ availability.MKFS_HFSPLUS_APP._method = availability.AvailableMethod # macefi
self.assertNotIn(availability.BLOCKDEV_MDRAID_PLUGIN, self.luks.unavailable_dependencies)
self.assertIsNotNone(ActionCreateDevice(self.luks))
self.assertIsNotNone(ActionDestroyDevice(self.luks))
From 1101460899ba08f65bedff22e7d08a6df1f1015b Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Mon, 2 Jan 2017 15:20:43 +0100
Subject: [PATCH 2/3] Fix resize test in fstesting
Resizing raises DeviceFormatError, not FSError (changed in
298984306e6e56f43444f73eed075db54dd0a057).
---
tests/formats_test/fstesting.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tests/formats_test/fstesting.py b/tests/formats_test/fstesting.py
index 0f7ddc4..7b7bbc6 100644
--- a/tests/formats_test/fstesting.py
+++ b/tests/formats_test/fstesting.py
@@ -6,7 +6,7 @@
import tempfile
from tests import loopbackedtestcase
-from blivet.errors import FSError, FSResizeError
+from blivet.errors import FSError, FSResizeError, DeviceFormatError
from blivet.size import Size, ROUND_DOWN
from blivet.formats import fs
@@ -199,9 +199,9 @@ def test_resize(self):
if not can_resize(an_fs):
self.assertFalse(an_fs.resizable)
# Not resizable, so can not do resizing actions.
- with self.assertRaises(FSError):
+ with self.assertRaises(DeviceFormatError):
an_fs.target_size = Size("64 MiB")
- with self.assertRaises(FSError):
+ with self.assertRaises(DeviceFormatError):
an_fs.do_resize()
else:
self.assertTrue(an_fs.resizable)
From df69e6ce760df62b56a2d46d1891aff638a21440 Mon Sep 17 00:00:00 2001
From: Vojtech Trefny <vtrefny@redhat.com>
Date: Mon, 2 Jan 2017 15:23:15 +0100
Subject: [PATCH 3/3] Do not try to search for 'tmpfs' devices in udev database
There is no "real" tmpfs device, so we can't find it using udev.
---
blivet/mounts.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/blivet/mounts.py b/blivet/mounts.py
index c787fde..26df6f0 100644
--- a/blivet/mounts.py
+++ b/blivet/mounts.py
@@ -112,7 +112,7 @@ def get_mountpoints(self, devspec, subvolspec=None):
subvolspec = str(subvolspec)
# devspec == None means "get 'nodev' mount points"
- if devspec is not None:
+ if devspec not in (None, "tmpfs"):
# use the canonical device path (if available)
canon_devspec = resolve_devspec(devspec, sysname=True)
if canon_devspec is not None:

@@ -0,0 +1,972 @@
Submitted upstream at:
https://github.com/rhinstaller/blivet/pull/537
diff --git a/blivet/errors.py b/blivet/errors.py
index a39a5fbe..8d4c6463 100644
--- a/blivet/errors.py
+++ b/blivet/errors.py
@@ -109,6 +109,10 @@ class FSWriteLabelError(FSError):
pass
+class FSWriteUUIDError(FSError):
+ pass
+
+
class FSReadLabelError(FSError):
pass
diff --git a/blivet/formats/fs.py b/blivet/formats/fs.py
index 30e65536..631706ed 100644
--- a/blivet/formats/fs.py
+++ b/blivet/formats/fs.py
@@ -38,9 +38,11 @@ from ..tasks import fsreadlabel
from ..tasks import fsresize
from ..tasks import fssize
from ..tasks import fssync
+from ..tasks import fsuuid
from ..tasks import fswritelabel
+from ..tasks import fswriteuuid
from ..errors import FormatCreateError, FSError, FSReadLabelError
-from ..errors import FSWriteLabelError
+from ..errors import FSWriteLabelError, FSWriteUUIDError
from . import DeviceFormat, register_device_format
from .. import util
from .. import platform
@@ -65,12 +67,14 @@ class FS(DeviceFormat):
_name = None
_modules = [] # kernel modules required for support
_labelfs = None # labeling functionality
+ _uuidfs = None # functionality for UUIDs
_fsck_class = fsck.UnimplementedFSCK
_mkfs_class = fsmkfs.UnimplementedFSMkfs
_mount_class = fsmount.FSMount
_readlabel_class = fsreadlabel.UnimplementedFSReadLabel
_sync_class = fssync.UnimplementedFSSync
_writelabel_class = fswritelabel.UnimplementedFSWriteLabel
+ _writeuuid_class = fswriteuuid.UnimplementedFSWriteUUID
# This constant is aquired by testing some filesystems
# and it's giving us percentage of space left after the format.
# This number is more guess than precise number because this
@@ -111,6 +115,7 @@ class FS(DeviceFormat):
self._readlabel = self._readlabel_class(self)
self._sync = self._sync_class(self)
self._writelabel = self._writelabel_class(self)
+ self._writeuuid = self._writeuuid_class(self)
self._current_info = None # info obtained by _info task
@@ -249,6 +254,31 @@ class FS(DeviceFormat):
label = property(lambda s: s._get_label(), lambda s, l: s._set_label(l),
doc="this filesystem's label")
+ def can_assign_uuid(self):
+ """Returns True if this filesystem supports setting an UUID during
+ creation, otherwise False.
+
+ :rtype: bool
+ """
+ return self._mkfs.can_set_uuid and self._mkfs.available
+
+ def can_reassign_uuid(self):
+ """Returns True if it's possible to set the UUID of this filesystem
+ after it has been created, otherwise False.
+
+ :rtype: bool
+ """
+ return self._writeuuid.available
+
+ def uuid_format_ok(self, uuid):
+ """Return True if the UUID has an acceptable format for this
+ filesystem.
+
+ :param uuid: An UUID
+ :type uuid: str
+ """
+ return self._uuidfs is not None and self._uuidfs.uuid_format_ok(uuid)
+
def update_size_info(self):
""" Update this filesystem's current and minimum size (for resize). """
@@ -358,9 +388,16 @@ class FS(DeviceFormat):
super(FS, self)._create()
try:
- self._mkfs.do_task(options=kwargs.get("options"), label=not self.relabels())
+ self._mkfs.do_task(options=kwargs.get("options"),
+ label=not self.relabels(),
+ set_uuid=self.can_assign_uuid())
except FSWriteLabelError as e:
log.warning("Choosing not to apply label (%s) during creation of filesystem %s. Label format is unacceptable for this filesystem.", self.label, self.type)
+ except FSWriteUUIDError as e:
+ log.warning("Choosing not to apply UUID (%s) during"
+ " creation of filesystem %s. UUID format"
+ " is unacceptable for this filesystem.",
+ self.uuid, self.type)
except FSError as e:
raise FormatCreateError(e, self.device)
@@ -371,6 +408,9 @@ class FS(DeviceFormat):
self.write_label()
except FSError as e:
log.warning("Failed to write label (%s) for filesystem %s: %s", self.label, self.type, e)
+ if self.uuid is not None and not self.can_assign_uuid() and \
+ self.can_reassign_uuid():
+ self.write_uuid()
def _post_resize(self):
self.do_check()
@@ -607,6 +647,35 @@ class FS(DeviceFormat):
self._writelabel.do_task()
+ def write_uuid(self):
+ """Set an UUID for this filesystem.
+
+ :raises: FSError
+
+ Raises an FSError if the UUID can not be set.
+ """
+ err = None
+
+ if self.uuid is None:
+ err = "makes no sense to write an UUID when not requested"
+
+ if not self.exists:
+ err = "filesystem has not been created"
+
+ if not self._writeuuid.available:
+ err = "no application to set UUID for filesystem %s" % self.type
+
+ if not self.uuid_format_ok(self.uuid):
+ err = "bad UUID format for application %s" % self._writeuuid
+
+ if not os.path.exists(self.device):
+ err = "device does not exist"
+
+ if err is not None:
+ raise FSError(err)
+
+ self._writeuuid.do_task()
+
@property
def utils_available(self):
# we aren't checking for fsck because we shouldn't need it
@@ -720,6 +789,7 @@ class Ext2FS(FS):
_type = "ext2"
_modules = ["ext2"]
_labelfs = fslabeling.Ext2FSLabeling()
+ _uuidfs = fsuuid.Ext2FSUUID()
_packages = ["e2fsprogs"]
_formattable = True
_supported = True
@@ -736,6 +806,7 @@ class Ext2FS(FS):
_resize_class = fsresize.Ext2FSResize
_size_info_class = fssize.Ext2FSSize
_writelabel_class = fswritelabel.Ext2FSWriteLabel
+ _writeuuid_class = fswriteuuid.Ext2FSWriteUUID
parted_system = fileSystemType["ext2"]
_metadata_size_factor = 0.93 # ext2 metadata may take 7% of space
@@ -779,6 +850,7 @@ class FATFS(FS):
_type = "vfat"
_modules = ["vfat"]
_labelfs = fslabeling.FATFSLabeling()
+ _uuidfs = fsuuid.FATFSUUID()
_supported = True
_formattable = True
_max_size = Size("1 TiB")
@@ -788,6 +860,7 @@ class FATFS(FS):
_mount_class = fsmount.FATFSMount
_readlabel_class = fsreadlabel.DosFSReadLabel
_writelabel_class = fswritelabel.DosFSWriteLabel
+ _writeuuid_class = fswriteuuid.DosFSWriteUUID
_metadata_size_factor = 0.99 # fat metadata may take 1% of space
# FIXME this should be fat32 in some cases
parted_system = fileSystemType["fat16"]
@@ -887,6 +960,7 @@ class JFS(FS):
_type = "jfs"
_modules = ["jfs"]
_labelfs = fslabeling.JFSLabeling()
+ _uuidfs = fsuuid.JFSUUID()
_max_size = Size("8 TiB")
_formattable = True
_linux_native = True
@@ -896,6 +970,7 @@ class JFS(FS):
_mkfs_class = fsmkfs.JFSMkfs
_size_info_class = fssize.JFSSize
_writelabel_class = fswritelabel.JFSWriteLabel
+ _writeuuid_class = fswriteuuid.JFSWriteUUID
_metadata_size_factor = 0.99 # jfs metadata may take 1% of space
parted_system = fileSystemType["jfs"]
@@ -912,6 +987,7 @@ class ReiserFS(FS):
""" reiserfs filesystem """
_type = "reiserfs"
_labelfs = fslabeling.ReiserFSLabeling()
+ _uuidfs = fsuuid.ReiserFSUUID()
_modules = ["reiserfs"]
_max_size = Size("16 TiB")
_formattable = True
@@ -923,6 +999,7 @@ class ReiserFS(FS):
_mkfs_class = fsmkfs.ReiserFSMkfs
_size_info_class = fssize.ReiserFSSize
_writelabel_class = fswritelabel.ReiserFSWriteLabel
+ _writeuuid_class = fswriteuuid.ReiserFSWriteUUID
_metadata_size_factor = 0.98 # reiserfs metadata may take 2% of space
parted_system = fileSystemType["reiserfs"]
@@ -940,6 +1017,7 @@ class XFS(FS):
_type = "xfs"
_modules = ["xfs"]
_labelfs = fslabeling.XFSLabeling()
+ _uuidfs = fsuuid.XFSUUID()
_max_size = Size("16 EiB")
_formattable = True
_linux_native = True
@@ -951,6 +1029,7 @@ class XFS(FS):
_size_info_class = fssize.XFSSize
_sync_class = fssync.XFSSync
_writelabel_class = fswritelabel.XFSWriteLabel
+ _writeuuid_class = fswriteuuid.XFSWriteUUID
_metadata_size_factor = 0.97 # xfs metadata may take 3% of space
parted_system = fileSystemType["xfs"]
@@ -990,6 +1069,7 @@ class HFSPlus(FS):
_udev_types = ["hfsplus"]
_packages = ["hfsplus-tools"]
_labelfs = fslabeling.HFSPlusLabeling()
+ _uuidfs = fsuuid.HFSPlusUUID()
_formattable = True
_min_size = Size("1 MiB")
_max_size = Size("2 TiB")
@@ -1026,6 +1106,7 @@ class NTFS(FS):
""" ntfs filesystem. """
_type = "ntfs"
_labelfs = fslabeling.NTFSLabeling()
+ _uuidfs = fsuuid.NTFSUUID()
_resizable = True
_formattable = True
_min_size = Size("1 MiB")
@@ -1040,6 +1121,7 @@ class NTFS(FS):
_resize_class = fsresize.NTFSResize
_size_info_class = fssize.NTFSSize
_writelabel_class = fswritelabel.NTFSWriteLabel
+ _writeuuid_class = fswriteuuid.NTFSWriteUUID
parted_system = fileSystemType["ntfs"]
register_device_format(NTFS)
diff --git a/blivet/formats/swap.py b/blivet/formats/swap.py
index 0818f586..66a311a9 100644
--- a/blivet/formats/swap.py
+++ b/blivet/formats/swap.py
@@ -21,8 +21,10 @@
#
from parted import PARTITION_SWAP, fileSystemType
+from ..errors import FSWriteUUIDError
from ..storage_log import log_method_call
from ..tasks import availability
+from ..tasks import fsuuid
from . import DeviceFormat, register_device_format
from ..size import Size
@@ -110,6 +112,10 @@ class SwapSpace(DeviceFormat):
label = property(lambda s: s._get_label(), lambda s, l: s._set_label(l),
doc="the label for this swap space")
+ def uuid_format_ok(self, uuid):
+ """Check whether the given UUID is correct according to RFC 4122."""
+ return fsuuid.FSUUID._check_rfc4122_uuid(uuid)
+
def _set_priority(self, priority):
# pylint: disable=attribute-defined-outside-init
if priority is None:
@@ -167,6 +173,12 @@ class SwapSpace(DeviceFormat):
def _create(self, **kwargs):
log_method_call(self, device=self.device,
type=self.type, status=self.status)
- blockdev.swap.mkswap(self.device, label=self.label)
+ if self.uuid is None:
+ blockdev.swap.mkswap(self.device, label=self.label)
+ else:
+ if not self.uuid_format_ok(self.uuid):
+ raise FSWriteUUIDError("bad UUID format for swap filesystem")
+ blockdev.swap.mkswap(self.device, label=self.label,
+ extra={"-U": self.uuid})
register_device_format(SwapSpace)
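The swap change above condenses to a small standalone sketch: mkswap only receives a `-U` argument when a UUID was requested, and a malformed UUID aborts before any command runs. The helper names below (`mkswap_args`, `check_rfc4122_uuid`) are illustrative stand-ins, not blivet's API.

```python
import re

# Five lowercase-hex groups of lengths 8-4-4-4-12, as in RFC 4122.
_RFC4122 = re.compile(r"^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$")

def check_rfc4122_uuid(uuid):
    return bool(_RFC4122.match(uuid))

def mkswap_args(device, label=None, uuid=None):
    """Build the mkswap command line, validating the UUID first."""
    args = ["mkswap"]
    if label is not None:
        args += ["-L", label]
    if uuid is not None:
        if not check_rfc4122_uuid(uuid):
            # mirrors the FSWriteUUIDError raised in SwapSpace._create()
            raise ValueError("bad UUID format for swap filesystem")
        args += ["-U", uuid]
    return args + [device]
```

The validation happens before any external program is invoked, which is why the diff raises inside `_create()` rather than letting mkswap fail.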
diff --git a/blivet/tasks/availability.py b/blivet/tasks/availability.py
index 7811be17..4a134d79 100644
--- a/blivet/tasks/availability.py
+++ b/blivet/tasks/availability.py
@@ -298,12 +298,14 @@ MKFS_JFS_APP = application("mkfs.jfs")
MKFS_XFS_APP = application("mkfs.xfs")
MKNTFS_APP = application("mkntfs")
MKREISERFS_APP = application("mkreiserfs")
+MLABEL_APP = application("mlabel")
MULTIPATH_APP = application("multipath")
NTFSINFO_APP = application("ntfsinfo")
NTFSLABEL_APP = application("ntfslabel")
NTFSRESIZE_APP = application("ntfsresize")
REISERFSTUNE_APP = application("reiserfstune")
RESIZE2FS_APP = application_by_package("resize2fs", E2FSPROGS_PACKAGE)
+TUNE2FS_APP = application_by_package("tune2fs", E2FSPROGS_PACKAGE)
XFSADMIN_APP = application("xfs_admin")
XFSDB_APP = application("xfs_db")
XFSFREEZE_APP = application("xfs_freeze")
diff --git a/blivet/tasks/fsmkfs.py b/blivet/tasks/fsmkfs.py
index d0da5b21..ad166aa0 100644
--- a/blivet/tasks/fsmkfs.py
+++ b/blivet/tasks/fsmkfs.py
@@ -24,7 +24,7 @@ import shlex
from six import add_metaclass
-from ..errors import FSError, FSWriteLabelError
+from ..errors import FSError, FSWriteLabelError, FSWriteUUIDError
from .. import util
from . import availability
@@ -36,6 +36,7 @@ from . import task
class FSMkfsTask(fstask.FSTask):
can_label = abc.abstractproperty(doc="whether this task labels")
+ can_set_uuid = abc.abstractproperty(doc="whether this task can set UUID")
@add_metaclass(abc.ABCMeta)
@@ -49,6 +50,16 @@ class FSMkfs(task.BasicApplication, FSMkfsTask):
args = abc.abstractproperty(doc="options for creating filesystem")
+ @abc.abstractmethod
+ def get_uuid_args(self, uuid):
+ """Return a list of arguments for setting a filesystem UUID.
+
+ :param uuid: the UUID to set
+ :type uuid: str
+ :rtype: list of str
+ """
+ raise NotImplementedError
+
# IMPLEMENTATION methods
@property
@@ -61,6 +72,15 @@ class FSMkfs(task.BasicApplication, FSMkfsTask):
return self.label_option is not None
@property
+ def can_set_uuid(self):
+ """Whether this task can set the UUID of a filesystem.
+
+ :returns: True if UUID can be set
+ :rtype: bool
+ """
+ return self.get_uuid_args is not None
+
+ @property
def _label_options(self):
""" Any labeling options that a particular filesystem may use.
@@ -80,13 +100,33 @@ class FSMkfs(task.BasicApplication, FSMkfsTask):
else:
raise FSWriteLabelError("Choosing not to apply label (%s) during creation of filesystem %s. Label format is unacceptable for this filesystem." % (self.fs.label, self.fs.type))
- def _format_options(self, options=None, label=False):
+ @property
+ def _uuid_options(self):
+ """Any UUID options that a particular filesystem may use.
+
+ :returns: UUID options
+ :rtype: list of str
+ :raises: FSWriteUUIDError
+ """
+ if self.get_uuid_args is None or self.fs.uuid is None:
+ return []
+
+ if self.fs.uuid_format_ok(self.fs.uuid):
+ return self.get_uuid_args(self.fs.uuid)
+ else:
+ raise FSWriteUUIDError("Choosing not to apply UUID (%s) during"
+ " creation of filesystem %s. UUID format"
+ " is unacceptable for this filesystem."
+ % (self.fs.uuid, self.fs.type))
+
+ def _format_options(self, options=None, label=False, set_uuid=False):
"""Get a list of format options to be used when creating the
filesystem.
:param options: any special options
:type options: list of str or None
:param bool label: if True, label if possible, default is False
+ :param bool set_uuid: whether to set a UUID if possible, default is False
"""
options = options or []
@@ -94,25 +134,33 @@ class FSMkfs(task.BasicApplication, FSMkfsTask):
raise FSError("options parameter must be a list.")
label_options = self._label_options if label else []
+ uuid_options = self._uuid_options if set_uuid else []
create_options = shlex.split(self.fs.create_options or "")
- return options + self.args + label_options + create_options + [self.fs.device]
+ return (options + self.args + label_options + uuid_options +
+ create_options + [self.fs.device])
- def _mkfs_command(self, options, label):
+ def _mkfs_command(self, options, label, set_uuid):
"""Return the command to make the filesystem.
:param options: any special options
:type options: list of str or None
+ :param label: whether to set a label
+ :type label: bool
+ :param set_uuid: whether to set a UUID
+ :type set_uuid: bool
:returns: the mkfs command
:rtype: list of str
"""
- return [str(self.ext)] + self._format_options(options, label)
+ return [str(self.ext)] + self._format_options(options, label, set_uuid)
- def do_task(self, options=None, label=False):
+ def do_task(self, options=None, label=False, set_uuid=False):
"""Create the format on the device and label if possible and desired.
:param options: any special options, may be None
:type options: list of str or NoneType
:param bool label: whether to label while creating, default is False
+ :param bool set_uuid: whether to set a UUID while creating, default
+ is False
"""
# pylint: disable=arguments-differ
error_msgs = self.availability_errors
@@ -120,8 +168,9 @@ class FSMkfs(task.BasicApplication, FSMkfsTask):
raise FSError("\n".join(error_msgs))
options = options or []
+ cmd = self._mkfs_command(options, label, set_uuid)
try:
- ret = util.run_program(self._mkfs_command(options, label))
+ ret = util.run_program(cmd)
except OSError as e:
raise FSError(e)
@@ -133,6 +182,9 @@ class BTRFSMkfs(FSMkfs):
ext = availability.MKFS_BTRFS_APP
label_option = None
+ def get_uuid_args(self, uuid):
+ return ["-U", uuid]
+
@property
def args(self):
return []
@@ -144,6 +196,9 @@ class Ext2FSMkfs(FSMkfs):
_opts = []
+ def get_uuid_args(self, uuid):
+ return ["-U", uuid]
+
@property
def args(self):
return self._opts + (["-T", self.fs.fsprofile] if self.fs.fsprofile else [])
@@ -161,6 +216,9 @@ class FATFSMkfs(FSMkfs):
ext = availability.MKDOSFS_APP
label_option = "-n"
+ def get_uuid_args(self, uuid):
+ return ["-i", uuid.replace('-', '')]
+
@property
def args(self):
return []
@@ -169,6 +227,7 @@ class FATFSMkfs(FSMkfs):
class GFS2Mkfs(FSMkfs):
ext = availability.MKFS_GFS2_APP
label_option = None
+ get_uuid_args = None
@property
def args(self):
@@ -178,6 +237,7 @@ class GFS2Mkfs(FSMkfs):
class HFSMkfs(FSMkfs):
ext = availability.HFORMAT_APP
label_option = "-l"
+ get_uuid_args = None
@property
def args(self):
@@ -187,6 +247,7 @@ class HFSMkfs(FSMkfs):
class HFSPlusMkfs(FSMkfs):
ext = availability.MKFS_HFSPLUS_APP
label_option = "-v"
+ get_uuid_args = None
@property
def args(self):
@@ -196,6 +257,7 @@ class HFSPlusMkfs(FSMkfs):
class JFSMkfs(FSMkfs):
ext = availability.MKFS_JFS_APP
label_option = "-L"
+ get_uuid_args = None
@property
def args(self):
@@ -205,6 +267,7 @@ class JFSMkfs(FSMkfs):
class NTFSMkfs(FSMkfs):
ext = availability.MKNTFS_APP
label_option = "-L"
+ get_uuid_args = None
@property
def args(self):
@@ -215,6 +278,9 @@ class ReiserFSMkfs(FSMkfs):
ext = availability.MKREISERFS_APP
label_option = "-l"
+ def get_uuid_args(self, uuid):
+ return ["-u", uuid]
+
@property
def args(self):
return ["-f", "-f"]
@@ -224,6 +290,9 @@ class XFSMkfs(FSMkfs):
ext = availability.MKFS_XFS_APP
label_option = "-L"
+ def get_uuid_args(self, uuid):
+ return ["-m", "uuid=" + uuid]
+
@property
def args(self):
return ["-f"]
@@ -234,3 +303,7 @@ class UnimplementedFSMkfs(task.UnimplementedTask, FSMkfsTask):
@property
def can_label(self):
return False
+
+ @property
+ def can_set_uuid(self):
+ return False
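The `_format_options()` change above fixes a single composition point for every mkfs task: user options first, then the filesystem's own args, label options, UUID options, `create_options`, and finally the device. A minimal sketch of that ordering (the names are stand-ins for the blivet classes, not their real API):

```python
def format_options(executable, device, fs_args, options=None,
                   label_options=None, uuid_options=None,
                   create_options=None):
    """Assemble a mkfs command line in the same order as FSMkfs."""
    if options is not None and not isinstance(options, list):
        # _format_options raises FSError here; ValueError stands in
        raise ValueError("options parameter must be a list.")
    return ([executable] + (options or []) + fs_args +
            (label_options or []) + (uuid_options or []) +
            (create_options or []) + [device])
```

For XFS, for example, `uuid_options` would be the `["-m", "uuid=..."]` pair produced by `XFSMkfs.get_uuid_args()`, slotted in just before any user `create_options`.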
diff --git a/blivet/tasks/fsuuid.py b/blivet/tasks/fsuuid.py
new file mode 100644
index 00000000..5beb43ac
--- /dev/null
+++ b/blivet/tasks/fsuuid.py
@@ -0,0 +1,85 @@
+import abc
+
+from six import add_metaclass
+
+
+@add_metaclass(abc.ABCMeta)
+class FSUUID(object):
+
+ """An abstract class that represents filesystem actions for setting the
+ UUID.
+ """
+
+ @abc.abstractmethod
+ def uuid_format_ok(self, uuid):
+ """Returns True if the given UUID is correctly formatted for
+ this filesystem, otherwise False.
+
+ :param str uuid: the UUID for this filesystem
+ :rtype: bool
+ """
+ raise NotImplementedError
+
+ # IMPLEMENTATION methods
+
+ @classmethod
+ def _check_rfc4122_uuid(cls, uuid):
+ """Check whether the given UUID is correct according to RFC 4122 and
+ return True if it's correct or False otherwise.
+
+ :param str uuid: the UUID to check
+ :rtype: bool
+ """
+ chunks = uuid.split('-')
+ if len(chunks) != 5:
+ return False
+ chunklens = [len(chunk) for chunk in chunks
+ if all(char in "0123456789abcdef" for char in chunk)]
+ return chunklens == [8, 4, 4, 4, 12]
+
+
+class Ext2FSUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ return cls._check_rfc4122_uuid(uuid)
+
+
+class FATFSUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ if len(uuid) != 9 or uuid[4] != '-':
+ return False
+ return all(char in "0123456789ABCDEF"
+ for char in (uuid[:4] + uuid[5:]))
+
+
+class JFSUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ return cls._check_rfc4122_uuid(uuid)
+
+
+class ReiserFSUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ return cls._check_rfc4122_uuid(uuid)
+
+
+class XFSUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ return cls._check_rfc4122_uuid(uuid)
+
+
+class HFSPlusUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ return cls._check_rfc4122_uuid(uuid)
+
+
+class NTFSUUID(FSUUID):
+ @classmethod
+ def uuid_format_ok(cls, uuid):
+ if len(uuid) != 16:
+ return False
+ return all(char in "0123456789ABCDEF" for char in uuid)
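For experimenting outside blivet, the three distinct format checks introduced by the new fsuuid module condense to the following; the behavior is copied from the diff, but the function names are not part of blivet.

```python
def check_rfc4122_uuid(uuid):
    # Five lowercase-hex chunks of lengths 8-4-4-4-12, joined by dashes.
    # Chunks containing non-hex characters are filtered out, so the
    # length comparison also rejects them.
    chunks = uuid.split('-')
    if len(chunks) != 5:
        return False
    chunklens = [len(chunk) for chunk in chunks
                 if all(char in "0123456789abcdef" for char in chunk)]
    return chunklens == [8, 4, 4, 4, 12]

def check_fat_volume_id(uuid):
    # FAT volume ID as mkdosfs reports it: "XXXX-XXXX", uppercase hex.
    if len(uuid) != 9 or uuid[4] != '-':
        return False
    return all(char in "0123456789ABCDEF" for char in uuid[:4] + uuid[5:])

def check_ntfs_serial(uuid):
    # NTFS volume serial: 16 uppercase hex digits, no separator.
    return len(uuid) == 16 and all(char in "0123456789ABCDEF" for char in uuid)
```

Note the subtlety in the RFC 4122 check: a chunk like `xyz1` is dropped by the hex filter, which shortens `chunklens` and makes the equality test fail, so one comparison covers both length and character validity.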
diff --git a/blivet/tasks/fswriteuuid.py b/blivet/tasks/fswriteuuid.py
new file mode 100644
index 00000000..5bc54ba1
--- /dev/null
+++ b/blivet/tasks/fswriteuuid.py
@@ -0,0 +1,93 @@
+import abc
+
+from six import add_metaclass
+
+from .. import util
+from ..errors import FSWriteUUIDError
+
+from . import availability
+from . import fstask
+from . import task
+
+
+@add_metaclass(abc.ABCMeta)
+class FSWriteUUID(task.BasicApplication, fstask.FSTask):
+
+     """ An abstract class that represents writing a UUID for a filesystem. """
+
+ description = "write filesystem UUID"
+
+ args = abc.abstractproperty(doc="arguments for writing a UUID")
+
+ # IMPLEMENTATION methods
+
+ @property
+ def _set_command(self):
+ """Get the command to set UUID of the filesystem.
+
+ :return: the command
+ :rtype: list of str
+ """
+ return [str(self.ext)] + self.args
+
+ def do_task(self):
+ error_msgs = self.availability_errors
+ if error_msgs:
+ raise FSWriteUUIDError("\n".join(error_msgs))
+
+ rc = util.run_program(self._set_command)
+ if rc:
+ msg = "setting UUID via {} failed".format(self._set_command)
+ raise FSWriteUUIDError(msg)
+
+
+class DosFSWriteUUID(FSWriteUUID):
+ ext = availability.MLABEL_APP
+
+ @property
+ def args(self):
+ return ["-N", self.fs.uuid, "-i", self.fs.device, "::"]
+
+
+class Ext2FSWriteUUID(FSWriteUUID):
+ ext = availability.TUNE2FS_APP
+
+ @property
+ def args(self):
+ return ["-U", self.fs.uuid, self.fs.device]
+
+
+class JFSWriteUUID(FSWriteUUID):
+ ext = availability.JFSTUNE_APP
+
+ @property
+ def args(self):
+ return ["-U", self.fs.uuid, self.fs.device]
+
+
+class NTFSWriteUUID(FSWriteUUID):
+ ext = availability.NTFSLABEL_APP
+
+ @property
+ def args(self):
+ return ["--new-serial=" + self.fs.uuid, self.fs.device]
+
+
+class ReiserFSWriteUUID(FSWriteUUID):
+ ext = availability.REISERFSTUNE_APP
+
+ @property
+ def args(self):
+ return ["-u", self.fs.uuid, self.fs.device]
+
+
+class XFSWriteUUID(FSWriteUUID):
+ ext = availability.XFSADMIN_APP
+
+ @property
+ def args(self):
+ return ["-U", self.fs.uuid, self.fs.device]
+
+
+class UnimplementedFSWriteUUID(fstask.UnimplementedFSTask):
+ pass
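The per-filesystem tasks above each reduce to one external command. A hypothetical lookup table of those "set UUID after mkfs" invocations, with argument shapes mirroring each task's `args` property: the `tune2fs`, `ntfslabel`, `reiserfstune`, `xfs_admin`, and `mlabel` binaries match the availability constants in the diff, while `jfs_tune` is an assumption (only the `JFSTUNE_APP` constant is visible here).

```python
def write_uuid_command(fstype, device, uuid):
    """Return the command a FSWriteUUID-style task would run."""
    commands = {
        "ext2": ["tune2fs", "-U", uuid, device],
        "jfs": ["jfs_tune", "-U", uuid, device],   # binary name assumed
        "ntfs": ["ntfslabel", "--new-serial=" + uuid, device],
        "reiserfs": ["reiserfstune", "-u", uuid, device],
        "xfs": ["xfs_admin", "-U", uuid, device],
        # mtools needs the trailing "::" drive specifier
        "vfat": ["mlabel", "-N", uuid, "-i", device, "::"],
    }
    if fstype not in commands:
        raise ValueError("no UUID-writing tool known for %s" % fstype)
    return commands[fstype]
```

This also shows why GFS2, HFS, and HFS+ set `get_uuid_args = None` in fsmkfs.py and have no write task: there is simply no post-mkfs tool to call for them.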
diff --git a/tests/formats_test/fsuuid.py b/tests/formats_test/fsuuid.py
new file mode 100644
index 00000000..0b550cbc
--- /dev/null
+++ b/tests/formats_test/fsuuid.py
@@ -0,0 +1,111 @@
+import sys
+import abc
+from six import add_metaclass
+from unittest import skipIf
+
+from tests import loopbackedtestcase
+from blivet.devicetree import DeviceTree
+from blivet.errors import FSError, FSWriteUUIDError
+from blivet.size import Size
+
+
+@add_metaclass(abc.ABCMeta)
+class SetUUID(loopbackedtestcase.LoopBackedTestCase):
+
+ """Base class for UUID tests without any test methods."""
+
+ _fs_class = abc.abstractproperty(
+ doc="The class of the filesystem being tested on.")
+
+ _valid_uuid = abc.abstractproperty(
+ doc="A valid UUID for this filesystem.")
+
+ _invalid_uuid = abc.abstractproperty(
+ doc="An invalid UUID for this filesystem.")
+
+ def __init__(self, methodName='run_test'):
+ super(SetUUID, self).__init__(methodName=methodName,
+ device_spec=[Size("100 MiB")])
+
+ def setUp(self):
+ an_fs = self._fs_class()
+ if not an_fs.formattable:
+ self.skipTest("can not create filesystem %s" % an_fs.name)
+ super(SetUUID, self).setUp()
+
+
+class SetUUIDWithMkFs(SetUUID):
+
+     """Tests various aspects of setting a UUID for a filesystem where the
+ native mkfs tool can set the UUID.
+ """
+
+ @skipIf(sys.version_info < (3, 4), "assertLogs is not supported")
+ def test_set_invalid_uuid(self):
+ """Create the filesystem with an invalid UUID."""
+ an_fs = self._fs_class(device=self.loop_devices[0],
+ uuid=self._invalid_uuid)
+ if self._fs_class._type == "swap":
+ with self.assertRaisesRegex(FSWriteUUIDError, "bad UUID format"):
+ an_fs.create()
+ else:
+ with self.assertLogs('blivet', 'WARNING') as logs:
+ an_fs.create()
+ self.assertTrue(len(logs.output) >= 1)
+ self.assertRegex(logs.output[0], "UUID format.*unacceptable")
+
+ def test_set_uuid(self):
+ """Create the filesystem with a valid UUID."""
+ an_fs = self._fs_class(device=self.loop_devices[0],
+ uuid=self._valid_uuid)
+ self.assertIsNone(an_fs.create())
+
+ # Use DeviceTree for detecting the loop device and check whether the
+ # UUID is correct.
+ dt = DeviceTree()
+ dt.reset()
+ dt.populate()
+ device = dt.get_device_by_path(self.loop_devices[0])
+ self.assertEqual(device.format.uuid, self._valid_uuid)
+
+
+class SetUUIDAfterMkFs(SetUUID):
+
+     """Tests various aspects of setting a UUID for a filesystem where the
+ native mkfs tool can't set the UUID.
+ """
+
+ def setUp(self):
+ an_fs = self._fs_class()
+ if an_fs._writeuuid.availability_errors:
+ self.skipTest("can not write UUID for filesystem %s" % an_fs.name)
+ super(SetUUIDAfterMkFs, self).setUp()
+
+ def test_set_uuid_later(self):
+         """Create the filesystem with a random UUID and reassign it later."""
+ an_fs = self._fs_class(device=self.loop_devices[0])
+ if an_fs._writeuuid.availability_errors:
+ self.skipTest("can not write UUID for filesystem %s" % an_fs.name)
+ self.assertIsNone(an_fs.create())
+
+ an_fs.uuid = self._valid_uuid
+ self.assertIsNone(an_fs.write_uuid())
+
+ # Use DeviceTree for detecting the loop device and check whether the
+ # UUID is correct.
+ dt = DeviceTree()
+ dt.reset()
+ dt.populate()
+ device = dt.get_device_by_path(self.loop_devices[0])
+ self.assertEqual(device.format.uuid, self._valid_uuid)
+
+ def test_set_invalid_uuid_later(self):
+ """Create the filesystem and try to reassign an invalid UUID later."""
+ an_fs = self._fs_class(device=self.loop_devices[0])
+ if an_fs._writeuuid.availability_errors:
+ self.skipTest("can not write UUID for filesystem %s" % an_fs.name)
+ self.assertIsNone(an_fs.create())
+
+ an_fs.uuid = self._invalid_uuid
+ with self.assertRaisesRegex(FSError, "bad UUID format"):
+ an_fs.write_uuid()
diff --git a/tests/formats_test/methods_test.py b/tests/formats_test/methods_test.py
index 9df74158..5659216c 100644
--- a/tests/formats_test/methods_test.py
+++ b/tests/formats_test/methods_test.py
@@ -300,7 +300,12 @@ class FSMethodsTestCase(FormatMethodsTestCase):
with patch.object(self.format, "_mkfs"):
self.format.exists = False
self.format.create()
- self.format._mkfs.do_task.assert_called_with(options=None, label=not self.format.relabels()) # pylint: disable=no-member
+ # pylint: disable=no-member
+ self.format._mkfs.do_task.assert_called_with(
+ options=None,
+ label=not self.format.relabels(),
+ set_uuid=self.format.can_assign_uuid()
+ )
def _test_setup_backend(self):
with patch.object(self.format, "_mount"):
diff --git a/tests/formats_test/uuid_test.py b/tests/formats_test/uuid_test.py
new file mode 100644
index 00000000..1ed40386
--- /dev/null
+++ b/tests/formats_test/uuid_test.py
@@ -0,0 +1,87 @@
+import unittest
+
+import blivet.formats.fs as fs
+import blivet.formats.swap as swap
+
+from . import fsuuid
+
+
+class InitializationTestCase(unittest.TestCase):
+
+ """Test FS object initialization."""
+
+ def test_uuids(self):
+ """Initialize some filesystems with valid and invalid UUIDs."""
+
+ # File systems that accept real UUIDs (RFC 4122)
+ for fscls in [fs.Ext2FS, fs.JFS, fs.ReiserFS, fs.XFS, fs.HFSPlus]:
+ uuid = "0invalid-uuid-with-righ-tlength00000"
+ self.assertFalse(fscls().uuid_format_ok(uuid))
+ uuid = "01234567-12341234123401234567891a"
+ self.assertFalse(fscls().uuid_format_ok(uuid))
+ uuid = "0123456-123-123-123-01234567891"
+ self.assertFalse(fscls().uuid_format_ok(uuid))
+ uuid = "01234567-xyz-1234-1234-1234-012345678911"
+ self.assertFalse(fscls().uuid_format_ok(uuid))
+ uuid = "01234567-1234-1234-1234-012345678911"
+ self.assertTrue(fscls().uuid_format_ok(uuid))
+
+ self.assertFalse(fs.FATFS().uuid_format_ok("1234-56789"))
+ self.assertFalse(fs.FATFS().uuid_format_ok("abcd-ef00"))
+ self.assertFalse(fs.FATFS().uuid_format_ok("12345678"))
+ self.assertTrue(fs.FATFS().uuid_format_ok("1234-5678"))
+ self.assertTrue(fs.FATFS().uuid_format_ok("ABCD-EF01"))
+
+ self.assertFalse(fs.NTFS().uuid_format_ok("12345678901234567"))
+ self.assertFalse(fs.NTFS().uuid_format_ok("abcdefgh"))
+ self.assertFalse(fs.NTFS().uuid_format_ok("abcdefabcdefabcd"))
+ self.assertTrue(fs.NTFS().uuid_format_ok("1234567890123456"))
+ self.assertTrue(fs.NTFS().uuid_format_ok("ABCDEFABCDEFABCD"))
+
+
+class XFSTestCase(fsuuid.SetUUIDWithMkFs):
+ _fs_class = fs.XFS
+ _invalid_uuid = "abcdefgh-ijkl-mnop-qrst-uvwxyz123456"
+ _valid_uuid = "97e3d40f-dca8-497d-8b86-92f257402465"
+
+
+class FATFSTestCase(fsuuid.SetUUIDWithMkFs):
+ _fs_class = fs.FATFS
+ _invalid_uuid = "c87ab0e1"
+ _valid_uuid = "DEAD-BEEF"
+
+
+class Ext2FSTestCase(fsuuid.SetUUIDWithMkFs):
+ _fs_class = fs.Ext2FS
+ _invalid_uuid = "abcdefgh-ijkl-mnop-qrst-uvwxyz123456"
+ _valid_uuid = "bad19a10-075a-4e99-8922-e4638722a567"
+
+
+class JFSTestCase(fsuuid.SetUUIDAfterMkFs):
+ _fs_class = fs.JFS
+ _invalid_uuid = "abcdefgh-ijkl-mnop-qrst-uvwxyz123456"
+ _valid_uuid = "ac54f987-b371-45d9-8846-7d6204081e5c"
+
+
+class ReiserFSTestCase(fsuuid.SetUUIDWithMkFs):
+ _fs_class = fs.ReiserFS
+ _invalid_uuid = "abcdefgh-ijkl-mnop-qrst-uvwxyz123456"
+ _valid_uuid = "1761023e-bab8-4919-a2cb-f26c89fe1cfe"
+
+
+class HFSPlusTestCase(fsuuid.SetUUIDAfterMkFs):
+ _fs_class = fs.HFSPlus
+ _invalid_uuid = "abcdefgh-ijkl-mnop-qrst-uvwxyz123456"
+ _valid_uuid = "3e6d84ce-cca9-4f55-9950-59e5b31f0e36"
+
+
+class NTFSTestCase(fsuuid.SetUUIDAfterMkFs):
+ _fs_class = fs.NTFS
+ _invalid_uuid = "b22193477ac947fb"
+ _valid_uuid = "BC3B34461B8344A6"
+
+
+class SwapSpaceTestCase(fsuuid.SetUUIDWithMkFs):
+ _fs_class = swap.SwapSpace
+ _invalid_uuid = "abcdefgh-ijkl-mnop-qrst-uvwxyz123456"
+ _valid_uuid = "01234567-1234-1234-1234-012345678912"

View File

@@ -36,6 +36,10 @@ stdenv.mkDerivation rec {
stripLen = 1;
});
makeFlags = let
isPy3 = python.isPy3 or false;
in optionalString (enablePython && isPy3) "PYTHON=python3";
postPatch = optionalString enablePython ''
sed -i -e 's|\$(LIBDIR)/libsepol.a|${libsepol}/lib/libsepol.a|' src/Makefile
'';

View File

@@ -1,15 +1,15 @@
{ stdenv, fetchurl, pkgconfig, systemd, libudev, utillinux, coreutils, libuuid, enable_dmeventd ? false }:
{ stdenv, fetchurl, pkgconfig, systemd, libudev, utillinux, coreutils, libuuid
, enable_dmeventd ? false
, enableThinProvisioning ? false, thin-provisioning-tools ? null
}:
let
version = "2.02.140";
in
stdenv.mkDerivation {
stdenv.mkDerivation rec {
name = "lvm2-${version}";
version = "2.02.168";
src = fetchurl {
url = "ftp://sources.redhat.com/pub/lvm2/releases/LVM2.${version}.tgz";
sha256 = "1jd46diyv7074fw8kxwq7imn4pl76g01d8y7z4scq0lkxf8jmpai";
sha256 = "03b62hcsj9z37ckd8c21wwpm07s9zblq7grfh58yzcs1vp6x38r3";
};
configureFlags = [
@@ -21,48 +21,49 @@ stdenv.mkDerivation {
"--enable-cmdlib"
] ++ stdenv.lib.optional enable_dmeventd " --enable-dmeventd";
# Make sure we use the default profile dir within the package if we don't
# have a valid /etc/lvm/lvm.conf.
postPatch = ''
sed -i -e '/get_default_\(unconfigured_\)\?config_profile_dir_CFG/,/^}/ {
/^{/,/^}/c { return "'"$out/etc/lvm/profile"'"; }
}' lib/config/config.c
'';
nativeBuildInputs = [ pkgconfig ];
buildInputs = [ libudev libuuid ];
preConfigure =
''
substituteInPlace scripts/lvmdump.sh \
--replace /usr/bin/tr ${coreutils}/bin/tr
substituteInPlace scripts/lvm2_activation_generator_systemd_red_hat.c \
--replace /usr/sbin/lvm $out/sbin/lvm \
--replace /usr/bin/udevadm ${systemd.udev.bin}/bin/udevadm
buildInputs = [
libudev libuuid
] ++ stdenv.lib.optional enableThinProvisioning thin-provisioning-tools;
sed -i /DEFAULT_SYS_DIR/d Makefile.in
sed -i /DEFAULT_PROFILE_DIR/d conf/Makefile.in
'';
preConfigure = ''
substituteInPlace scripts/lvm2_activation_generator_systemd_red_hat.c \
--replace /usr/bin/udevadm ${systemd.udev.bin}/bin/udevadm
'';
enableParallelBuilding = true;
#patches = [ ./purity.patch ];
makeFlags = [
"systemd_dir=$(out)/lib/systemd"
"systemd_unit_dir=$(systemd_dir)/system"
"systemd_generator_dir=$(systemd_dir)/system-generators"
"DEFAULT_PROFILE_DIR=$(out)/etc/lvm/profile"
];
# To prevent make install from failing.
preInstall = "installFlags=\"OWNER= GROUP= confdir=$out/etc\"";
installFlags = [ "confdir=$(out)/etc/lvm" "DEFAULT_SYS_DIR=$(confdir)" ];
installTargets = [
"install" "install_systemd_generators" "install_systemd_units"
];
# Install systemd stuff.
#installTargets = "install install_systemd_generators install_systemd_units install_tmpfiles_configuration";
postInstall =
''
substituteInPlace $out/lib/udev/rules.d/13-dm-disk.rules \
--replace $out/sbin/blkid ${utillinux}/sbin/blkid
# Systemd stuff
mkdir -p $out/etc/systemd/system $out/lib/systemd/system-generators
cp scripts/blk_availability_systemd_red_hat.service $out/etc/systemd/system
cp scripts/lvm2_activation_generator_systemd_red_hat $out/lib/systemd/system-generators
'';
postInstall = ''
substituteInPlace $out/lib/udev/rules.d/13-dm-disk.rules \
--replace $out/sbin/blkid ${utillinux}/sbin/blkid
'';
meta = {
homepage = http://sourceware.org/lvm2/;
description = "Tools to support Logical Volume Management (LVM) on Linux";
platforms = stdenv.lib.platforms.linux;
maintainers = with stdenv.lib.maintainers; [raskin];
inherit version;
maintainers = with stdenv.lib.maintainers; [ raskin ];
downloadPage = "ftp://sources.redhat.com/pub/lvm2/";
};
}

View File

@@ -1,44 +0,0 @@
diff -ru LVM2.2.02.95-orig/udev/10-dm.rules.in LVM2.2.02.95/udev/10-dm.rules.in
--- LVM2.2.02.95-orig/udev/10-dm.rules.in 2011-08-11 19:55:29.000000000 +0200
+++ LVM2.2.02.95/udev/10-dm.rules.in 2012-03-19 20:12:35.000000000 +0100
@@ -19,9 +19,8 @@
SUBSYSTEM!="block", GOTO="dm_end"
KERNEL!="dm-[0-9]*", GOTO="dm_end"
-# Set proper sbin path, /sbin has higher priority than /usr/sbin.
-ENV{DM_SBIN_PATH}="/sbin"
-TEST!="$env{DM_SBIN_PATH}/dmsetup", ENV{DM_SBIN_PATH}="/usr/sbin"
+# Set proper sbin path. Exit if dmsetup is not present.
+ENV{DM_SBIN_PATH}="(sbindir)"
TEST!="$env{DM_SBIN_PATH}/dmsetup", GOTO="dm_end"
# Device created, major and minor number assigned - "add" event generated.
diff -ru LVM2.2.02.95-orig/udev/Makefile.in LVM2.2.02.95/udev/Makefile.in
--- LVM2.2.02.95-orig/udev/Makefile.in 2012-02-24 10:53:12.000000000 +0100
+++ LVM2.2.02.95/udev/Makefile.in 2012-03-19 20:16:09.000000000 +0100
@@ -12,6 +12,7 @@
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
srcdir = @srcdir@
+sbindir = @sbindir@
top_srcdir = @top_srcdir@
top_builddir = @top_builddir@
@@ -26,7 +27,7 @@
ifeq ("@UDEV_HAS_BUILTIN_BLKID@", "yes")
BLKID_RULE=IMPORT{builtin}=\"blkid\"
else
- BLKID_RULE=IMPORT{program}=\"\$$env{DM_SBIN_PATH}\/blkid -o udev -p \$$tempnode\"
+ BLKID_RULE=IMPORT{program}=\"\/sbin\/blkid -o udev -p \$$tempnode\"
endif
CLEAN_TARGETS = 10-dm.rules 13-dm-disk.rules
@@ -36,7 +37,7 @@
vpath %.rules $(srcdir)
%.rules: %.rules.in
- $(SED) -e "s/(DM_DIR)/$(DM_DIR)/" -e "s/(BLKID_RULE)/$(BLKID_RULE)/" $< >$@
+ $(SED) -e "s/(DM_DIR)/$(DM_DIR)/" -e "s/(BLKID_RULE)/$(BLKID_RULE)/" -e "s|(sbindir)|$(sbindir)|" $< >$@
%_install: %.rules
$(INSTALL_DATA) -D $< $(udevdir)/$(<F)

View File

@@ -0,0 +1,42 @@
{ stdenv, fetchgit, patchutils }:
stdenv.mkDerivation rec {
name = "mpathconf-${version}";
version = "0.4.9.83";
src = fetchgit {
url = "http://pkgs.fedoraproject.org/git/rpms/device-mapper-multipath.git";
rev = "906e1e11285fc41097fd89893227664addb00848";
sha256 = "153cfsb4y5c4a415bahyspb88xj8bshdmiwk8aik72xni546zvid";
};
specfile = "device-mapper-multipath.spec";
phases = [ "unpackPhase" "patchPhase" "installPhase" "fixupPhase" ];
prePatch = ''
newPatches="$(sed -n -e 's/^Patch[0-9]\+: *//p' "$specfile")"
for patch in $newPatches; do
"${patchutils}/bin/filterdiff" \
-i '*/multipath/mpathconf' \
-i '*/multipath/mpathconf.[0-9]*' \
"$patch" > "$patch.tmp"
mv "$patch.tmp" "$patch"
done
patches="$patches $newPatches"
'';
installPhase = ''
install -vD multipath/mpathconf "$out/bin/mpathconf"
for manpage in multipath/*.[0-9]*; do
num="''${manpage##*.}"
install -m 644 -vD "$manpage" "$out/share/man/man$num/$manpage"
done
'';
meta = {
homepage = "http://pkgs.fedoraproject.org/cgit/device-mapper-multipath.git";
description = "Simple editing of /etc/multipath.conf";
license = stdenv.lib.licenses.gpl2;
};
}

View File

@@ -1,21 +1,24 @@
{ stdenv, fetchurl, buildPythonApplication, blivet }:
{ stdenv, fetchFromGitHub, python3Packages, nix }:
buildPythonApplication rec {
python3Packages.buildPythonApplication rec {
name = "nixpart-${version}";
version = "1.0.0";
version = "unstable-1.0.0";
src = fetchurl {
url = "https://github.com/aszlig/nixpart/archive/v${version}.tar.gz";
sha256 = "0avwd8p47xy9cydlbjxk8pj8q75zyl68gw2w6fnkk78dcb1a3swp";
src = fetchFromGitHub {
owner = "aszlig";
repo = "nixpart";
rev = "63df5695b4de82e372ede5a0b6a3caff51f1ee81";
sha256 = "1snz3xgnjfyjl0393jv2l13vmjl7yjpch4fx8cabwq3v0504h7wh";
};
propagatedBuildInputs = [ blivet ];
checkInputs = [ nix ];
propagatedBuildInputs = [ python3Packages.blivet ];
makeWrapperArgs = [ "--set GI_TYPELIB_PATH \"$GI_TYPELIB_PATH\"" ];
meta = {
description = "NixOS storage manager/partitioner";
license = stdenv.lib.licenses.gpl2Plus;
maintainers = [ stdenv.lib.maintainers.aszlig ];
platforms = stdenv.lib.platforms.linux;
broken = true;
};
}

View File

@@ -0,0 +1,67 @@
From 151dd81cd1e86c1329488a892fa5df38aae132f5 Mon Sep 17 00:00:00 2001
From: "Brian C. Lane" <bcl@redhat.com>
Date: Mon, 29 Feb 2016 11:34:31 -0800
Subject: [PATCH 25/28] Add libparted-fs-resize.pc
Add a pkgconfig file for the filesystem resize library.
(cherry picked from commit 56ede67e254132eba72b0c3e74b7b3677c22782d)
---
Makefile.am | 3 ++-
configure.ac | 1 +
libparted-fs-resize.pc.in | 10 ++++++++++
3 files changed, 13 insertions(+), 1 deletion(-)
create mode 100644 libparted-fs-resize.pc.in
diff --git a/Makefile.am b/Makefile.am
index 686b61c..c426b8c 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -6,6 +6,7 @@ EXTRA_DIST = \
.prev-version \
BUGS \
libparted.pc.in \
+ libparted-fs-resize.pc.in \
parted.spec.in \
parted.spec \
scripts/data/abi/baseline_symbols.txt \
@@ -18,7 +19,7 @@ EXTRA_DIST = \
aclocaldir=$(datadir)/aclocal
pcdir = $(libdir)/pkgconfig
-pc_DATA = libparted.pc
+pc_DATA = libparted.pc libparted-fs-resize.pc
# This is best not done via configure.ac, because automake's
# make distcheck target does not like auto-generated files
diff --git a/configure.ac b/configure.ac
index 436d0e2..3d57157 100644
--- a/configure.ac
+++ b/configure.ac
@@ -613,6 +613,7 @@ libparted/labels/Makefile
libparted/fs/Makefile
libparted/tests/Makefile
libparted.pc
+libparted-fs-resize.pc
parted/Makefile
partprobe/Makefile
doc/Makefile
diff --git a/libparted-fs-resize.pc.in b/libparted-fs-resize.pc.in
new file mode 100644
index 0000000..ed9b3d6
--- /dev/null
+++ b/libparted-fs-resize.pc.in
@@ -0,0 +1,10 @@
+prefix=@prefix@
+exec_prefix=@exec_prefix@
+libdir=@libdir@
+includedir=@includedir@
+
+Name: libparted-fs-resize
+Description: The GNU Parted filesystem resize shared library
+Version: @VERSION@
+Libs: -L${libdir} -lparted-fs-resize
+Cflags: -I${includedir}
--
2.5.0

View File

@@ -1,5 +1,9 @@
{ stdenv, fetchurl, devicemapper, libuuid, gettext, readline, perl, python2
, utillinux, check, enableStatic ? false, hurd ? null }:
{ stdenv, fetchurl, pkgconfig, autoreconfHook, devicemapper, libuuid, gettext
, readline, perl, python2, utillinux, check
, hurd ? null
, enableStatic ? false
}:
stdenv.mkDerivation rec {
name = "parted-3.2";
@@ -9,12 +13,18 @@ stdenv.mkDerivation rec {
sha256 = "1r3qpg3bhz37mgvp9chsaa3k0csby3vayfvz8ggsqz194af5i2w5";
};
patches = stdenv.lib.optional doCheck ./gpt-unicode-test-fix.patch;
patches = [
./fix-determining-sector-size.patch
./add-libparted-fs-resize.pc.patch
./fix-fat16-resize-crash.patch
] ++ stdenv.lib.optional doCheck ./gpt-unicode-test-fix.patch;
postPatch = stdenv.lib.optionalString doCheck ''
patchShebangs tests
'';
nativeBuildInputs = [ autoreconfHook pkgconfig ];
buildInputs = [ libuuid ]
++ stdenv.lib.optional (readline != null) readline
++ stdenv.lib.optional (gettext != null) gettext

View File

@@ -0,0 +1,101 @@
From 61dd3d4c5eb782eb43caa95342e63727db3f8281 Mon Sep 17 00:00:00 2001
From: David Cantrell <dcantrell@redhat.com>
Date: Thu, 17 Mar 2016 09:24:55 -0400
Subject: [PATCH] Use BLKSSZGET to get device sector size in
_device_probe_geometry()
Seen on certain newer devices (such as >32G SDHC memory cards), the
HDIO_GETGEO ioctl does not return useful information. The libparted
code records hardware and bios reported geometry information, but all of
that is largely unusable these days. The information is used in the
PedConstraint code for aligning partitions. The sector count is most
useful. Rather than only trying HDIO_GETGEO, first initialize the
bios_geom fields to 0 and then use BLKSSZGET to capture the sector size.
If that fails, try HDIO_GETGEO. And if that fails, raise a warning and
fall back on the library's default sector size macro.
This problem showed up on Raspberry Pi devices where users were
attempting to grow a partition to fill the SDHC card. Using the
optimal_aligned_constraint returned invalid geometry information
(98703359 instead of 124735488 sectors). The issue was reported here:
https://github.com/fedberry/fedberry/issues/8
And to the pyparted project:
https://github.com/rhinstaller/pyparted/issues/25
I've applied this patch locally to parted, rebuilt, and reinstalled it
and it is working correctly for the problem SDHC cards.
Signed-off-by: Brian C. Lane <bcl@redhat.com>
---
libparted/arch/linux.c | 40 +++++++++++++++++++++++++---------------
1 file changed, 25 insertions(+), 15 deletions(-)
diff --git a/libparted/arch/linux.c b/libparted/arch/linux.c
index 1198f52..326b956 100644
--- a/libparted/arch/linux.c
+++ b/libparted/arch/linux.c
@@ -852,6 +852,7 @@ _device_probe_geometry (PedDevice* dev)
LinuxSpecific* arch_specific = LINUX_SPECIFIC (dev);
struct stat dev_stat;
struct hd_geometry geometry;
+ int sector_size = 0;
if (!_device_stat (dev, &dev_stat))
return 0;
@@ -863,26 +864,35 @@ _device_probe_geometry (PedDevice* dev)
if (!dev->length)
return 0;
- /* The GETGEO ioctl is no longer useful (as of linux 2.6.x). We could
- * still use it in 2.4.x, but this is contentious. Perhaps we should
- * move to EDD. */
- dev->bios_geom.sectors = 63;
- dev->bios_geom.heads = 255;
- dev->bios_geom.cylinders
- = dev->length / (63 * 255);
+ /* initialize the bios_geom values to something */
+ dev->bios_geom.sectors = 0;
+ dev->bios_geom.heads = 0;
+ dev->bios_geom.cylinders = 0;
- /* FIXME: what should we put here? (TODO: discuss on linux-kernel) */
- if (!ioctl (arch_specific->fd, HDIO_GETGEO, &geometry)
+ if (!ioctl (arch_specific->fd, BLKSSZGET, &sector_size)) {
+ /* get the sector count first */
+ dev->bios_geom.sectors = 1 + (sector_size / PED_SECTOR_SIZE_DEFAULT);
+ dev->bios_geom.heads = 255;
+ } else if (!ioctl (arch_specific->fd, HDIO_GETGEO, &geometry)
&& geometry.sectors && geometry.heads) {
- dev->hw_geom.sectors = geometry.sectors;
- dev->hw_geom.heads = geometry.heads;
- dev->hw_geom.cylinders
- = dev->length / (dev->hw_geom.heads
- * dev->hw_geom.sectors);
+ /* if BLKSSZGET failed, try the deprecated HDIO_GETGEO */
+ dev->bios_geom.sectors = geometry.sectors;
+ dev->bios_geom.heads = geometry.heads;
} else {
- dev->hw_geom = dev->bios_geom;
+ ped_exception_throw (
+ PED_EXCEPTION_WARNING,
+ PED_EXCEPTION_OK,
+ _("Could not determine sector size for %s: %s.\n"
+ "Using the default sector size (%lld)."),
+ dev->path, strerror (errno), PED_SECTOR_SIZE_DEFAULT);
+ dev->bios_geom.sectors = 2;
+ dev->bios_geom.heads = 255;
}
+ dev->bios_geom.cylinders
+ = dev->length / (dev->bios_geom.heads
+ * dev->bios_geom.sectors);
+ dev->hw_geom = dev->bios_geom;
return 1;
}
--
2.5.0


@@ -0,0 +1,193 @@
From 3a4c152d38ce34481b0f4fda8aea4e71a8280d8f Mon Sep 17 00:00:00 2001
From: Mike Fleetwood <mike.fleetwood@googlemail.com>
Date: Sat, 27 Sep 2014 10:23:17 +0100
Subject: [PATCH 1/3] lib-fs-resize: Prevent crash resizing FAT16 file systems
Resizing FAT16 file system crashes in libparted/fs/r/fat/resize.c
create_resize_context() because it was dereferencing NULL pointer
fs_info->info_sector to copy the info_sector.
Only FAT32 file systems have info_sector populated by fat_open() ->
fat_info_sector_read(). FAT12 and FAT16 file systems don't have an
info_sector so pointer fs_info->info_sector remains assigned NULL from
fat_alloc(). When resizing a FAT file system create_resize_context()
was always dereferencing fs_info->info_sector to memory copy the
info_sector, hence it crashed for FAT12 and FAT16.
Make create_resize_context() only copy the info_sector for FAT32 file
systems.
Reported by Christian Hesse in
https://bugzilla.gnome.org/show_bug.cgi?id=735669
---
NEWS | 4 ++++
libparted/fs/r/fat/resize.c | 12 +++++++++---
2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/NEWS b/NEWS
index 297b0a5..da7db50 100644
--- a/NEWS
+++ b/NEWS
@@ -2,6 +2,10 @@ GNU parted NEWS -*- outline -*-
* Noteworthy changes in release ?.? (????-??-??) [?]
+** Bug Fixes
+
+ libparted-fs-resize: Prevent crash resizing FAT16 file systems.
+
* Noteworthy changes in release 3.2 (2014-07-28) [stable]
diff --git a/libparted/fs/r/fat/resize.c b/libparted/fs/r/fat/resize.c
index 919acf0..bfe60a0 100644
--- a/libparted/fs/r/fat/resize.c
+++ b/libparted/fs/r/fat/resize.c
@@ -668,11 +668,17 @@ create_resize_context (PedFileSystem* fs, const PedGeometry* new_geom)
/* preserve boot code, etc. */
new_fs_info->boot_sector = ped_malloc (new_geom->dev->sector_size);
- new_fs_info->info_sector = ped_malloc (new_geom->dev->sector_size);
memcpy (new_fs_info->boot_sector, fs_info->boot_sector,
new_geom->dev->sector_size);
- memcpy (new_fs_info->info_sector, fs_info->info_sector,
- new_geom->dev->sector_size);
+ new_fs_info->info_sector = NULL;
+ if (fs_info->fat_type == FAT_TYPE_FAT32)
+ {
+ PED_ASSERT (fs_info->info_sector != NULL);
+ new_fs_info->info_sector =
+ ped_malloc (new_geom->dev->sector_size);
+ memcpy (new_fs_info->info_sector, fs_info->info_sector,
+ new_geom->dev->sector_size);
+ }
new_fs_info->logical_sector_size = fs_info->logical_sector_size;
new_fs_info->sector_count = new_geom->length;
--
1.7.1
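The NULL-dereference guard in the patch above can be isolated into a small self-contained sketch. The types here are assumptions for illustration (the real `fat_info_t` in libparted is richer), but the pattern is the one the fix introduces: the resize context copies `info_sector` only for FAT32, since FAT12/FAT16 leave it NULL.

```c
/* Sketch of the FAT32-only info_sector copy, with assumed simplified types. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum fat_type { FAT_TYPE_FAT12, FAT_TYPE_FAT16, FAT_TYPE_FAT32 };

struct fat_info {
    enum fat_type fat_type;
    unsigned char *info_sector;  /* populated by fat_open() only for FAT32 */
    size_t sector_size;
};

/* Mirrors the fixed create_resize_context(): never memcpy through a NULL
 * info_sector on FAT12/FAT16. */
static struct fat_info *create_resize_context(const struct fat_info *old)
{
    struct fat_info *new_fs = calloc(1, sizeof *new_fs);
    if (!new_fs)
        return NULL;

    new_fs->fat_type = old->fat_type;
    new_fs->sector_size = old->sector_size;

    new_fs->info_sector = NULL;
    if (old->fat_type == FAT_TYPE_FAT32) {
        assert(old->info_sector != NULL);  /* analogous to PED_ASSERT */
        new_fs->info_sector = malloc(old->sector_size);
        if (new_fs->info_sector)
            memcpy(new_fs->info_sector, old->info_sector, old->sector_size);
    }
    return new_fs;
}
```

Before the fix, the unconditional memcpy crashed exactly on the FAT16 path that the new t3000 test case in the following patch exercises.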
From 2b5a4805533557b1bcdb5f70537569383f1fe7e8 Mon Sep 17 00:00:00 2001
From: Mike Fleetwood <mike.fleetwood@googlemail.com>
Date: Sat, 27 Sep 2014 11:31:46 +0100
Subject: [PATCH 2/3] tests: t3000-resize-fs.sh: Add FAT16 resizing test
Add FAT16 resizing test so that we don't regress again.
---
tests/t3000-resize-fs.sh | 16 +++++++++++++---
1 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/tests/t3000-resize-fs.sh b/tests/t3000-resize-fs.sh
index 8cab476..9084eb4 100755
--- a/tests/t3000-resize-fs.sh
+++ b/tests/t3000-resize-fs.sh
@@ -46,7 +46,7 @@ device_sectors_required=$(echo $default_end | sed 's/s$//')
# Ensure that $dev is large enough for this test
test $device_sectors_required -le $dev_n_sectors || fail=1
-for fs_type in hfs+ fat32; do
+for fs_type in hfs+ fat32 fat16; do
# create an empty $fs_type partition, cylinder aligned, size > 256 MB
parted -a min -s $dev mkpart p1 $start $default_end > out 2>&1 || fail=1
@@ -59,6 +59,7 @@ for fs_type in hfs+ fat32; do
wait_for_dev_to_appear_ ${dev}1
case $fs_type in
+ fat16) mkfs_cmd='mkfs.vfat -F 16'; fsck='fsck.vfat -v';;
fat32) mkfs_cmd='mkfs.vfat -F 32'; fsck='fsck.vfat -v';;
hfs*) mkfs_cmd='mkfs.hfs'; fsck=fsck.hfs;;
*) error "internal error: unhandled fs type: $fs_type";;
@@ -70,8 +71,17 @@ for fs_type in hfs+ fat32; do
# NOTE: shrinking is the only type of resizing that works.
# resize that file system to be one cylinder (8MiB) smaller
fs-resize ${dev}1 0 $new_end > out 2>&1 || fail=1
- # expect no output
- compare /dev/null out || fail=1
+
+ # check for expected output
+ case $fs_type in
+ fat16) cat << EOF > exp || framework_failure
+Information: Would you like to use FAT32? If you leave your file system as FAT16, then you will have no problems. If you convert to FAT32, and MS Windows is installed on this partition, then you must re-install the MS Windows boot loader. If you want to do this, you should consult the Parted manual (or your distribution's manual). Also, converting to FAT32 will make the file system unreadable by MS DOS, MS Windows 95a, and MS Windows NT.
+EOF
+ ;;
+ fat32) cat /dev/null > exp || framework_failure;; # expect no output
+ hfs*) cat /dev/null > exp || framework_failure;; # expect no output
+ esac
+ compare exp out || fail=1
# This is known to segfault with fsck.hfs from
# Fedora 16's hfsplus-tools-332.14-12.fc15.x86_64.
--
1.7.1
From ca37fcb204f97964ff2c92ea0221367e798810bb Mon Sep 17 00:00:00 2001
From: Mike Fleetwood <mike.fleetwood@googlemail.com>
Date: Sun, 28 Sep 2014 11:54:45 +0100
Subject: [PATCH 3/3] tests: t3000-resize-fs.sh: Add requirement on mkfs.vfat
Add test skipping requirement on mkfs.vfat for the FAT32 and FAT16 file
system resizing tests. This matches existing test skipping requirement
on mkfs.hfs for the hfs+ file system.
* tests/t3000-resize-fs.sh: Also correct skip_test_ to skip_.
* tests/t-lib-helpers.sh: Also update message for requirement of hfs.
---
tests/t-lib-helpers.sh | 8 +++++++-
tests/t3000-resize-fs.sh | 5 +++--
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/tests/t-lib-helpers.sh b/tests/t-lib-helpers.sh
index 4e83a05..c8684bb 100644
--- a/tests/t-lib-helpers.sh
+++ b/tests/t-lib-helpers.sh
@@ -20,7 +20,13 @@ require_acl_()
require_hfs_()
{
mkfs.hfs 2>&1 | grep '^usage:' \
- || skip_ "This test requires HFS support."
+ || skip_ "mkfs.hfs: command not found"
+}
+
+require_fat_()
+{
+ mkfs.vfat 2>&1 | grep '^Usage:' \
+ || skip_ "mkfs.vfat: command not found"
}
# Skip this test if we're not in SELinux "enforcing" mode.
diff --git a/tests/t3000-resize-fs.sh b/tests/t3000-resize-fs.sh
index 9084eb4..a79a307 100755
--- a/tests/t3000-resize-fs.sh
+++ b/tests/t3000-resize-fs.sh
@@ -18,7 +18,7 @@
. "${srcdir=.}/init.sh"; path_prepend_ ../parted .
require_hfs_
-
+require_fat_
require_root_
require_scsi_debug_module_
require_512_byte_sector_size_
@@ -31,7 +31,7 @@ default_end=546147s
# create memory-backed device
scsi_debug_setup_ dev_size_mb=550 > dev-name ||
- skip_test_ 'failed to create scsi_debug device'
+ skip_ 'failed to create scsi_debug device'
dev=$(cat dev-name)
fail=0
@@ -47,6 +47,7 @@ device_sectors_required=$(echo $default_end | sed 's/s$//')
test $device_sectors_required -le $dev_n_sectors || fail=1
for fs_type in hfs+ fat32 fat16; do
+ echo "fs_type=$fs_type"
# create an empty $fs_type partition, cylinder aligned, size > 256 MB
parted -a min -s $dev mkpart p1 $start $default_end > out 2>&1 || fail=1
--
1.7.1


@@ -2981,6 +2981,8 @@ in
nixbot = callPackage ../tools/misc/nixbot {};
nixpart = callPackage ../tools/filesystems/nixpart {};
nkf = callPackage ../tools/text/nkf {};
nlopt = callPackage ../development/libraries/nlopt {};
@@ -7689,6 +7691,10 @@ in
libbdplus = callPackage ../development/libraries/libbdplus { };
libblockdev = callPackage ../development/libraries/libblockdev {
inherit (gnome2) gtkdoc;
};
libbluray = callPackage ../development/libraries/libbluray { };
libbs2b = callPackage ../development/libraries/audio/libbs2b { };
@@ -7697,6 +7703,10 @@ in
libburn = callPackage ../development/libraries/libburn { };
libbytesize = callPackage ../development/libraries/libbytesize {
inherit (gnome2) gtkdoc;
};
libcaca = callPackage ../development/libraries/libcaca {
inherit (xlibs) libX11 libXext;
};
@@ -9566,6 +9576,8 @@ in
vmime = callPackage ../development/libraries/vmime { };
volume_key = callPackage ../development/libraries/volume-key { };
vrpn = callPackage ../development/libraries/vrpn { };
vsqlite = callPackage ../development/libraries/vsqlite { };
@@ -11383,6 +11395,8 @@ in
inherit modules;
};
mpathconf = callPackage ../os-specific/linux/mpathconf { };
multipath-tools = callPackage ../os-specific/linux/multipath-tools { };
musl = callPackage ../os-specific/linux/musl { };


@@ -236,8 +236,6 @@ in {
inherit python;
};
nixpart = callPackage ../tools/filesystems/nixpart { };
# This is used for NixOps to make sure we won't break it with the next major
# version of nixpart.
nixpart0 = callPackage ../tools/filesystems/nixpart/0.4 { };
@@ -18979,6 +18977,39 @@ in {
};
};
pocketlint = buildPythonPackage rec {
name = "pocketlint-${version}";
version = "0.13";
src = pkgs.fetchFromGitHub {
owner = "rhinstaller";
repo = "pocketlint";
rev = version;
sha256 = "1jymkd62n7dn533wczyw8xpxfmhj79ss340hgk1ny90xj2jaxs2f";
};
# Python 2.x and PyPy are not supported.
disabled = !isPy3k;
propagatedBuildInputs = [ self.polib self.six ];
postPatch = ''
sed -i -r \
-e 's,"(/usr/bin/)?python3-pylint","${self.pylint}/bin/pylint",' \
pocketlint/__init__.py
'';
checkPhase = ''
rm nix_run_setup.py
${python.interpreter} tests/pylint/runpylint.py
'';
meta = {
description = "Shared code for running pylint against RedHat projects";
homepage = "https://github.com/rhinstaller/pocketlint";
license = licenses.gpl2Plus;
};
};
polib = buildPythonPackage rec {
name = "polib-${version}";
@@ -21039,20 +21070,29 @@ in {
pyudev = buildPythonPackage rec {
name = "pyudev-${version}";
version = "0.16.1";
version = "0.21.0";
src = pkgs.fetchurl {
url = "mirror://pypi/p/pyudev/${name}.tar.gz";
sha256 = "765d1c14bd9bd031f64e2612225621984cb2bbb8cbc0c03538bcc4c735ff1c95";
sha256 = "0arz0dqp75sszsmgm6vhg92n1lsx91ihddx3m944f4ah0487ljq9";
};
disabled = isPy33;
postPatch = ''
sed -i -e '/udev_library_name/,/^ *libudev/ {
s|CDLL([^,]*|CDLL("${pkgs.systemd.lib}/lib/libudev.so.1"|p; d
}' pyudev/_libudev.py
sed -i -e '/library_name *= */d' -e 's/library_name/name/g' \
src/pyudev/_ctypeslib/utils.py
sed -i -e '/load_ctypes_library/ {
s,"libc","${stdenv.glibc.out}/lib/libc.so.6",g
s,'\'''udev'\''',"${pkgs.systemd.lib}/lib/libudev.so",g
}' src/pyudev/_os/pipe.py src/pyudev/core.py
'';
propagatedBuildInputs = with self; [ pkgs.systemd ];
# Note that tests aren't actually run, because there is no setup.py hook.
# This is fine because the tests would fail in the builder sandbox anyway.
checkInputs = with self; [ pytest mock docutils hypothesis ];
propagatedBuildInputs = [ self.six pkgs.systemd ];
meta = {
homepage = "http://pyudev.readthedocs.org/";