Compare commits


1 Commit

Author: Jonathan Ringer
SHA1: 12c5acf376
Message: 21.05 beta release
Date: 2021-05-22 17:56:13 -07:00
27579 changed files with 652367 additions and 1537402 deletions

@@ -55,30 +55,25 @@ trim_trailing_whitespace = unset
[*.lock]
indent_size = unset
# trailing whitespace is an actual syntax element of classic Markdown/
# CommonMark to enforce a line break
[*.md]
trim_trailing_whitespace = unset
# binaries
[*.nib]
end_of_line = unset
insert_final_newline = unset
trim_trailing_whitespace = unset
charset = unset
[eggs.nix]
trim_trailing_whitespace = unset
[nixos/modules/services/networking/ircd-hybrid/*.{conf,in}]
trim_trailing_whitespace = unset
[nixos/tests/systemd-networkd-vrf.nix]
trim_trailing_whitespace = unset
[pkgs/build-support/dotnetenv/Wrapper/**]
end_of_line = unset
indent_style = unset
insert_final_newline = unset
trim_trailing_whitespace = unset
[pkgs/build-support/upstream-updater/**]
indent_style = unset
trim_trailing_whitespace = unset
[pkgs/development/compilers/elm/registry.dat]
end_of_line = unset
insert_final_newline = unset
@@ -92,3 +87,10 @@ trim_trailing_whitespace = unset
[pkgs/tools/misc/timidity/timidity.cfg]
trim_trailing_whitespace = unset
[pkgs/tools/security/enpass/data.json]
insert_final_newline = unset
trim_trailing_whitespace = unset
[pkgs/top-level/emscripten-packages.nix]
trim_trailing_whitespace = unset

@@ -1,41 +0,0 @@
# This file contains a list of commits that are not likely what you
# are looking for in a blame, such as mass reformatting or renaming.
# You can set this file as a default ignore file for blame by running
# the following command.
#
# $ git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# To temporarily not use this file add
# --ignore-revs-file=""
# to your blame command.
#
# The ignoreRevsFile can't be set globally due to blame failing if the file isn't present.
# To not have to set the option in every repository it is needed in,
# save the following script in your path with the name "git-bblame"
# now you can run
# $ git bblame $FILE
# to use the .git-blame-ignore-revs file if it is present.
#
# #!/usr/bin/env bash
# repo_root=$(git rev-parse --show-toplevel)
# if [[ -e $repo_root/.git-blame-ignore-revs ]]; then
# git blame --ignore-revs-file="$repo_root/.git-blame-ignore-revs" $@
# else
# git blame $@
# fi
# nixos/modules/rename: Sort alphabetically
1f71224fe86605ef4cd23ed327b3da7882dad382
# manual: fix typos
feddd5e7f8c6f8167b48a077fa2a5394dc008999
# nixos: fix module paths in rename.nix
d08ede042b74b8199dc748323768227b88efcf7c
# fix indentation in mk-python-derivation.nix
d1c1a0c656ccd8bd3b25d3c4287f2d075faf3cf3
# fix indentation in meteor default.nix
a37a6de881ec4c6708e6b88fd16256bbc7f26bbd
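
Stripped of its comment markers, the `git-bblame` helper described in the removed file above is just a few lines of shell:

```bash
#!/usr/bin/env bash
# git-bblame: use the repo's .git-blame-ignore-revs when present, else plain blame.
repo_root=$(git rev-parse --show-toplevel)
if [[ -e "$repo_root/.git-blame-ignore-revs" ]]; then
  git blame --ignore-revs-file="$repo_root/.git-blame-ignore-revs" "$@"
else
  git blame "$@"
fi
```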

.github/CODEOWNERS

@@ -6,10 +6,6 @@
#
# For documentation on this file, see https://help.github.com/articles/about-codeowners/
# Mentioned users will get code review requests.
#
# IMPORTANT NOTE: in order to actually get pinged, commit access is required.
# This also holds true for GitHub teams. Since almost none of our teams have write
# permissions, you need to list all members of the team with commit access individually.
# This file
/.github/CODEOWNERS @edolstra
@@ -23,7 +19,7 @@
# Libraries
/lib @edolstra @nbp @infinisil
/lib/systems @alyssais @nbp @ericson2314 @matthewbauer
/lib/systems @nbp @ericson2314 @matthewbauer
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/cli.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
@@ -37,24 +33,15 @@
/pkgs/top-level/splice.nix @Ericson2314 @matthewbauer
/pkgs/top-level/release-cross.nix @Ericson2314 @matthewbauer
/pkgs/stdenv/generic @Ericson2314 @matthewbauer
/pkgs/stdenv/generic/check-meta.nix @Ericson2314 @matthewbauer @piegamesde
/pkgs/stdenv/cross @Ericson2314 @matthewbauer
/pkgs/build-support/cc-wrapper @Ericson2314
/pkgs/build-support/bintools-wrapper @Ericson2314
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314
/pkgs/build-support/setup-hooks/auto-patchelf.sh @layus
/pkgs/build-support/setup-hooks/auto-patchelf.py @layus
/pkgs/build-support/setup-hooks/auto-patchelf.sh @aszlig
# Nixpkgs build-support
/pkgs/build-support/writers @lassulus @Profpatsch
# Nixpkgs documentation
/doc @fricklerhandwerk
/maintainers/scripts/db-to-md.sh @jtojnar @ryantm
/maintainers/scripts/doc @jtojnar @ryantm
/doc/build-aux/pandoc-filters @jtojnar
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar
# NixOS Internals
/nixos/default.nix @nbp @infinisil
/nixos/lib/from-env.nix @nbp @infinisil
@@ -72,17 +59,10 @@
/nixos/doc/manual/development/writing-modules.xml @nbp
/nixos/doc/manual/man-nixos-option.xml @nbp
/nixos/modules/installer/tools/nixos-option.sh @nbp
/nixos/modules/system @dasJ
# NixOS integration test driver
/nixos/lib/test-driver @tfc
# Systemd
/nixos/modules/system/boot/systemd.nix @NixOS/systemd
/nixos/modules/system/boot/systemd @NixOS/systemd
/nixos/lib/systemd-*.nix @NixOS/systemd
/pkgs/os-specific/linux/systemd @NixOS/systemd
# Updaters
## update.nix
/maintainers/scripts/update.nix @jtojnar
@@ -91,13 +71,12 @@
/pkgs/common-updater/scripts/update-source-version @jtojnar
# Python-related code and docs
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
/pkgs/development/tools/poetry2nix @adisbladis
/pkgs/development/interpreters/python/hooks @FRidh @jonringer
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
/pkgs/development/tools/poetry2nix @adisbladis
# Haskell
/doc/languages-frameworks/haskell.section.md @cdepillabout @sternenseemann @maralorn
@@ -109,13 +88,13 @@
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn
# Perl
/pkgs/development/interpreters/perl @stigtsp @zakame @dasJ
/pkgs/top-level/perl-packages.nix @stigtsp @zakame @dasJ
/pkgs/development/perl-modules @stigtsp @zakame @dasJ
/pkgs/development/interpreters/perl @volth @stigtsp
/pkgs/top-level/perl-packages.nix @volth @stigtsp
/pkgs/development/perl-modules @volth @stigtsp
# R
/pkgs/applications/science/math/R @jbedo
/pkgs/development/r-modules @jbedo
/pkgs/applications/science/math/R @peti
/pkgs/development/r-modules @peti
# Ruby
/pkgs/development/interpreters/ruby @marsam
@@ -123,6 +102,11 @@
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq
/pkgs/build-support/rust @andir @danieldk @zowoq
# Darwin-related
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
/pkgs/os-specific/darwin @NixOS/darwin-maintainers
# C compilers
/pkgs/development/compilers/gcc @matthewbauer
@@ -132,14 +116,14 @@
/pkgs/top-level/unix-tools.nix @matthewbauer
/pkgs/development/tools/xcbuild @matthewbauer
# Audio
/nixos/modules/services/audio/botamusique.nix @mweinelt
/nixos/modules/services/audio/snapserver.nix @mweinelt
/nixos/tests/modules/services/audio/botamusique.nix @mweinelt
/nixos/tests/snapcast.nix @mweinelt
# Browsers
/pkgs/applications/networking/browsers/firefox @mweinelt
# Beam-related (Erlang, Elixir, LFE, etc)
/pkgs/development/beam-modules @gleber
/pkgs/development/interpreters/erlang @gleber
/pkgs/development/interpreters/lfe @gleber
/pkgs/development/interpreters/elixir @gleber
/pkgs/development/tools/build-managers/rebar @gleber
/pkgs/development/tools/build-managers/rebar3 @gleber
/pkgs/development/tools/erlang @gleber
# Jetbrains
/pkgs/applications/editors/jetbrains @edwtjo
@@ -167,39 +151,21 @@
/nixos/tests/hardened.nix @joachifm
/pkgs/os-specific/linux/kernel/hardened-config.nix @joachifm
# Home Automation
/nixos/modules/services/misc/home-assistant.nix @mweinelt
/nixos/modules/services/misc/zigbee2mqtt.nix @mweinelt
/nixos/tests/home-assistant.nix @mweinelt
/nixos/tests/zigbee2mqtt.nix @mweinelt
/pkgs/servers/home-assistant @mweinelt
/pkgs/tools/misc/esphome @mweinelt
# Network Time Daemons
/pkgs/tools/networking/chrony @thoughtpolice
/pkgs/tools/networking/ntp @thoughtpolice
/pkgs/tools/networking/openntpd @thoughtpolice
/nixos/modules/services/networking/ntp @thoughtpolice
# Network
/pkgs/tools/networking/kea/default.nix @mweinelt
/pkgs/tools/networking/babeld/default.nix @mweinelt
/nixos/modules/services/networking/babeld.nix @mweinelt
/nixos/modules/services/networking/kea.nix @mweinelt
/nixos/modules/services/networking/knot.nix @mweinelt
/nixos/tests/babeld.nix @mweinelt
/nixos/tests/kea.nix @mweinelt
/nixos/tests/knot.nix @mweinelt
# Dhall
/pkgs/development/dhall-modules @Gabriella439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriella439 @Profpatsch @ehmry
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch @ehmry
# Idris
/pkgs/development/idris-modules @Infinisil
# Bazel
/pkgs/development/tools/build-managers/bazel @Profpatsch
/pkgs/development/tools/build-managers/bazel @mboes @Profpatsch
# NixOS modules for e-mail and dns services
/nixos/modules/services/mail/mailman.nix @peti
@@ -208,18 +174,19 @@
/nixos/modules/services/mail/rspamd.nix @peti
# Emacs
/pkgs/applications/editors/emacs/elisp-packages @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
/pkgs/applications/editors/emacs-modes @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
# Neovim
/pkgs/applications/editors/neovim @jonringer @teto
/pkgs/applications/editors/neovim @jonringer
/pkgs/applications/editors/neovim @teto
# VimPlugins
/pkgs/applications/editors/vim/plugins @jonringer
/pkgs/misc/vim-plugins @jonringer @softinio
# VsCode Extensions
/pkgs/applications/editors/vscode/extensions @jonringer
/pkgs/misc/vscode-extensions @jonringer
# Prometheus exporter modules and tests
/nixos/modules/services/monitoring/prometheus/exporters.nix @WilliButz
@@ -227,64 +194,33 @@
/nixos/tests/prometheus-exporters.nix @WilliButz
# PHP interpreter, packages, extensions, tests and documentation
/doc/languages-frameworks/php.section.md @aanderse @etu @globin @ma27 @talyz
/nixos/tests/php @aanderse @etu @globin @ma27 @talyz
/pkgs/build-support/build-pecl.nix @aanderse @etu @globin @ma27 @talyz
/pkgs/development/interpreters/php @jtojnar @aanderse @etu @globin @ma27 @talyz
/pkgs/development/php-packages @aanderse @etu @globin @ma27 @talyz
/pkgs/top-level/php-packages.nix @jtojnar @aanderse @etu @globin @ma27 @talyz
/doc/languages-frameworks/php.section.md @NixOS/php
/nixos/tests/php @NixOS/php
/pkgs/build-support/build-pecl.nix @NixOS/php
/pkgs/development/interpreters/php @NixOS/php
/pkgs/development/php-packages @NixOS/php
/pkgs/top-level/php-packages.nix @NixOS/php
# Podman, CRI-O modules and related
/nixos/modules/virtualisation/containers.nix @zowoq @adisbladis
/nixos/modules/virtualisation/cri-o.nix @zowoq @adisbladis
/nixos/modules/virtualisation/podman @zowoq @adisbladis
/nixos/tests/cri-o.nix @zowoq @adisbladis
/nixos/tests/podman @zowoq @adisbladis
/nixos/modules/virtualisation/containers.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/cri-o.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/podman.nix @NixOS/podman @zowoq
/nixos/tests/cri-o.nix @NixOS/podman @zowoq
/nixos/tests/podman.nix @NixOS/podman @zowoq
# Docker tools
/pkgs/build-support/docker @roberth
/nixos/tests/docker-tools* @roberth
/doc/builders/images/dockertools.section.md @roberth
/pkgs/build-support/docker @roberth @utdemir
/nixos/tests/docker-tools-overlay.nix @roberth
/nixos/tests/docker-tools.nix @roberth
/doc/builders/images/dockertools.xml @roberth
# Blockchains
/pkgs/applications/blockchains @mmahut @RaghavSood
# Go
/doc/languages-frameworks/go.section.md @kalbasit @Mic92 @zowoq
/pkgs/build-support/go @kalbasit @Mic92 @zowoq
/pkgs/development/compilers/go @kalbasit @Mic92 @zowoq
# GNOME
/pkgs/desktops/gnome @jtojnar
/pkgs/desktops/gnome/extensions @piegamesde @jtojnar
/pkgs/development/go-modules @kalbasit @Mic92 @zowoq
/pkgs/development/go-packages @kalbasit @Mic92 @zowoq
# Cinnamon
/pkgs/desktops/cinnamon @mkg20001
# nim
/pkgs/development/compilers/nim @ehmry
/pkgs/development/nim-packages @ehmry
/pkgs/top-level/nim-packages.nix @ehmry
# terraform providers
/pkgs/applications/networking/cluster/terraform-providers @zowoq
# kubernetes
/nixos/doc/manual/configuration/kubernetes.chapter.md @zowoq
/nixos/modules/services/cluster/kubernetes @zowoq
/nixos/tests/kubernetes @zowoq
/pkgs/applications/networking/cluster/kubernetes @zowoq
# Matrix
/pkgs/servers/heisenbridge @piegamesde
/pkgs/servers/matrix-conduit @piegamesde
/pkgs/servers/matrix-synapse/matrix-appservice-irc @piegamesde
/nixos/modules/services/misc/heisenbridge.nix @piegamesde
/nixos/modules/services/misc/matrix-appservice-irc.nix @piegamesde
/nixos/modules/services/misc/matrix-conduit.nix @piegamesde
/nixos/tests/matrix-appservice-irc.nix @piegamesde
/nixos/tests/matrix-conduit.nix @piegamesde
# Dotnet
/pkgs/build-support/dotnet @IvarWithoutBones
/pkgs/development/compilers/dotnet @IvarWithoutBones

.github/CONTRIBUTING.md

@@ -0,0 +1,64 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](../COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and fill out the template
## Submitting changes
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Additional information.)
```
For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message).
Examples:
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:
* Be capitalized.
* Not start with the package name.
* Not have a period at the end.
* `meta.license` must be set and fit the upstream license.
* If there is no upstream license, `meta.license` should default to `lib.licenses.unfree`.
* `meta.maintainers` must be set.
See the nixpkgs manual for more details on [standard meta-attributes](https://nixos.org/nixpkgs/manual/#sec-standard-meta-attributes) and on how to [submit changes to nixpkgs](https://nixos.org/nixpkgs/manual/#chap-submitting-changes).
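
The commit-message format above maps directly onto plain `git commit`; for example, using the hydra entry from the list:

```bash
git commit -m "nixos/hydra: add bazBaz option" \
           -m "Dual baz behavior is needed to do foo."
```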
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such a one-line commit message is usually sufficient.
## Backporting changes
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
1. Take note of the commits in which the change was introduced into `master` branch.
2. Check out the target _release branch_, e.g. `release-20.09`. Do not use a _channel branch_ like `nixos-20.09` or `nixpkgs-20.09`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`. That's fine for minor version updates that only include security and bug fixes, commits that fixes an otherwise broken package or similar. Please also ensure the commits exists on the master branch; in the case of squashed or rebased merges, the commit hash will change and the new commits can be found in the merge message at the bottom of the master pull request.
5. Push to GitHub and open a backport pull request. Make sure to select the release branch (e.g. `release-20.09`) as the target branch of the pull request, and link to the pull request in which the original change was comitted to `master`. The pull request title should be the commit title with the release version as prefix, e.g. `[20.09]`.
6. When the backport pull request is merged and you have the necessary privileges you can also replace the label `9.needs: port to stable` with `8.has: port to stable` on the original pull request. This way maintainers can keep track of missing backports easier.
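
As shell commands, the backport steps above look roughly like this (a sketch only; `<original commit>` is a placeholder, and `upstream`/`origin` are assumed to point at NixOS/nixpkgs and your fork respectively):

```bash
git fetch upstream
git checkout -b backport upstream/release-20.09   # the target release branch, never a channel branch
git cherry-pick -x <original commit>              # use -xe to add a reason when it is not obvious
git push origin backport                          # then open a PR targeting release-20.09
```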
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).

@@ -7,34 +7,37 @@ assignees: ''
---
### Describe the bug
**Describe the bug**
A clear and concise description of what the bug is.
### Steps To Reproduce
**To Reproduce**
Steps to reproduce the behavior:
1. ...
2. ...
3. ...
### Expected behavior
**Expected behavior**
A clear and concise description of what you expected to happen.
### Screenshots
**Screenshots**
If applicable, add screenshots to help explain your problem.
### Additional context
**Additional context**
Add any other context about the problem here.
### Notify maintainers
**Notify maintainers**
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
### Metadata
**Metadata**
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
Maintainer information:
```yaml
# a list of nixpkgs attributes affected by the problem
attribute:
# a list of nixos modules affected by the problem
module:
```

@@ -1,34 +0,0 @@
---
name: Build failure
about: Create a report to help us improve
title: ''
labels: '0.kind: build failure'
assignees: ''
---
### Steps To Reproduce
Steps to reproduce the behavior:
1. build *X*
### Build log
```
log here if short otherwise a link to a gist
```
### Additional context
Add any other context about the problem here.
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

@@ -1,32 +0,0 @@
---
name: Missing or incorrect documentation
about: Help us improve the Nixpkgs and NixOS reference manuals
title: ''
labels: '9.needs: documentation'
assignees: ''
---
## Problem
<!-- describe your problem -->
## Checklist
<!-- make sure this issue is not redundant or obsolete -->
- [ ] checked [latest Nixpkgs manual] \([source][nixpkgs-source]) and [latest NixOS manual] \([source][nixos-source])
- [ ] checked [open documentation issues] for possible duplicates
- [ ] checked [open documentation pull requests] for possible solutions
[latest Nixpkgs manual]: https://nixos.org/manual/nixpkgs/unstable/
[latest NixOS manual]: https://nixos.org/manual/nixos/unstable/
[nixpkgs-source]: https://github.com/NixOS/nixpkgs/tree/master/doc
[nixos-source]: https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual
[open documentation issues]: https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+label%3A%229.needs%3A+documentation%22
[open documentation pull requests]: https://github.com/NixOS/nixpkgs/pulls?q=is%3Aopen+is%3Apr+label%3A%228.has%3A+documentation%22%2C%226.topic%3A+documentation%22
## Proposal
<!-- propose a solution -->

@@ -13,10 +13,10 @@ assignees: ''
<!-- Note that these are hard requirements -->
<!--
You can use the "Go to file" functionality on GitHub to find the package
You can use the "Go to file" functionality on github to find the package
Then you can go to the history for this package
Find the latest "package_name: old_version -> new_version" commit
The "new_version" is the current version of the package
The "new_version" is the the current version of the package
-->
- [ ] Checked the [nixpkgs master branch](https://github.com/NixOS/nixpkgs)
<!--
@@ -29,7 +29,7 @@ There's a high chance that you'll have the new version right away while helping
###### Project name
`nix search` name:
<!--
The current version can be found easily with the same process as above for checking the master branch
The current version can be found easily with the same process than above for checking the master branch
If an open PR is present for the package, take this version as the current one and link to the PR
-->
current version:

@@ -1,41 +1,27 @@
###### Description of changes
<!--
For package updates please link to a changelog or describe changes, this helps your fellow maintainers discover breaking updates.
For new packages please briefly describe the package or provide a link to its homepage.
-->
###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- Built on platform(s)
- [ ] x86_64-linux
- [ ] aarch64-linux
- [ ] x86_64-darwin
- [ ] aarch64-darwin
- [ ] For non-Linux: Is `sandbox = true` set in `nix.conf`? (See [Nix manual](https://nixos.org/manual/nix/stable/command-ref/conf-file.html))
- [ ] Tested, as applicable:
- [NixOS test(s)](https://nixos.org/manual/nixos/unstable/index.html#sec-nixos-tests) (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- and/or [package tests](https://nixos.org/manual/nixpkgs/unstable/#sec-package-tests)
- or, for functions and "core" functionality, tests in [lib/tests](https://github.com/NixOS/nixpkgs/blob/master/lib/tests) or [pkgs/test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/test)
- made sure NixOS tests are [linked](https://nixos.org/manual/nixpkgs/unstable/#ssec-nixos-tests-linking) to the relevant packages
- [ ] Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"`. Note: all changes have to be committed, also see [nixpkgs-review usage](https://github.com/Mic92/nixpkgs-review#usage)
- [ ] Tested basic functionality of all binary files (usually in `./result/bin/`)
- [22.11 Release Notes (or backporting 22.05 Release notes)](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#generating-2211-release-notes)
- [ ] (Package updates) Added a release notes entry if the change is major or breaking
- [ ] (Module updates) Added a release notes entry if the change is significant
- [ ] (Module addition) Added a release notes entry if adding a new NixOS module
- [ ] (Release notes changes) Ran `nixos/doc/manual/md-to-db.sh` to update generated release notes
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md).
<!--
To help with the large amounts of pull requests, we would appreciate your
reviews of other pull requests, especially simple package updates. Just leave a
comment describing what you have tested in the relevant package/service.
Reviewing helps to reduce the average time-to-merge for everyone.
Thanks a lot if you do!
List of open PRs: https://github.com/NixOS/nixpkgs/pulls
Reviewing guidelines: https://nixos.org/manual/nixpkgs/unstable/#chap-reviewing-contributions
-->
###### Motivation for this change
###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- [ ] Tested using sandboxing ([nix.useSandbox](https://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](https://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
- Built on platform(s)
- [ ] NixOS
- [ ] macOS
- [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Added a release notes entry if the change is major or breaking
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

@@ -1,10 +1,9 @@
# Stale bot information
- Thanks for your contribution!
- Our stale bot will never close an issue or PR.
- To remove the stale label, just leave a new comment.
- _How to find the right people to ping?_ &rarr; [`git blame`](https://git-scm.com/docs/git-blame) to the rescue! (or GitHub's history and blame buttons.)
- You can always ask for help on [our Discourse Forum](https://discourse.nixos.org/), [our Matrix room](https://matrix.to/#/#nix:nixos.org), or on the [#nixos IRC channel](https://web.libera.chat/#nixos).
- You can always ask for help on [our Discourse Forum](https://discourse.nixos.org/) or on the [#nixos IRC channel](https://webchat.freenode.net/#nixos).
## Suggestions for PRs

.github/labeler.yml

@@ -5,16 +5,18 @@
- pkgs/development/libraries/agda/**/*
- pkgs/top-level/agda-packages.nix
"6.topic: bsd":
- pkgs/os-specific/bsd/**/*
- pkgs/stdenv/freebsd/**/*
"6.topic: cinnamon":
- pkgs/desktops/cinnamon/**/*
- nixos/modules/services/x11/desktop-managers/cinnamon.nix
- nixos/tests/cinnamon.nix
"6.topic: emacs":
- nixos/modules/services/editors/emacs.nix
- nixos/modules/services/editors/emacs.xml
- nixos/tests/emacs-daemon.nix
- pkgs/applications/editors/emacs/elisp-packages/**/*
- pkgs/applications/editors/emacs-modes/**/*
- pkgs/applications/editors/emacs/**/*
- pkgs/build-support/emacs/**/*
- pkgs/top-level/emacs-packages.nix
@@ -42,8 +44,9 @@
"6.topic: golang":
- doc/languages-frameworks/go.section.md
- pkgs/build-support/go/**/*
- pkgs/development/compilers/go/**/*
- pkgs/development/go-modules/**/*
- pkgs/development/go-packages/**/*
"6.topic: haskell":
- doc/languages-frameworks/haskell.section.md
@@ -67,13 +70,6 @@
"6.topic: nixos":
- nixos/**/*
- pkgs/os-specific/linux/nixos-rebuild/**/*
"6.topic: nim":
- doc/languages-frameworks/nim.section.md
- pkgs/development/compilers/nim/*
- pkgs/development/nim-packages/**/*
- pkgs/top-level/nim-packages.nix
"6.topic: ocaml":
- doc/languages-frameworks/ocaml.section.md
@@ -139,12 +135,7 @@
"6.topic: vim":
- doc/languages-frameworks/vim.section.md
- pkgs/applications/editors/vim/**/*
- pkgs/applications/editors/vim/plugins/**/*
- nixos/modules/programs/neovim.nix
- pkgs/applications/editors/neovim/**/*
"6.topic: vscode":
- pkgs/applications/editors/vscode/**/*
- pkgs/misc/vim-plugins/**/*
"6.topic: xfce":
- nixos/doc/manual/configuration/xfce.xml

.github/stale.yml

@@ -5,5 +5,6 @@ exemptLabels:
- "1.severity: security"
- "2.status: never-stale"
staleLabel: "2.status: stale"
markComment: false
markComment: |
I marked this as stale due to inactivity. &rarr; [More info](https://github.com/NixOS/nixpkgs/blob/master/.github/STALE-BOT.md)
closeComment: false

@@ -1,38 +0,0 @@
name: Backport
on:
pull_request_target:
types: [closed, labeled]
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows write access to
# the GitHub repository. This means that it should not evaluate user input in a
# way that allows code injection.
permissions:
contents: read
jobs:
backport:
permissions:
contents: write # for zeebe-io/backport-action to create branch
pull-requests: write # for zeebe-io/backport-action to create PR to backport
name: Backport Pull Request
if: github.repository_owner == 'NixOS' && github.event.pull_request.merged == true && (github.event_name != 'labeled' || startsWith('backport', github.event.label.name))
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# required to find all branches
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- name: Create backport PRs
uses: zeebe-io/backport-action@v0.0.8
with:
# Config README: https://github.com/zeebe-io/backport-action#backport-action
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
pull_description: |-
Bot-based backport to `${target_branch}`, triggered by a label in #${pull_number}.
* [ ] Before merging, ensure that this backport complies with the [Criteria for Backporting](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#criteria-for-backporting-changes).
* Even as a non-committer, if you find that it does not comply, leave a comment.

@@ -1,29 +0,0 @@
name: Basic evaluation checks
on:
workflow_dispatch
# pull_request:
# branches:
# - master
# - release-**
# push:
# branches:
# - master
# - release-**
permissions:
contents: read
jobs:
tests:
runs-on: ubuntu-latest
# we don't limit this action to only NixOS repo since the checks are cheap and useful developer feedback
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v18
- uses: cachix/cachix-action@v11
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
# explicit list of supportedSystems is needed until aarch64-darwin becomes part of the trunk jobset
- run: nix-build pkgs/top-level/release.nix -A tarball.nixpkgs-basic-release-checks --arg supportedSystems '[ "aarch64-darwin" "aarch64-linux" "x86_64-linux" "x86_64-darwin" ]'

@@ -1,21 +0,0 @@
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p html-tidy
set -euo pipefail
shopt -s inherit_errexit
normalize() {
tidy \
--anchor-as-name no \
--coerce-endtags no \
--escape-scripts no \
--fix-backslash no \
--fix-style-tags no \
--fix-uri no \
--indent yes \
--wrap 0 \
< "$1" \
2> /dev/null
}
diff -U3 <(normalize "$1") <(normalize "$2")
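
For reference, the manual-rendering workflow further down in this comparison invokes the script on the two rendered options pages like so:

```bash
.github/workflows/compare-manuals.sh \
  docbook/share/doc/nixos/options.html \
  md/share/doc/nixos/options.html
```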

@@ -4,13 +4,8 @@ on:
branches:
- master
- release-**
permissions:
contents: read
jobs:
build:
permissions:
contents: write # for peter-evans/commit-comment to comment on commit
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
env:
@@ -22,12 +17,9 @@ jobs:
run: |
ISMERGE=$(curl -H 'Accept: application/vnd.github.groot-preview+json' -H "authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" https://api.github.com/repos/${{ env.GITHUB_REPOSITORY }}/commits/${{ env.GITHUB_SHA }}/pulls | jq -r '.[] | select(.merge_commit_sha == "${{ env.GITHUB_SHA }}") | any')
echo "::set-output name=ismerge::$ISMERGE"
# github events are eventually consistent, so wait until changes propagate to their DB
- run: sleep 60
if: steps.ismerge.outputs.ismerge != 'true'
- name: Warn if the commit was a direct push
if: steps.ismerge.outputs.ismerge != 'true'
uses: peter-evans/commit-comment@v2
uses: peter-evans/commit-comment@v1
with:
body: |
@${{ github.actor }}, you pushed a commit directly to master/release branch

@@ -11,33 +11,36 @@ on:
jobs:
tests:
runs-on: ubuntu-latest
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip editorconfig]')"
if: github.repository_owner == 'NixOS'
steps:
- name: Get list of changed files from PR
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo 'PR_DIFF<<EOF' >> $GITHUB_ENV
gh api \
repos/NixOS/nixpkgs/pulls/${{github.event.number}}/files --paginate \
| jq '.[] | select(.status != "removed") | .filename' \
> "$HOME/changed_files"
- name: print list of changed files
run: |
cat "$HOME/changed_files"
- uses: actions/checkout@v3
>> $GITHUB_ENV
echo 'EOF' >> $GITHUB_ENV
- uses: actions/checkout@v2
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v18
if: env.PR_DIFF
- uses: cachix/install-nix-action@v13
if: env.PR_DIFF
with:
# nixpkgs commit is pinned so that it doesn't break
# editorconfig-checker 2.4.0
nix_path: nixpkgs=https://github.com/NixOS/nixpkgs/archive/c473cc8714710179df205b153f4e9fa007107ff9.tar.gz
nix_path: nixpkgs=https://github.com/NixOS/nixpkgs/archive/f93ecc4f6bc60414d8b73dbdf615ceb6a2c604df.tar.gz
- name: install editorconfig-checker
run: nix-env -iA editorconfig-checker -f '<nixpkgs>'
if: env.PR_DIFF
- name: Checking EditorConfig
if: env.PR_DIFF
run: |
cat "$HOME/changed_files" | xargs -r editorconfig-checker -disable-indent-size
echo "$PR_DIFF" | xargs editorconfig-checker -disable-indent-size
- if: ${{ failure() }}
run: |
echo "::error :: Hey! It looks like your changes don't follow our editorconfig settings. Read https://editorconfig.org/#download to configure your editor so you never see this error again."

@@ -4,11 +4,6 @@ on:
pull_request_target:
types: [edited, opened, synchronize, reopened]
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows some write
# access to the GitHub API. This means that it should not evaluate user input in
# a way that allows code injection.
permissions:
contents: read
pull-requests: write
@@ -18,7 +13,7 @@ jobs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/labeler@v4
- uses: actions/labeler@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: true

@@ -12,28 +12,19 @@ on:
jobs:
nixos:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v18
- uses: cachix/install-nix-action@v13
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v11
- uses: cachix/cachix-action@v9
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
# This cache is for the nixos/nixpkgs manual builds and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building NixOS manual with DocBook options
- name: Building NixOS manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true nixos/release.nix -A manual.x86_64-linux
- name: Building NixOS manual with Markdown options
run: |
export NIX_PATH=nixpkgs=$(pwd)
nix-build \
--option restrict-eval true \
--arg configuration '{ documentation.nixos.options.allowDocBook = false; }' \
nixos/release.nix \
-A manual.x86_64-linux

@@ -12,19 +12,18 @@ on:
jobs:
nixpkgs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v18
- uses: cachix/install-nix-action@v13
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v11
- uses: cachix/cachix-action@v9
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
# This cache is for the nixos/nixpkgs manual builds and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building Nixpkgs manual

@@ -1,64 +0,0 @@
name: "Check NixOS Manual DocBook rendering against MD rendering"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Check every 24 hours
- cron: '0 0 * * *'
permissions:
contents: read
jobs:
check-rendering-equivalence:
permissions:
pull-requests: write # for peter-evans/create-or-update-comment to create or update comment
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v18
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v11
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Build DocBook and MD manuals
run: |
export NIX_PATH=nixpkgs=$(pwd)
nix-build \
--option restrict-eval true \
-o docbook nixos/release.nix \
-A manual.x86_64-linux
nix-build \
--option restrict-eval true \
--arg configuration '{ documentation.nixos.options.allowDocBook = false; }' \
-o md nixos/release.nix \
-A manual.x86_64-linux
- name: Compare DocBook and MD manuals
id: check
run: |
export NIX_PATH=nixpkgs=$(pwd)
.github/workflows/compare-manuals.sh \
docbook/share/doc/nixos/options.html \
md/share/doc/nixos/options.html
# if the manual can't be built we don't want to notify anyone.
# while this may temporarily hide rendering failures it will be a lot
# less noisy until all nixpkgs pull requests have stopped using
# docbook for option docs.
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() && steps.check.conclusion == 'failure' }}
with:
issue-number: 189318
body: |
Markdown and DocBook manuals do not agree.
Check https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }} for details.

.github/workflows/merge-staging.yml

@@ -0,0 +1,41 @@
name: "merge staging(-next)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 6 hours
- cron: '0 */6 * * *'
jobs:
sync-branch:
if: github.repository == 'NixOS/nixpkgs'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Merge master into staging-next
id: staging_next
uses: devmasx/merge-branch@v1.3.1
with:
type: now
from_branch: master
target_branch: staging-next
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Merge staging-next into staging
id: staging
uses: devmasx/merge-branch@v1.3.1
with:
type: now
from_branch: staging-next
target_branch: staging
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v1
if: ${{ failure() }}
with:
issue-number: 105153
body: |
An automatic merge${{ (steps.staging_next.outcome == 'failure' && ' from master to staging-next') || ((steps.staging.outcome == 'failure' && ' from staging-next to staging') || '') }} [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).
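
What the scheduled job automates is, in effect, two ordinary merges; a rough manual equivalent (a sketch only; the real workflow drives the devmasx/merge-branch action with the repository token):

```bash
set -euo pipefail
git fetch origin
git checkout staging-next && git merge --no-edit origin/master       && git push origin staging-next
git checkout staging      && git merge --no-edit origin/staging-next && git push origin staging
```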

@@ -1,34 +0,0 @@
name: NixOS manual checks
permissions: read-all
on:
pull_request_target:
branches-ignore:
- 'release-**'
paths:
- 'nixos/**/*.xml'
- 'nixos/**/*.md'
jobs:
tests:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v18
- name: Check DocBook files generated from Markdown are consistent
run: |
nixos/doc/manual/md-to-db.sh
git diff --exit-code || {
echo
echo 'Generated manual files are out of date.'
echo 'Please run'
echo
echo ' nixos/doc/manual/md-to-db.sh'
echo
exit 1
}

@@ -1,26 +0,0 @@
name: "No channel PR"
on:
pull_request:
branches:
- 'nixos-**'
- 'nixpkgs-**'
permissions:
contents: read
jobs:
fail:
permissions:
contents: none
name: "This PR is is targeting a channel branch"
runs-on: ubuntu-latest
steps:
- run: |
cat <<EOF
The nixos-* and nixpkgs-* branches are pushed to by the channel
release script and should not be merged into directly.
Please target the equivalent release-* branch or master instead.
EOF
exit 1

@@ -1,33 +0,0 @@
name: "Set pending OfBorg status"
on:
pull_request_target:
# Sets the ofborg-eval status to "pending" to signal that we are waiting for
# OfBorg even if it is running late. The status will be overwritten by OfBorg
# once it starts evaluation.
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows (restricted) write access to
# the GitHub repository. This means that it should not evaluate user input in a
# way that allows code injection.
permissions:
contents: read
jobs:
action:
if: github.repository_owner == 'NixOS'
permissions:
statuses: write
runs-on: ubuntu-latest
steps:
- name: "Set pending OfBorg status"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: Bearer $GITHUB_TOKEN" \
-d '{"context": "ofborg-eval", "state": "pending", "description": "Waiting for OfBorg..."}' \
"https://api.github.com/repos/NixOS/nixpkgs/commits/${{ github.event.pull_request.head.sha }}/statuses"

.github/workflows/pending-clear.yml

@@ -0,0 +1,21 @@
name: "clear pending status"
on:
check_suite:
types: [ completed ]
jobs:
action:
runs-on: ubuntu-latest
steps:
- name: clear pending status
if: github.repository_owner == 'NixOS' && github.event.check_suite.app.name == 'OfBorg'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d '{"state": "success", "target_url": " ", "description": " ", "context": "Wait for ofborg"}' \
"https://api.github.com/repos/NixOS/nixpkgs/statuses/${{ github.event.check_suite.head_sha }}"

.github/workflows/pending-set.yml

@@ -0,0 +1,20 @@
name: "set pending status"
on:
pull_request_target:
jobs:
action:
runs-on: ubuntu-latest
steps:
- name: set pending status
if: github.repository_owner == 'NixOS'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d '{"state": "pending", "target_url": " ", "description": "This pending status will be cleared when ofborg starts eval.", "context": "Wait for ofborg"}' \
"https://api.github.com/repos/NixOS/nixpkgs/statuses/${{ github.event.pull_request.head.sha }}"

@@ -1,59 +0,0 @@
# This action periodically merges base branches into staging branches.
# This is done to
# * prevent conflicts or rather resolve them early
# * make all potential breakage happen on the staging branch
# * and make sure that all major rebuilds happen before the staging
# branch gets merged back into its base branch.
name: "Periodic Merges (24h)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 24 hours
- cron: '0 0 * * *'
permissions:
contents: read
jobs:
periodic-merge:
permissions:
contents: write # for devmasx/merge-branch to merge branches
pull-requests: write # for peter-evans/create-or-update-comment to create or update comment
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
strategy:
# don't fail fast, so that all pairs are tried
fail-fast: false
# certain branches need to be merged in order, like master->staging-next->staging
# and disabling parallelism ensures the order of the pairs below.
max-parallel: 1
matrix:
pairs:
- from: master
into: haskell-updates
- from: release-22.05
into: staging-next-22.05
- from: staging-next-22.05
into: staging-22.05
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v3
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}
target_branch: ${{ matrix.pairs.into }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 105153
body: |
Periodic merge from `${{ matrix.pairs.from }}` into `${{ matrix.pairs.into }}` has [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -1,57 +0,0 @@
# This action periodically merges base branches into staging branches.
# This is done to
# * prevent conflicts or rather resolve them early
# * make all potential breakage happen on the staging branch
# * and make sure that all major rebuilds happen before the staging
# branch gets merged back into its base branch.
name: "Periodic Merges (6h)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 6 hours
- cron: '0 */6 * * *'
permissions:
contents: read
jobs:
periodic-merge:
permissions:
contents: write # for devmasx/merge-branch to merge branches
pull-requests: write # for peter-evans/create-or-update-comment to create or update comment
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
strategy:
# don't fail fast, so that all pairs are tried
fail-fast: false
# certain branches need to be merged in order, like master->staging-next->staging
# and disabling parallelism ensures the order of the pairs below.
max-parallel: 1
matrix:
pairs:
- from: master
into: staging-next
- from: staging-next
into: staging
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v3
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}
target_branch: ${{ matrix.pairs.into }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 105153
body: |
Periodic merge from `${{ matrix.pairs.from }}` into `${{ matrix.pairs.into }}` has [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

.github/workflows/rebase.yml

@@ -0,0 +1,134 @@
on:
issue_comment:
types:
- created
# This action allows people with write access to the repo to rebase a PRs base branch
# by commenting `/rebase ${branch}` on the PR while avoiding CODEOWNER notifications.
jobs:
rebase:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS' && github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')
steps:
- uses: peter-evans/create-or-update-comment@v1
with:
comment-id: ${{ github.event.comment.id }}
reactions: eyes
- uses: scherermichael-oss/action-has-permission@1.0.6
id: check-write-access
with:
required-permission: write
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: check permissions
run: |
echo "Commenter doesn't have write access to the repo"
exit 1
if: "! steps.check-write-access.outputs.has-permission"
- name: setup
run: |
curl "https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.issue.number }}" 2>/dev/null >pr.json
cat <<EOF >>"$GITHUB_ENV"
CAN_MODIFY=$(jq -r '.maintainer_can_modify' pr.json)
COMMITS=$(jq -r '.commits' pr.json)
CURRENT_BASE=$(jq -r '.base.ref' pr.json)
PR_BRANCH=$(jq -r '.head.ref' pr.json)
COMMENT_BRANCH=$(echo ${{ github.event.comment.body }} | awk "/^\/rebase / {print \$2}")
PULL_REQUEST=${{ github.event.issue.number }}
EOF
rm pr.json
- name: check branch
env:
PERMANENT_BRANCHES: "haskell-updates|master|nixos|nixpkgs|python-unstable|release|staging"
VALID_BRANCHES: "haskell-updates|master|python-unstable|release-20.09|staging|staging-20.09|staging-next"
run: |
message() {
cat <<EOF
Can't rebase $PR_BRANCH from $CURRENT_BASE onto $COMMENT_BRANCH (PR:$PULL_REQUEST COMMITS:$COMMITS)
EOF
}
if ! [[ "$COMMENT_BRANCH" =~ ^($VALID_BRANCHES)$ ]]; then
cat <<EOF
Check that the branch from the comment is valid:
$(message)
This action can only rebase onto these branches:
$VALID_BRANCHES
\`/rebase \${branch}\` must be at the start of the line
EOF
exit 1
fi
if [[ "$COMMENT_BRANCH" == "$CURRENT_BASE" ]]; then
cat <<EOF
Check that the branch from the comment isn't the current base branch:
$(message)
EOF
exit 1
fi
if [[ "$COMMENT_BRANCH" == "$PR_BRANCH" ]]; then
cat <<EOF
Check that the branch from the comment isn't the current branch:
$(message)
EOF
exit 1
fi
if [[ "$PR_BRANCH" =~ ^($PERMANENT_BRANCHES) ]]; then
cat <<EOF
Check that the PR branch isn't a permanent branch:
$(message)
EOF
exit 1
fi
if [[ "$CAN_MODIFY" != "true" ]]; then
cat <<EOF
Check that maintainers can edit the PR branch:
$(message)
EOF
exit 1
fi
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: rebase pull request
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git fetch origin
gh pr checkout "$PULL_REQUEST"
git rebase \
--onto="$(git merge-base origin/"$CURRENT_BASE" origin/"$COMMENT_BRANCH")" \
"HEAD~$COMMITS"
git push --force
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d "{ \"base\": \"$COMMENT_BRANCH\" }" \
"https://api.github.com/repos/${{ github.repository }}/pulls/$PULL_REQUEST"
curl \
-X PATCH \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d '{ "state": "closed" }' \
"https://api.github.com/repos/${{ github.repository }}/pulls/$PULL_REQUEST"
- uses: peter-evans/create-or-update-comment@v1
with:
issue-number: ${{ github.event.issue.number }}
body: |
Rebased, please reopen the pull request to restart CI
- uses: peter-evans/create-or-update-comment@v1
if: failure()
with:
issue-number: ${{ github.event.issue.number }}
body: |
[Failed to rebase](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})

@@ -1,55 +0,0 @@
name: "Update terraform-providers"
on:
schedule:
- cron: "0 3 * * *"
workflow_dispatch:
permissions:
contents: read
jobs:
tf-providers:
permissions:
contents: write # for peter-evans/create-pull-request to create branch
pull-requests: write # for peter-evans/create-pull-request to create a PR, for peter-evans/create-or-update-comment to create or update comment
if: github.repository_owner == 'NixOS' && github.ref == 'refs/heads/master' # ensure workflow_dispatch only runs on master
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v18
with:
nix_path: nixpkgs=channel:nixpkgs-unstable
- name: setup
id: setup
run: |
echo ::set-output name=title::"terraform-providers: update $(date -u +"%Y-%m-%d")"
- name: update terraform-providers
run: |
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
echo | nix-shell \
maintainers/scripts/update.nix \
--argstr commit true \
--argstr keep-going true \
--argstr max-workers 2 \
--argstr path terraform-providers
- name: clean repo
run: |
git clean -f
- name: create PR
uses: peter-evans/create-pull-request@v4
with:
body: |
Automatic update by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}
Check that all providers build with:
```
@ofborg build terraform.full
```
branch: terraform-providers-update
delete-branch: false
title: ${{ steps.setup.outputs.title }}
token: ${{ secrets.GITHUB_TOKEN }}

.gitignore

@@ -2,18 +2,12 @@
,*
.*.swp
.*.swo
.idea/
.vscode/
outputs/
result-*
result
!pkgs/development/python-modules/result
result-*
/doc/NEWS.html
/doc/NEWS.txt
/doc/manual.html
/doc/manual.pdf
/result
/source/
.version-suffix
.DS_Store
@@ -26,6 +20,3 @@ __pycache__
# generated by pkgs/common-updater/update-script.nix
update-git-commits.txt
# JetBrains IDEA module declaration file
/nixpkgs.iml

@@ -1 +0,0 @@
Daniel Løvbrøtte Olsen <me@dandellion.xyz> <daniel.olsen99@gmail.com>

@@ -1 +1 @@
22.11
21.05

@@ -1,135 +0,0 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and fill out the template
## Submitting changes
Read the ["Submitting changes"](https://nixos.org/nixpkgs/manual/#chap-submitting-changes) section of the nixpkgs manual. It explains how to write, test, and iterate on your change, and which branch to base your pull request against.
Below is a short excerpt of some points in there:
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Link to release notes. Additional information.)
```
For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message).
Examples:
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
https://www.mozilla.org/en-US/firefox/55.0/releasenotes/
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:
* Be capitalized.
* Not start with the package name.
* Not have a period at the end.
* `meta.license` must be set and fit the upstream license.
* If there is no upstream license, `meta.license` should default to `lib.licenses.unfree`.
* `meta.maintainers` must be set.
See the nixpkgs manual for more details on [standard meta-attributes](https://nixos.org/nixpkgs/manual/#sec-standard-meta-attributes).
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such a one-line commit message is usually sufficient.
## Rebasing between branches (i.e. from master to staging)
From time to time, changes between branches must be rebased, for example, if the
number of new rebuilds they would cause is too large for the target branch. When
rebasing, care must be taken to include only the intended changes, otherwise
many CODEOWNERS will be inadvertently requested for review. To achieve this,
rebasing should not be performed directly on the target branch, but on the merge
base between the current and target branch.
In the following example, we assume that the current branch, called `feature`,
is based on `master`, and we rebase it onto the merge base between
`master` and `staging` so that the PR can eventually be retargeted to
`staging` without causing a mess. The example uses `upstream` as the remote for `NixOS/nixpkgs.git`
while `origin` is the remote you are pushing to.
```console
# Rebase your commits onto the common merge base
git rebase --onto upstream/staging... upstream/master
# Force push your changes
git push origin feature --force-with-lease
```
The syntax `upstream/staging...` is equivalent to `upstream/staging...HEAD` and
stands for the merge base between `upstream/staging` and `HEAD` (hence between
`upstream/staging` and `upstream/master`).
Then change the base branch in the GitHub PR using the *Edit* button in the upper
right corner, and switch from `master` to `staging`. *After* the PR has been
retargeted it might be necessary to do a final rebase onto the target branch, to
resolve any outstanding merge conflicts.
```console
# Rebase onto target branch
git rebase upstream/staging
# Review and fixup possible conflicts
git status
# Force push your changes
git push origin feature --force-with-lease
```
## Backporting changes
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
You can add a label such as `backport release-22.05` to a PR, so that merging it will
automatically create a backport (via [a GitHub Action](.github/workflows/backport.yml)).
This also works for PR's that have already been merged, and might take a couple of minutes to trigger.
You can also create the backport manually:
1. Take note of the commits in which the change was introduced into `master` branch.
2. Check out the target _release branch_, e.g. `release-22.05`. Do not use a _channel branch_ like `nixos-22.05` or `nixpkgs-22.05-darwin`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`. That's fine for minor version updates that only include security and bug fixes, commits that fix an otherwise broken package, or similar. Please also ensure the commits exist on the master branch; in the case of squashed or rebased merges, the commit hash will change and the new commits can be found in the merge message at the bottom of the master pull request.
5. Push to GitHub and open a backport pull request. Make sure to select the release branch (e.g. `release-22.05`) as the target branch of the pull request, and link to the pull request in which the original change was committed to `master`. The pull request title should be the commit title with the release version as prefix, e.g. `[22.05]`.
6. When the backport pull request is merged and you have the necessary privileges, you can also replace the label `9.needs: port to stable` with `8.has: port to stable` on the original pull request. This way maintainers can keep track of missing backports more easily.
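Putting the manual steps together, the full sequence might look like the following sketch; the branch name is only an example and `<original commit>` is a placeholder:
```console
# Fetch the latest state of the remotes
git fetch upstream
# Steps 2-3: check out the release branch and create a working branch
git checkout -b backport upstream/release-22.05
# Step 4: cherry-pick the original commit, recording its hash with -x
git cherry-pick -x <original commit>
# Step 5: push and open the pull request against release-22.05
git push origin backport
```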
## Criteria for Backporting changes
Anything that does not cause user or downstream dependency regressions can be backported. This includes:
- New Packages / Modules
- Security / Patch updates
- Version updates which include new functionality (but no breaking changes)
- Services which require a client to be up-to-date regardless. (E.g. `spotify`, `steam`, or `discord`)
- Security critical applications (E.g. `firefox`)
## Generating 22.11 Release Notes
Documentation in nixpkgs is transitioning to a markdown-centric workflow. Release notes now require a translation step to convert from markdown to a compatible docbook document.
Steps for updating the 22.11 release notes (a command sketch follows the list):
1. Edit `nixos/doc/manual/release-notes/rl-2211.section.md` with the desired changes
2. Run `./nixos/doc/manual/md-to-db.sh` to render `nixos/doc/manual/from_md/release-notes/rl-2211.section.xml`
3. Include changes to `rl-2211.section.md` and `rl-2211.section.xml` in the same commit.
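Assuming the repository root as the working directory, the three steps roughly correspond to the following sketch (the commit message is only an example):
```console
$EDITOR nixos/doc/manual/release-notes/rl-2211.section.md
./nixos/doc/manual/md-to-db.sh
git add nixos/doc/manual/release-notes/rl-2211.section.md \
        nixos/doc/manual/from_md/release-notes/rl-2211.section.xml
git commit -m "nixos/doc: update 22.11 release notes"
```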
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).

View File

@@ -1,4 +1,4 @@
Copyright (c) 2003-2022 Eelco Dolstra and the Nixpkgs/NixOS contributors
Copyright (c) 2003-2021 Eelco Dolstra and the Nixpkgs/NixOS contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the

View File

@@ -1,19 +1,14 @@
<p align="center">
<a href="https://nixos.org#gh-light-mode-only">
<img src="https://raw.githubusercontent.com/NixOS/nixos-homepage/master/logo/nixos-hires.png" width="500px" alt="NixOS logo"/>
</a>
<a href="https://nixos.org#gh-dark-mode-only">
<img src="https://raw.githubusercontent.com/NixOS/nixos-artwork/master/logo/nixos-white.png" width="500px" alt="NixOS logo"/>
</a>
<a href="https://nixos.org/nixos"><img src="https://nixos.org/logo/nixos-hires.png" width="500px" alt="NixOS logo" /></a>
</p>
<p align="center">
<a href="https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md"><img src="https://img.shields.io/github/contributors-anon/NixOS/nixpkgs" alt="Contributors badge" /></a>
<a href="https://opencollective.com/nixos"><img src="https://opencollective.com/nixos/tiers/supporter/badge.svg?label=supporters&color=brightgreen" alt="Open Collective supporters" /></a>
<a href="https://www.codetriage.com/nixos/nixpkgs"><img src="https://www.codetriage.com/nixos/nixpkgs/badges/users.svg" alt="Code Triagers badge" /></a>
<a href="https://opencollective.com/nixos"><img src="https://opencollective.com/nixos/tiers/supporter/badge.svg?label=Supporter&color=brightgreen" alt="Open Collective supporters" /></a>
</p>
[Nixpkgs](https://github.com/nixos/nixpkgs) is a collection of over
80,000 software packages that can be installed with the
60,000 software packages that can be installed with the
[Nix](https://nixos.org/nix/) package manager. It also implements
[NixOS](https://nixos.org/nixos/), a purely-functional Linux distribution.
@@ -26,10 +21,10 @@
# Community
* [Discourse Forum](https://discourse.nixos.org/)
* [Matrix Chat](https://matrix.to/#/#community:nixos.org)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)
* [NixOS Weekly](https://weekly.nixos.org/)
* [Community-maintained wiki](https://nixos.wiki/)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Telegram, IRC, etc.)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Matrix, Telegram, other IRC channels, etc.)
# Other Project Repositories
@@ -51,14 +46,14 @@ Nixpkgs and NixOS are built and tested by our continuous integration
system, [Hydra](https://hydra.nixos.org/).
* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for the NixOS 22.05 release](https://hydra.nixos.org/jobset/nixos/release-22.05)
* [Continuous package builds for the NixOS 20.09 release](https://hydra.nixos.org/jobset/nixos/release-20.09)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for the NixOS 22.05 release](https://hydra.nixos.org/job/nixos/release-22.05/tested#tabs-constituents)
* [Tests for the NixOS 20.09 release](https://hydra.nixos.org/job/nixos/release-20.09/tested#tabs-constituents)
Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
met, the Nixpkgs expressions are distributed via [Nix
channels](https://nixos.org/manual/nix/stable/package-management/channels.html).
channels](https://nixos.org/nix/manual/#sec-channels).
# Contributing
@@ -92,7 +87,7 @@ Most contributions are based on and merged into these branches:
deemed of sufficiently high quality
For more information about contributing to the project, please visit
the [contributing page](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md).
the [contributing page](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
# Donations
@@ -102,8 +97,7 @@ Foundation](https://nixos.org/nixos/foundation.html). To ensure the
continuity and expansion of the NixOS infrastructure, we are looking
for donations to our organization.
You can donate to the NixOS foundation through [SEPA bank
transfers](https://nixos.org/donate.html) or by using Open Collective:
You can donate to the NixOS foundation by using Open Collective:
<a href="https://opencollective.com/nixos#support"><img src="https://opencollective.com/nixos/tiers/supporter.svg?width=890" /></a>

View File

@@ -1,21 +1,5 @@
MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$' -not -name README.md)))
PANDOC ?= pandoc
pandoc_media_dir = media
# NOTE: Keep in sync with NixOS manual (/nixos/doc/manual/md-to-db.sh) and conversion script (/maintainers/scripts/db-to-md.sh).
# TODO: Remove raw-attribute when we can get rid of DocBook altogether.
pandoc_commonmark_enabled_extensions = +attributes+fenced_divs+footnotes+bracketed_spans+definition_lists+pipe_tables+raw_attribute
# Not needed:
# - docbook-reader/citerefentry-to-rst-role.lua (only relevant for DocBook → MarkDown/rST/MyST)
pandoc_flags = --extract-media=$(pandoc_media_dir) \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
--lua-filter=build-aux/pandoc-filters/myst-reader/roles.lua \
--lua-filter=build-aux/pandoc-filters/link-unix-man-references.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/labelless-link-is-xref.lua \
-f commonmark$(pandoc_commonmark_enabled_extensions)+smart
.PHONY: all
all: validate format out/html/index.html out/epub/manual.epub
@@ -38,7 +22,7 @@ fix-misc-xml:
.PHONY: clean
clean:
rm -f ${MD_TARGETS} doc-support/result .version manual-full.xml functions/library/locations.xml functions/library/generated
rm -rf ./out/ ./highlightjs ./media
rm -rf ./out/ ./highlightjs
.PHONY: validate
validate: manual-full.xml doc-support/result
@@ -55,7 +39,7 @@ out/html/index.html: doc-support/result manual-full.xml style.css highlightjs
mkdir -p out/html/highlightjs/
cp -r highlightjs out/html/
cp -r $(pandoc_media_dir) out/html/
cp -r media out/html/
cp ./overrides.css out/html/
cp ./style.css out/html/style.css
@@ -70,7 +54,7 @@ out/epub/manual.epub: manual-full.xml
doc-support/result/epub.xsl \
./manual-full.xml
cp -r $(pandoc_media_dir) out/epub/scratch/OEBPS
cp -r media out/epub/scratch/OEBPS
cp ./overrides.css out/epub/scratch/OEBPS
cp ./style.css out/epub/scratch/OEBPS
mkdir -p out/epub/scratch/OEBPS/images/callouts/
@@ -105,12 +89,16 @@ functions/library/generated: doc-support/result
ln -rfs ./doc-support/result/function-docs functions/library/generated
%.section.xml: %.section.md
$(PANDOC) $^ -t docbook \
$(pandoc_flags) \
-o $@
pandoc $^ -t docbook \
--extract-media=media \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
-f markdown+smart \
| cat > $@
%.chapter.xml: %.chapter.md
$(PANDOC) $^ -t docbook \
pandoc $^ -t docbook \
--top-level-division=chapter \
$(pandoc_flags) \
-o $@
--extract-media=media \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
-f markdown+smart \
| cat > $@

View File

@@ -1,23 +0,0 @@
--[[
Converts Code AST nodes produced by pandoc's DocBook reader
from citerefentry elements into AST for corresponding role
for reStructuredText.
We use a subset of MyST syntax (CommonMark with features from rST),
so let's use the rST AST for rST features.
Reference: https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage
]]
function Code(elem)
elem.classes = elem.classes:map(function (x)
if x == 'citerefentry' then
elem.attributes['role'] = 'manpage'
return 'interpreted-text'
else
return x
end
end)
return elem
end

View File

@@ -1,34 +0,0 @@
--[[
Converts Link AST nodes with empty label to DocBook xref elements.
This is a temporary script to be able to use cross-references conveniently
using syntax taken from MyST, while we still use docbook-xsl
for generating the documentation.
Reference: https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing
]]
local function starts_with(start, str)
return str:sub(1, #start) == start
end
local function escape_xml_arg(arg)
amps = arg:gsub('&', '&amp;')
amps_quotes = amps:gsub('"', '&quot;')
amps_quotes_lt = amps_quotes:gsub('<', '&lt;')
return amps_quotes_lt
end
function Link(elem)
has_no_content = #elem.content == 0
targets_anchor = starts_with('#', elem.target)
has_no_attributes = elem.title == '' and elem.identifier == '' and #elem.classes == 0 and #elem.attributes == 0
if has_no_content and targets_anchor and has_no_attributes then
-- xref expects idref without the pound-sign
target_without_hash = elem.target:sub(2, #elem.target)
return pandoc.RawInline('docbook', '<xref linkend="' .. escape_xml_arg(target_without_hash) .. '" />')
end
end

View File

@@ -1,44 +0,0 @@
--[[
Converts AST for reStructuredText roles into corresponding
DocBook elements.
Currently, only a subset of roles is supported.
Reference:
List of roles:
https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html
manpage:
https://tdg.docbook.org/tdg/5.1/citerefentry.html
file:
https://tdg.docbook.org/tdg/5.1/filename.html
]]
function Code(elem)
if elem.classes:includes('interpreted-text') then
local tag = nil
local content = elem.text
if elem.attributes['role'] == 'manpage' then
tag = 'citerefentry'
local title, volnum = content:match('^(.+)%((%w+)%)$')
if title == nil then
-- No volnum in parentheses.
title = content
end
content = '<refentrytitle>' .. title .. '</refentrytitle>' .. (volnum ~= nil and ('<manvolnum>' .. volnum .. '</manvolnum>') or '')
elseif elem.attributes['role'] == 'file' then
tag = 'filename'
elseif elem.attributes['role'] == 'command' then
tag = 'command'
elseif elem.attributes['role'] == 'option' then
tag = 'option'
elseif elem.attributes['role'] == 'var' then
tag = 'varname'
elseif elem.attributes['role'] == 'env' then
tag = 'envar'
end
if tag ~= nil then
return pandoc.RawInline('docbook', '<' .. tag .. '>' .. content .. '</' .. tag .. '>')
end
end
end

View File

@@ -1,17 +0,0 @@
--[[
Turns a manpage reference into a link, when a mapping is defined below.
]]
local man_urls = {
["tmpfiles.d(5)"] = "https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html",
["nix.conf(5)"] = "https://nixos.org/manual/nix/stable/#sec-conf-file",
["systemd.time(7)"] = "https://www.freedesktop.org/software/systemd/man/systemd.time.html",
["systemd.timer(5)"] = "https://www.freedesktop.org/software/systemd/man/systemd.timer.html",
}
function Code(elem)
local is_man_role = elem.classes:includes('interpreted-text') and elem.attributes['role'] == 'manpage'
if is_man_role and man_urls[elem.text] ~= nil then
return pandoc.Link(elem, man_urls[elem.text])
end
end

View File

@@ -1,29 +0,0 @@
--[[
Replaces Str AST nodes containing {role}, followed by a Code node
by a Code node with attrs that would be produced by rST reader
from the role syntax.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Inlines(inlines)
for i = #inlines-1,1,-1 do
local first = inlines[i]
local second = inlines[i+1]
local correct_tags = first.tag == 'Str' and second.tag == 'Code'
if correct_tags then
-- docutils supports alphanumeric strings separated by [-._:]
-- We are slightly more liberal for simplicity.
local role = first.text:match('^{([-._+:%w]+)}$')
if role ~= nil then
inlines:remove(i)
second.attributes['role'] = role
second.classes:insert('interpreted-text')
end
end
end
return inlines
end

View File

@@ -1,25 +0,0 @@
--[[
Replaces Code nodes with attrs that would be produced by rST reader
from the role syntax by a Str AST node containing {role}, followed by a Code node.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Code(elem)
local role = elem.attributes['role']
if elem.classes:includes('interpreted-text') and role ~= nil then
elem.classes = elem.classes:filter(function (c)
return c ~= 'interpreted-text'
end)
elem.attributes['role'] = nil
return {
pandoc.Str('{' .. role .. '}'),
elem,
}
end
end

View File

@@ -1,55 +1,8 @@
# Fetchers {#chap-pkgs-fetchers}
Building software with Nix often requires downloading source code and other files from the internet.
`nixpkgs` provides *fetchers* for different protocols and services. Fetchers are functions that simplify downloading files.
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
## Caveats
Fetchers create [fixed output derivations](https://nixos.org/manual/nix/stable/#fixed-output-drvs) from downloaded files.
Nix can reuse the downloaded files via the hash of the resulting derivation.
The fact that the hash belongs to the Nix derivation output and not the file itself can lead to confusion.
For example, consider the following fetcher:
```nix
fetchurl {
url = "http://www.example.org/hello-1.0.tar.gz";
sha256 = "0v6r3wwnsk5pdjr188nip3pjgn1jrn5pc5ajpcfy6had6b3v4dwm";
};
```
A common mistake is to update a fetcher's URL, or a version parameter, without updating the hash.
```nix
fetchurl {
url = "http://www.example.org/hello-1.1.tar.gz";
sha256 = "0v6r3wwnsk5pdjr188nip3pjgn1jrn5pc5ajpcfy6had6b3v4dwm";
};
```
**This will reuse the old contents**.
Remember to invalidate the hash argument, in this case by setting the `sha256` attribute to an empty string.
```nix
fetchurl {
url = "http://www.example.org/hello-1.1.tar.gz";
sha256 = "";
};
```
Use the resulting error message to determine the correct hash.
```
error: hash mismatch in fixed-output derivation '/path/to/my.drv':
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-RApQUm78dswhBLC/rfU9y0u6pSAzHceIJqgmetRD24E=
```
A similar problem arises while testing changes to a fetcher's implementation. If the output of the derivation already exists in the Nix store, test failures can go undetected. The [`invalidateFetcherByDrvHash`](#tester-invalidateFetcherByDrvHash) function helps prevent reusing cached derivations.
## `fetchurl` and `fetchzip` {#fetchurl}
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.
The two fetcher primitives are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
```nix
{ stdenv, fetchurl }:
@@ -63,103 +16,63 @@ stdenv.mkDerivation {
}
```
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
## `fetchpatch` {#fetchpatch}
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
- `relative`: Similar to using `git-diff`'s `--relative` flag, only keep changes inside the specified directory, making paths relative to it.
- `stripLen`: Remove the first `stripLen` components of pathnames in the patch.
- `extraPrefix`: Prefix pathnames by this string.
- `excludes`: Exclude files matching these patterns (applies after the above arguments).
- `includes`: Include only files matching these patterns (applies after the above arguments).
- `revert`: Revert the patch.
Note that because the checksum is computed after applying these effects, using or modifying these arguments will have no effect unless the `sha256` argument is changed as well.
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
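Putting the arguments above together, a `fetchpatch` call might look like the following sketch; the URL, hash, and path prefix are illustrative only:
```nix
fetchpatch {
  # illustrative upstream patch URL and placeholder hash
  url = "https://github.com/example/project/commit/0123456789abcdef0123456789abcdef01234567.patch";
  stripLen = 1;
  extraPrefix = "vendor/project/";
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```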
Most other fetchers return a directory rather than a single file.
Other fetcher functions allow you to add source code directly from a VCS such as Subversion or Git. These are mostly straightforward names based on the name of the command used with the VCS system. Because they give you a working repository, they act most like `fetchzip`.
## `fetchsvn` {#fetchsvn}
## `fetchsvn`
Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha256`.
## `fetchgit` {#fetchgit}
## `fetchgit`
Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be the full git commit id (SHA1 hash) or a tag name like `refs/tags/v1.0`.
Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
Additionally the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from server, see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) for more information:
```nix
{ stdenv, fetchgit }:
stdenv.mkDerivation {
name = "hello";
src = fetchgit {
url = "https://...";
sparseCheckout = ''
directory/to/be/included
another/directory
'';
sha256 = "0000000000000000000000000000000000000000000000000000";
};
}
```
## `fetchfossil` {#fetchfossil}
## `fetchfossil`
Used with Fossil. Expects `url` to a Fossil archive, `rev`, and `sha256`.
## `fetchcvs` {#fetchcvs}
## `fetchcvs`
Used with CVS. Expects `cvsRoot`, `tag`, and `sha256`.
## `fetchhg` {#fetchhg}
## `fetchhg`
Used with Mercurial. Expects `url`, `rev`, and `sha256`.
A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
## `fetchFromGitea` {#fetchfromgitea}
## `fetchFromGitHub`
`fetchFromGitea` expects five arguments. `domain` is the gitea server name. `owner` is a string corresponding to the Gitea user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every Gitea HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred.
## `fetchFromGitHub` {#fetchfromgithub}
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred.
`fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.
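For illustration, a call with placeholder owner, repository, and hash values might look like this sketch:
```nix
src = fetchFromGitHub {
  owner = "some-owner";
  repo = "some-repo";
  rev = "v1.0";  # a tag or a full commit hash
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```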
## `fetchFromGitLab` {#fetchfromgitlab}
## `fetchFromGitLab`
This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.
This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromGitiles` {#fetchfromgitiles}
## `fetchFromGitiles`
This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.
This is used with Gitiles repositories. The arguments expected are similar to fetchgit.
## `fetchFromBitbucket` {#fetchfrombitbucket}
## `fetchFromBitbucket`
This is used with BitBucket repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromSavannah` {#fetchfromsavannah}
## `fetchFromSavannah`
This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.
This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromRepoOrCz` {#fetchfromrepoorcz}
## `fetchFromRepoOrCz`
This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.
This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromSourcehut` {#fetchfromsourcehut}
## `fetchFromSourcehut`
This is used with sourcehut repositories. Similar to `fetchFromGitHub` above,
it expects `owner`, `repo`, `rev` and `sha256`, but don't forget the tilde (~)
in front of the username! Expected arguments also include `vc` ("git" (default)
or "hg"), `domain` and `fetchSubmodules`.
If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
respectively. Otherwise, the fetcher uses `fetchzip`.
This is used with sourcehut repositories. The arguments expected are very similar to fetchFromGitHub above. Don't forget the tilde (~) in front of the user name!
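Following the description above, a sketch with placeholder values could look like this:
```nix
src = fetchFromSourcehut {
  owner = "~someuser";  # note the leading tilde
  repo = "some-repo";
  rev = "v1.0";
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```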

View File

@@ -9,5 +9,4 @@
<xi:include href="images/dockertools.section.xml" />
<xi:include href="images/ocitools.section.xml" />
<xi:include href="images/snaptools.section.xml" />
<xi:include href="images/portableservice.section.xml" />
</chapter>

View File

@@ -2,7 +2,7 @@
`pkgs.appimageTools` is a set of functions for extracting and wrapping [AppImage](https://appimage.org/) files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, `pkgs.appimage-run` can be used as well.
::: {.warning}
::: warning
The `appimageTools` API is unstable and may be subject to backwards-incompatible changes in the future.
:::

View File

@@ -1,6 +1,6 @@
# pkgs.dockerTools {#sec-pkgs-dockerTools}
`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
## buildImage {#ssec-pkgs-dockerTools-buildImage}
@@ -20,12 +20,7 @@ buildImage {
fromImageName = null;
fromImageTag = "latest";
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ pkgs.redis ];
pathsToLink = [ "/bin" ];
};
contents = pkgs.redis;
runAsRoot = ''
#!${pkgs.runtimeShell}
mkdir -p /data
@@ -36,9 +31,6 @@ buildImage {
WorkingDir = "/data";
Volumes = { "/data" = { }; };
};
diskSize = 1024;
buildVMMemorySize = 512;
}
```
@@ -54,23 +46,19 @@ The above example will build a Docker image `redis/latest` from the given base i
- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will peek the first tag available for the base image.
- `copyToRoot` is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as `ADD contents/ /` in a `Dockerfile`. By default it's `null`.
- `contents` is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as `ADD contents/ /` in a `Dockerfile`. By default it's `null`.
- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This can be similarly seen as `RUN ...` in a `Dockerfile`.
> **_NOTE:_** Using this parameter requires the `kvm` device to be available.
- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
- `diskSize` is used to specify the disk size of the VM used to build the image in megabytes. By default it's 1024 MiB.
- `buildVMMemorySize` is used to specify the memory size of the VM to build the image in megabytes. By default it's 512 MiB.
- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
At the end of the process, only one new single layer will be produced and added to the resulting image.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
@@ -93,17 +81,13 @@ pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ pkgs.hello ];
pathsToLink = [ "/bin" ];
};
contents = pkgs.hello;
config.Cmd = [ "/bin/hello" ];
}
```
Now the Docker CLI will display a reasonable date and sort the images as expected:
and now the Docker CLI will display a reasonable date and sort the images as expected:
```ShellSession
$ docker images
@@ -111,7 +95,7 @@ REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
```
However, the produced images will not be binary reproducible.
however, the produced images will not be binary reproducible.
## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
@@ -135,13 +119,13 @@ Create a Docker image with many of the store paths being on their own layer to i
`contents` _optional_
: Top-level paths in the container. Either a single derivation, or a list of derivations.
: Top level paths in the container. Either a single derivation, or a list of derivations.
*Default:* `[]`
`config` _optional_
: Run-time configuration of the container. A full list of the options are available at in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
: Run-time configuration of the container. A full list of the options are available at in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
*Default:* `{}`
@@ -167,12 +151,6 @@ Create a Docker image with many of the store paths being on their own layer to i
: Shell commands to run while creating the archive for the final layer in a fakeroot environment. Unlike `extraCommands`, you can run `chown` to change the owners of the files in the archive, changing fakeroot's state instead of the real filesystem. The latter would require privileges that the build user does not have. Static binaries do not interact with the fakeroot environment. By default all files in the archive will be owned by root.
`enableFakechroot` _optional_
: Whether to run `fakeRootCommands` in `fakechroot`, making programs behave as though `/` is the root of the image being created, while files in the Nix store are available as usual. This allows scripts that perform installation in `/` to work as expected. Considering that `fakechroot` is implemented via the same mechanism as `fakeroot`, the same caveats apply.
*Default:* `false`
### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}
Each path directly listed in `contents` will have a symlink in the root of the image.
@@ -211,9 +189,9 @@ pkgs.dockerTools.buildLayeredImage {
Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
Modern Docker installations support up to 128 layers, but older versions support as few as 42.
Modern Docker installations support up to 128 layers, however older versions support as few as 42.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However it will be impossible to extend the image further.
The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration.
@@ -229,7 +207,7 @@ The image produced by running the output script can be piped directly into `dock
$(nix-build) | docker load
```
Alternatively, the image be piped via `gzip` into `skopeo`, e.g., to copy it into a registry:
Alternatively, the image be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
@@ -308,44 +286,7 @@ The parameters relative to the base image have the same synopsis as described in
The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.
## Environment Helpers {#ssec-pkgs-dockerTools-helpers}
Some packages expect certain files to be available globally.
When building an image from scratch (i.e. without `fromImage`), these files are missing.
`pkgs.dockerTools` provides some helpers to set up an environment with the necessary files.
You can include them in `copyToRoot` like this:
```nix
buildImage {
name = "environment-example";
copyToRoot = with pkgs.dockerTools; [
usrBinEnv
binSh
caCertificates
fakeNss
];
}
```
### usrBinEnv {#sssec-pkgs-dockerTools-helpers-usrBinEnv}
This provides the `env` utility at `/usr/bin/env`.
### binSh {#sssec-pkgs-dockerTools-helpers-binSh}
This provides `bashInteractive` at `/bin/sh`.
### caCertificates {#sssec-pkgs-dockerTools-helpers-caCertificates}
This sets up `/etc/ssl/certs/ca-certificates.crt`.
### fakeNss {#sssec-pkgs-dockerTools-helpers-fakeNss}
Provides `/etc/passwd` and `/etc/group` that contain root and nobody.
Useful when packaging binaries that insist on using nss to look up
username/groups (like nginx).
### shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}
## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}
This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for being used in a [`buildImage` `runAsRoot`](#ex-dockerTools-buildImage-runAsRoot) script for cases like in the example below:
@@ -355,7 +296,7 @@ buildImage {
runAsRoot = ''
#!${pkgs.runtimeShell}
${pkgs.dockerTools.shadowSetup}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
@@ -365,32 +306,3 @@ buildImage {
```
Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
## fakeNss {#ssec-pkgs-dockerTools-fakeNss}
If your primary goal is providing a basic skeleton for user lookups to work,
and/or a lesser privileged user, adding `pkgs.fakeNss` to
the container image root might be the better choice than a custom script
running `useradd` and friends.
It provides a `/etc/passwd` and `/etc/group`, containing `root` and `nobody`
users and groups.
It also provides a `/etc/nsswitch.conf`, configuring NSS host resolution to
first check `/etc/hosts`, before checking DNS, as the default in the absence of
a config file (`dns [!UNAVAIL=return] files`) is quite unexpected.
You can pair it with `binSh`, which provides `bin/sh` as a symlink
to `bashInteractive` (as `/bin/sh` is configured as a shell).
```nix
buildImage {
name = "shadow-basic";
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ binSh pkgs.fakeNss ];
pathsToLink = [ "/bin" "/etc" "/var" ];
};
}
```

View File

@@ -1,10 +1,10 @@
# pkgs.ociTools {#sec-pkgs-ociTools}
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
## buildContainer {#ssec-pkgs-ociTools-buildContainer}
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The nix store of the container will contain all referenced dependencies of the given command.
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory.The nix store of the container will contain all referenced dependencies of the given command.
The parameters of `buildContainer` with an example value are described below:
@@ -30,7 +30,7 @@ buildContainer {
}
```
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container
- `mounts` specifies additional mount points chosen by the user. By default only a minimal set of necessary filesystems are mounted into the container (e.g procfs, cgroupfs)

View File

@@ -1,81 +0,0 @@
# pkgs.portableService {#sec-pkgs-portableService}
`pkgs.portableService` is a function to create _portable service images_,
as read-only, immutable, `squashfs` archives.
systemd supports a concept of [Portable Services](https://systemd.io/PORTABLE_SERVICES/).
Portable Services are a delivery method for system services that uses two specific features of container management:
* Applications are bundled. I.e. multiple services, their binaries and
all their dependencies are packaged in an image, and are run directly from it.
* Stricter default security policies, i.e. sandboxing of applications.
This allows using Nix to build images which can be run on many recent Linux distributions.
The primary tool for interacting with Portable Services is `portablectl`,
and they are managed by the `systemd-portabled` system service.
:::{.note}
Portable services are supported starting with systemd 239 (released on 2018-06-22).
:::
A very simple example of using `portableService` is described below:
[]{#ex-pkgs-portableService}
```nix
pkgs.portableService {
pname = "demo";
version = "1.0";
units = [ demo-service demo-socket ];
}
```
The above example will build a squashfs archive image in `result/$pname_$version.raw`. The image will contain the
file system structure as required by the portable service specification, and a subset of the Nix store with all the
dependencies of the two derivations in the `units` list.
`units` must be a list of derivations, and their names must be prefixed with the service name (`"demo"` in this case).
Otherwise `systemd-portabled` will ignore them.
:::{.Note}
The `.raw` file extension of the image is required by the portable services specification.
:::
Some other options available are:
- `description`, `homepage`
Are added to the `/etc/os-release` in the image and are shown by the portable services tooling.
Default to empty values, not added to os-release.
- `symlinks`
A list of attribute sets {object, symlink}. Symlinks will be created in the root filesystem of the image to
objects in the Nix store. Defaults to an empty list.
- `contents`
A list of additional derivations to be included in the image Nix store, as-is. Defaults to an empty list.
- `squashfsTools`
Defaults to `pkgs.squashfsTools`, allows you to override the package that provides `mksquashfs`.
- `squash-compression`, `squash-block-size`
Options to `mksquashfs`. Default to `"xz -Xdict-size 100%"` and `"1M"` respectively.
A typical usage of `symlinks` would be:
```nix
symlinks = [
{ object = "${pkgs.cacert}/etc/ssl"; symlink = "/etc/ssl"; }
{ object = "${pkgs.bash}/bin/bash"; symlink = "/bin/sh"; }
{ object = "${pkgs.php}/bin/php"; symlink = "/usr/bin/php"; }
];
```
to create these symlinks for legacy applications that assume they exist globally.
Once the image is created, and deployed on a host in `/var/lib/portables/`, you can attach the image and run the service. As root run:
```console
portablectl attach demo_1.0.raw
systemctl enable --now demo.socket
systemctl enable --now demo.service
```
:::{.Note}
See the [man page](https://www.freedesktop.org/software/systemd/man/portablectl.html) of `portablectl` for more info on its usage.
:::

View File

@@ -14,7 +14,7 @@ Currently, `makeSnap` does not support creating GUI stubs.
The following expression packages GNU Hello as a Snapcraft snap.
``` {#ex-snapTools-buildSnap-hello .nix}
```{#ex-snapTools-buildSnap-hello .nix}
let
inherit (import <nixpkgs> { }) snapTools hello;
in snapTools.makeSnap {
@@ -33,9 +33,9 @@ in snapTools.makeSnap {
## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}
Graphical programs require many more integrations with the host. This example uses Firefox as an example because it is one of the most complicated programs we could package.
Graphical programs require many more integrations with the host. This example uses Firefox as an example, because it is one of the most complicated programs we could package.
``` {#ex-snapTools-buildSnap-firefox .nix}
```{#ex-snapTools-buildSnap-firefox .nix}
let
inherit (import <nixpkgs> { }) snapTools firefox;
in snapTools.makeSnap {

View File

@@ -1,6 +1,6 @@
# Cataclysm: Dark Days Ahead {#cataclysm-dark-days-ahead}
## How to install Cataclysm DDA {#how-to-install-cataclysm-dda}
## How to install Cataclysm DDA
To install the latest stable release of Cataclysm DDA to your profile, execute
`nix-env -f "<nixpkgs>" -iA cataclysm-dda`. For the curses build (build
@@ -34,7 +34,7 @@ cataclysm-dda.override {
}
```
## Important note for overriding packages {#important-note-for-overriding-packages}
## Important note for overriding packages
After applying `overrideAttrs`, you need to fix `passthru.pkgs` and
`passthru.withMods` attributes either manually or by using `attachPkgs`:
@@ -69,7 +69,7 @@ in
goodExample2.withMods (_: []) # parallel building enabled
```
## Customizing with mods {#customizing-with-mods}
## Customizing with mods
To install Cataclysm DDA with mods of your choice, you can use `withMods`
attribute:

View File

@@ -4,13 +4,13 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a
## Basic usage {#sec-citrix-base}
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
The tarball archive needs to be downloaded manually as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store the package can be built and installed with Nix.
## Citrix Self-service {#sec-citrix-selfservice}
## Citrix Selfservice {#sec-citrix-selfservice}
The [self-service](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions.
The [selfservice](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions.
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this:
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this:
```ShellSession
$ storebrowse -C ~/Downloads/receiverconfig.cr
@@ -19,7 +19,7 @@ $ selfservice
## Custom certificates {#sec-citrix-custom-certs}
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
```nix
with import <nixpkgs> { config.allowUnfree = true; };

View File

@@ -8,9 +8,9 @@ Nixpkgs provides a number of packages that will install Eclipse in its various f
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
```
Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resemble a manually installed Eclipse.
Once an Eclipse variant is installed it can be run using the `eclipse` command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resemble a manually installed Eclipse.
If you prefer to install plugins in a more declarative manner, then Nixpkgs also offer a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is a one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add:
If you prefer to install plugins in a more declarative manner then Nixpkgs also offer a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is a one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
```nix
packageOverrides = pkgs: {
@@ -22,15 +22,15 @@ packageOverrides = pkgs: {
}
```
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running:
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running
```ShellSession
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
```
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument, where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available, then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
Expanding the previous example with two plugins using the above functions, we have:
Expanding the previous example with two plugins using the above functions we have
```nix
packageOverrides = pkgs: {

View File

@@ -1,11 +1,11 @@
# Elm {#sec-elm}
To start a development environment, run:
To start a development environment do
```ShellSession
nix-shell -p elmPackages.elm elmPackages.elm-format
```
To update the Elm compiler, see `nixpkgs/pkgs/development/compilers/elm/README.md`.
To update the Elm compiler, see <filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>.
To package Elm applications, [read about elm2nix](https://github.com/hercules-ci/elm2nix#elm2nix).

View File

@@ -20,7 +20,7 @@ The Emacs package comes with some extra helpers to make it easier to configure.
}
```
You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provide a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
```nix
{
@@ -101,16 +101,16 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t
}
```
This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command.
This provides a fairly full Emacs start file. It will load in addition to the user's presonal config. You can always disable it by passing `-q` to the Emacs command.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package-basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control this priorities when some package is installed as a dependency. You can override it on per-package-basis, providing all the required dependencies manually - but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package you can use `overrideScope'`.
```nix
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesFor emacs).overrideScope' overrides).withPackages
((emacsPackagesFor emacs).overrideScope' overrides).emacs.pkgs.withPackages
(p: with p; [
# here both these package will use haskell-mode of our own choice
ghc-mod

View File

@@ -1,18 +0,0 @@
# /etc files {#etc}
Certain calls in glibc require access to runtime files found in `/etc` such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
On non-NixOS distributions these files are typically provided by packages (e.g., [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if it relies on these files being present.
If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your `buildInputs`, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a setup hook.
```bash
> nix-shell -p iana-etc
[nix-shell:~]$ env | grep NIX_ETC
NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/services
NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
```
Nixpkgs' version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not* set, then it will attempt to find the files at the default location within `/etc`.
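For example, a package whose build or test suite calls `getprotobyname` can pick these files up simply by listing `iana-etc` in its build inputs. A minimal sketch, with a hypothetical package name and source:

```nix
{ stdenv, iana-etc }:

stdenv.mkDerivation {
  pname = "protocol-lookup-demo"; # hypothetical package
  version = "0.1";
  src = ./.;
  # The iana-etc setup hook exports NIX_ETC_PROTOCOLS and NIX_ETC_SERVICES
  # during the build, so the patched glibc can resolve protocol/service names.
  buildInputs = [ iana-etc ];
}
```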

View File

@@ -1,13 +1,12 @@
# Firefox {#sec-firefox}
## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies}
## Build wrapped Firefox with extensions and policies
The `wrapFirefox` function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of `fetchFirefoxAddon` this allows building a Firefox version that already comes with add-ons pre-installed:
```nix
{
# Nix firefox addons only work with the firefox-esr package.
myFirefox = wrapFirefox firefox-esr-unwrapped {
myFirefox = wrapFirefox firefox-unwrapped {
nixExtensions = [
(fetchFirefoxAddon {
name = "ublock"; # Has to be unique!
@@ -26,14 +25,10 @@ The `wrapFirefox` function allows to pass policies, preferences and extensions t
Pocket = false;
Snippets = false;
};
UserMessaging = {
ExtensionRecommendations = false;
SkipOnboarding = true;
};
SecurityDevices = {
# Use a proxy module rather than `nixpkgs.config.firefox.smartcardSupport = true`
"PKCS#11 Proxy Module" = "${pkgs.p11-kit}/lib/p11-kit-proxy.so";
};
UserMessaging = {
ExtensionRecommendations = false;
SkipOnboarding = true;
};
};
extraPrefs = ''
@@ -44,12 +39,11 @@ The `wrapFirefox` function allows to pass policies, preferences and extensions t
}
```
If `nixExtensions != null`, then all manually installed add-ons will be uninstalled from your browser profile.
To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox URL bar: `about:policies#documentation`.
Nix installed add-ons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded add-ons are checksummed and manual add-ons can't be installed. Also, make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an add-on from the `nixExtensions` array, rebuild and start Firefox: the removed add-on will be completely removed with all of its settings.
If `nixExtensions != null` then all manually installed addons will be uninstalled from your browser profile.
To view available enterprise policies visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox url bar: `about:policies#documentation`.
Nix installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded addons are checksummed and manual addons can't be installed. Also make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an addon from the `nixExtensions` array, rebuild and start Firefox: the removed addon will be completely removed with all of its settings.
## Troubleshooting {#sec-firefox-troubleshooting}
If add-ons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox does not provide the ability anymore to disable signature verification for add-ons thus nix add-ons get disabled by the normal Firefox binary.
If addons do not appear installed although they have been defined in your nix configuration file reset the local addon state of your Firefox profile by clicking `help -> restart with addons disabled -> restart -> refresh firefox`. This can happen if you switch from manual addon mode to nix addon mode and then back to manual mode and then again to nix addon mode.
If add-ons do not appear installed despite being defined in your nix configuration file, reset the local add-on state of your Firefox profile by clicking `Help -> More Troubleshooting Information -> Refresh Firefox`. This can happen if you switch from manual add-on mode to nix add-on mode and then back to manual mode and then again to nix add-on mode.

View File

@@ -36,7 +36,7 @@ using `buildFishPlugin` and running unit tests with the `fishtape` test runner.
## Fish wrapper {#sec-fish-wrapper}
The `wrapFish` package is a wrapper around Fish which can be used to create
Fish shells initialized with some plugins as well as completions, configuration
Fish shells initialised with some plugins as well as completions, configuration
snippets and functions sourced from the given paths. This provides a convenient
way to test Fish plugins and scripts without having to alter the environment.
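For instance, a wrapped Fish for trying out a single plugin might look like the sketch below; the attribute names (`pluginPkgs`, `confDirs`) and the chosen plugin are assumptions based on the description above. The resulting package provides a `fish` binary with the plugin and configuration already loaded.

```nix
wrapFish {
  # plugins to initialise the shell with (assumed attribute name)
  pluginPkgs = with fishPlugins; [ pure ];
  # extra configuration snippets sourced at startup (assumed attribute name)
  confDirs = [ ./conf.d ];
}
```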

View File

@@ -24,10 +24,10 @@ packages on macOS:
checking for fuse.h... no
configure: error: No fuse.h found.
This happens on autoconf based projects that use `AC_CHECK_HEADERS` or
`AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package
is included in `buildInputs`. It happens because libfuse headers throw an error
on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do define
`FUSE_USE_VERSION`, but only inside C source files. This results in the above
error at configure time because the configure script would attempt to compile
sample FUSE programs without defining `FUSE_USE_VERSION`.
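A common workaround, sketched here under the assumption that the project honours `NIX_CFLAGS_COMPILE`, is to define the macro for the configure run as well; the package name and macro value are only illustrative:

```nix
# Hypothetical override; pick the FUSE_USE_VERSION the project expects.
someFusePackage.overrideAttrs (old: {
  NIX_CFLAGS_COMPILE = (old.NIX_CFLAGS_COMPILE or "") + " -DFUSE_USE_VERSION=26";
})
```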

View File

@@ -6,7 +6,7 @@ This package is an ibus-based completion method to speed up typing.
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
On NixOS you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
```nix
{ pkgs, ... }: {
@@ -19,7 +19,7 @@ On NixOS, you need to explicitly enable `ibus` with given engines before customi
## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell}
The IBus engine is based on `hunspell` to support completion in many languages. By default, the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
The IBus engine is based on `hunspell` to support completion in many languages. By default the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
```nix
ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
@@ -31,7 +31,7 @@ _Note: each language passed to `langs` must be an attribute name in `pkgs.hunspe
The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed:
On NixOS, it can be installed using the following expression:
On NixOS it can be installed using the following expression:
```nix
{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }

View File

@@ -17,7 +17,6 @@
<xi:include href="kakoune.section.xml" />
<xi:include href="linux.section.xml" />
<xi:include href="locales.section.xml" />
<xi:include href="etc-files.section.xml" />
<xi:include href="nginx.section.xml" />
<xi:include href="opengl.section.xml" />
<xi:include href="shell-helpers.section.xml" />

View File

@@ -4,7 +4,7 @@ The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/ke
The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernels `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
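For example, a kernel with one extra patch could be built from a sketch like the following; the patch name, file and config option are placeholders, and it assumes the kernel expression is wired up with `callPackage` so that `.override` can replace the `kernelPatches` argument (which also replaces the default patch set):

```nix
linux_latest.override {
  kernelPatches = [
    {
      name = "example-fix";          # included in the kernel's meta.description
      patch = ./example-fix.patch;   # the patch itself (possibly compressed)
      # extra options appended to the kernel configuration file (.config)
      extraConfig = ''
        EXAMPLE_OPTION y
      '';
    }
  ];
}
```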
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isnt enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesnt have to build the external `iwlwifi` package:
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isnt enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesnt have to build the external `iwlwifi` package:
```nix
modulesTree = [kernel]
@@ -14,28 +14,28 @@ modulesTree = [kernel]
How to add a new (major) version of the Linux kernel to Nixpkgs:
1. Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it.
1. Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. `linux-2.6.22.nix`) and update it.
2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
2. Add the new kernel to `all-packages.nix` (e.g., create an attribute `kernel_2_6_22`).
3. Now were going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
1. Make a copy from the old config (e.g., `config-2.6.21-i686-smp`) to the new one (e.g., `config-2.6.22-i686-smp`).
1. Make a copy from the old config (e.g. `config-2.6.21-i686-smp`) to the new one (e.g. `config-2.6.22-i686-smp`).
2. Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
2. Copy the config file for this platform (e.g. `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., dont enable some feature on `i686` and disable it on `x86_64`).
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e. dont enable some feature on `i686` and disable it on `x86_64`).
4. If needed, you can also run `make menuconfig`:
4. If needed you can also run `make menuconfig`:
```ShellSession
$ nix-env -f "<nixpkgs>" -iA ncurses
$ nix-env -i ncurses
$ export NIX_CFLAGS_LINK=-lncurses
$ make menuconfig ARCH=arch
```
5. Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`).
5. Copy `.config` over the new config file (e.g. `config-2.6.22-i686-smp`).
4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
4. Test building the kernel: `nix-build -A kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `linux-kernels.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages arent backwards compatible with older kernels, you may need to keep the older versions around.
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `all-packages.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages arent backwards compatible with older kernels, you may need to keep the older versions around.

View File

@@ -1,5 +1,5 @@
# Locales {#locales}
To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on `LOCALE_ARCHIVE` environment variable.
To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats Nixpkgs patches `glibc` to rely on `LOCALE_ARCHIVE` environment variable.
On non-NixOS distributions, this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package.
On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package.
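A reduced locale archive can be produced by overriding exactly those parameters; the locale names below are examples:

```nix
# Build only the listed locales instead of the full archive.
glibcLocales.override {
  allLocales = false;
  locales = [ "en_US.UTF-8/UTF-8" "de_DE.UTF-8/UTF-8" ];
}
```

Pointing `LOCALE_ARCHIVE` at the resulting `lib/locale/locale-archive` keeps the closure small.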

View File

@@ -4,8 +4,8 @@
## ETags on static files served from the Nix store {#sec-nginx-etag}
HTTP has a couple of different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store because all file timestamps are set to 0 (for reasons related to build reproducibility).
HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility).
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g., a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to do modify any configuration to get this behavior.

View File

@@ -4,12 +4,12 @@ OpenGL support varies depending on which hardware is used and which drivers are
Broadly, we support both GL vendors: Mesa and NVIDIA.
## NixOS Desktop {#nixos-desktop}
## NixOS Desktop
The NixOS desktop or other non-headless configurations are the primary target for OpenGL libraries and applications. The current solution for discovering which drivers are available is based on [libglvnd](https://gitlab.freedesktop.org/glvnd/libglvnd). `libglvnd` performs "vendor-neutral dispatch", trying a variety of techniques to find the system's GL implementation. In practice, this will be either via standard GLX for X11 users or EGL for Wayland users, and supporting either NVIDIA or Mesa extensions.
## Nix on GNU/Linux {#nix-on-gnulinux}
## Nix on GNU/Linux
If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
For proprietary video drivers, you might have luck with also adding the corresponding video driver package.
For proprietary video drivers you might have luck with also adding the corresponding video driver package.
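One way to do this is a small launcher script; the sketch below uses `writeShellScriptBin`, assumes the program to launch is called `my-gl-app`, and assumes the drivers live under `lib` in the `mesa.drivers` output:

```nix
# Hypothetical launcher that puts Nixpkgs' GL libraries first on the search path.
pkgs.writeShellScriptBin "my-gl-app-with-nix-gl" ''
  export LD_LIBRARY_PATH=${pkgs.lib.makeLibraryPath [ pkgs.libglvnd pkgs.mesa.drivers ]}:$LD_LIBRARY_PATH
  exec my-gl-app "$@"
''
```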

View File

@@ -4,7 +4,7 @@ Some packages provide the shell integration to be more useful. But unlike other
- `fzf` : `fzf-share`
E.g. `fzf` can then be used in the `.bashrc` like this:
```bash
source "$(fzf-share)/completion.bash"

View File

@@ -2,25 +2,24 @@
## Steam in Nix {#sec-steam-nix}
Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the steam binary, which is also in `$HOME`.
Nix problems and constraints:
- We don't have `/bin/bash` and many scripts point there. Same thing for `/usr/bin/python`.
- We don't have `/bin/bash` and many scripts point there. Similarly for `/usr/bin/python`.
- We don't have the dynamic loader in `/lib`.
- The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam.
- The steam binary cannot be patched, it's also checked.
The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment.
## How to play {#sec-steam-play}
Use `programs.steam.enable = true;` if you want to add steam to `systemPackages` and also enable a few workarounds as well as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller.
Use `programs.steam.enable = true;` if you want to add steam to systemPackages and also enable a few workarounds as well as Steam controller support or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller.
## Troubleshooting {#sec-steam-troub}
- **Steam fails to start. What do I do?**
Try to run
```ShellSession
@@ -32,31 +31,38 @@ Use `programs.steam.enable = true;` if you want to add steam to `systemPackages`
- **Using the FOSS Radeon or nouveau (nvidia) drivers**
- The `newStdcpp` parameter was removed since NixOS 17.09 and should not be needed anymore.
- Steam ships statically linked with a version of `libcrypto` that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error:
- Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error
```
steam.sh: line 713: 7842 Segmentation fault (core dumped)
```
have a look at [this pull request](https://github.com/NixOS/nixpkgs/pull/20269).
- **Java**
1. There is no java in steam chrootenv by default. If you get a message like:
1. There is no java in steam chrootenv by default. If you get a message like
```
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
```
```
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
```
you need to add:
You need to add
```nix
steam.override { withJava = true; };
```
```nix
steam.override { withJava = true; };
```
## steam-run {#sec-steam-run}
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with:
The FHS-compatible chroot used for steam can also be used to run other linux games that expect a FHS environment. To do it, add
```nix
pkgs.steam.override ({
nativeOnly = true;
newStdcpp = true;
}).run
```
to your configuration, rebuild, and run the game with
```
steam-run ./foo

View File

@@ -0,0 +1,13 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="unfree-software">
<title>Unfree software</title>
<para>
All users of Nixpkgs are free software users, and many users (and developers) of Nixpkgs want to limit and tightly control their exposure to unfree software. At the same time, many users need (or want) to run some specific pieces of proprietary software. Nixpkgs includes some expressions for unfree software packages. By default unfree software cannot be installed and doesnt show up in searches. To allow installing unfree software in a single Nix invocation one can export <literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
</para>
<para>
Fine-grained control is possible by defining <literal>allowUnfreePredicate</literal> function in config; it takes the <literal>mkDerivation</literal> parameter attrset and returns <literal>true</literal> for unfree packages that should be allowed.
</para>
</section>
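For illustration, such a predicate might look like this in `~/.config/nixpkgs/config.nix`; the listed package names are placeholders only:

```nix
{
  # Allow exactly these unfree packages and nothing else.
  allowUnfreePredicate = pkg: builtins.elem
    (pkg.pname or (builtins.parseDrvName pkg.name).name)
    [ "example-unfree-tool" "example-unfree-driver" ];
}
```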

View File

@@ -4,7 +4,7 @@ Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
## Configuring urxvt {#sec-urxvt-conf}
In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as:
In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as
```nix
rxvt-unicode.override {
@@ -58,14 +58,14 @@ rxvt-unicode.override {
## Packaging urxvt plugins {#sec-urxvt-pkg}
Urxvt plugins resides in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
Urxvt plugins resides in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
A plugin can be any kind of derivation; the only requirement is that it should always install Perl scripts in `$out/lib/urxvt/perl`. Look for existing plugins for examples.
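As an illustration, a plugin consisting of a single local Perl script could be packaged with a sketch like this (name and source are placeholders):

```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "urxvt-example-plugin";
  version = "unstable-2021-05-01";
  src = ./.; # directory containing the example-plugin script
  installPhase = ''
    mkdir -p $out/lib/urxvt/perl
    cp example-plugin $out/lib/urxvt/perl/
  '';
}
```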
If the plugin is itself a Perl package that needs to be imported from other plugins or scripts, add the following passthrough:
If the plugin is itself a perl package that needs to be imported from other plugins or scripts, add the following passthrough:
```nix
passthru.perlPackages = [ "self" ];
```
This will make the urxvt wrapper pick up the dependency and set up the Perl path accordingly.
This will make the urxvt wrapper pick up the dependency and set up the perl path accordingly.

View File

@@ -1,6 +1,6 @@
# WeeChat {#sec-weechat}
# Weechat {#sec-weechat}
WeeChat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration, such as:
Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
```nix
weechat.override {configure = {availablePlugins, ...}: {
@@ -13,7 +13,7 @@ If the `configure` function returns an attrset without the `plugins` attribute,
The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`.
The Python and Perl plugins allows the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
The python and perl plugins allows the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
```nix
weechat.override { configure = {availablePlugins, ...}: {
@@ -41,7 +41,7 @@ weechat.override {
configure = { availablePlugins, ... }: {
init = ''
/set foo bar
/server add libera irc.libera.chat
/server add freenode chat.freenode.org
'';
};
}
@@ -49,7 +49,7 @@ weechat.override {
Further values can be added to the list of commands when running `weechat --run-command "your-commands"`.
Additionally, it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
Additionally it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
```nix
weechat.override {
@@ -64,7 +64,7 @@ weechat.override {
}
```
In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute, which contains a list of all scripts inside the store path. Furthermore, all scripts have to live in `$out/share`. An exemplary derivation looks like this:
In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute which contains a list of all scripts inside the store path. Furthermore all scripts have to live in `$out/share`. An exemplary derivation looks like this:
```nix
{ stdenv, fetchurl }:

View File

@@ -2,7 +2,7 @@
The Nix expressions for the X.org packages reside in `pkgs/servers/x11/xorg/default.nix`. This file is automatically generated from lists of tarballs in an X.org release. As such it should not be modified directly; rather, you should modify the lists, the generator script or the file `pkgs/servers/x11/xorg/overrides.nix`, in which you can override or add to the derivations produced by the generator.
## Katamari Tarballs {#katamari-tarballs}
## Katamari Tarballs
X.org upstream releases used to include [katamari](https://en.wiktionary.org/wiki/%E3%81%8B%E3%81%9F%E3%81%BE%E3%82%8A) releases, which included a holistic recommended version for each tarball, up until 7.7. To create a list of tarballs in a katamari release:
@@ -14,11 +14,11 @@ cat $(PRINT_PATH=1 nix-prefetch-url $url | tail -n 1) \
| sort > "tarballs-$release.list"
```
## Individual Tarballs {#individual-tarballs}
## Individual Tarballs
The upstream release process for [X11R7.8](https://x.org/wiki/Releases/7.8/) does not include a planned katamari. Instead, each component of X.org is released as its own tarball. We maintain `pkgs/servers/x11/xorg/tarballs.list` as a list of tarballs for each individual package. This list includes X.org core libraries and protocol descriptions, extra newer X11 interface libraries, like `xorg.libxcb`, and classic utilities which are largely unused but still available if needed, like `xorg.imake`.
## Generating Nix Expressions {#generating-nix-expressions}
## Generating Nix Expressions
The generator is invoked as follows:
@@ -29,6 +29,6 @@ cd pkgs/servers/x11/xorg
For each of the tarballs in the `.list` files, the script downloads it, unpacks it, and searches its `configure.ac` and `*.pc.in` files for dependencies. This information is used to generate `default.nix`. The generator caches downloaded tarballs between runs. Pay close attention to the `NOT FOUND: $NAME` messages at the end of the run, since they may indicate missing dependencies. (Some might be optional dependencies, however.)
## Overriding the Generator {#overriding-the-generator}
## Overriding the Generator
If the expression for a package requires derivation attributes that the generator cannot figure out automatically (say, `patches` or a `postInstall` hook), you should modify `pkgs/servers/x11/xorg/overrides.nix`.

View File

@@ -18,8 +18,6 @@
Additional commands to be executed for finalizing the derivation with runner script.
- `runScript`
A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to `bash`.
- `profile`
Optional script for `/etc/profile` within the sandbox.
One can create a simple environment using a `shell.nix` like that:
@@ -30,7 +28,7 @@ One can create a simple environment using a `shell.nix` like that:
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsa-lib
alsaLib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
@@ -38,12 +36,10 @@ One can create a simple environment using a `shell.nix` like that:
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsa-lib
alsaLib
]);
runScript = "bash";
}).env
```
Running `nix-shell` would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change `runScript` to the application path, e.g. `./bin/start.sh` -- relative paths are supported.
Additionally, the FHS builder links all relocated gsettings-schemas (the glib setup-hook moves them to `share/gsettings-schemas/${name}/glib-2.0/schemas`) to their standard FHS location. This means you don't need to wrap binaries with `wrapGAppsHook`.

View File

@@ -1,37 +1,17 @@
# pkgs.mkShell {#sec-pkgs-mkShell}
`pkgs.mkShell` is a specialized `stdenv.mkDerivation` that removes some
repetition when using it with `nix-shell` (or `nix develop`).
`pkgs.mkShell` is a special kind of derivation that is only useful when used
in combination with `nix-shell`. It will in fact fail to instantiate when invoked
with `nix-build`.
## Usage {#sec-pkgs-mkShell-usage}
Here is a common usage example:
```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
# specify which packages to add to the shell environment
packages = [ pkgs.gnumake ];
inputsFrom = [ pkgs.hello pkgs.gnutar ];
shellHook = ''
export DEBUG=1
'';
# add all the dependencies, of the given packages, to the shell environment
inputsFrom = with pkgs; [ hello gnutar ];
}
```
## Attributes
* `name` (default: `nix-shell`). Set the name of the derivation.
* `packages` (default: `[]`). Add executable packages to the `nix-shell` environment.
* `inputsFrom` (default: `[]`). Add build dependencies of the listed derivations to the `nix-shell` environment.
* `shellHook` (default: `""`). Bash statements that are executed by `nix-shell`.
... all the attributes of `stdenv.mkDerivation`.
## Building the shell
This derivation output will contain a text file that contains a reference to
all the build inputs. This is useful in CI where we want to make sure that
every derivation, and its dependencies, build properly. Or when creating a GC
root so that the build dependencies don't get garbage-collected.

View File

@@ -1,134 +0,0 @@
# Testers {#chap-testers}
This chapter describes several testing builders which are available in the <literal>testers</literal> namespace.
## `testVersion` {#tester-testVersion}
Checks that the command output contains the specified version.
Although simplistic, this test assures that the main program
can run. While there's no substitute for a real test case,
it does catch dynamic linking errors and such. It also provides
some protection against accidentally building the wrong version,
for example when using an 'old' hash in a fixed-output derivation.
Examples:
```nix
passthru.tests.version = testers.testVersion { package = hello; };
passthru.tests.version = testers.testVersion {
package = seaweedfs;
command = "weed version";
};
passthru.tests.version = testers.testVersion {
package = key;
command = "KeY --help";
# Wrong '2.5' version in the code. Drop on next version.
version = "2.5";
};
passthru.tests.version = testers.testVersion {
package = ghr;
# The output needs to contain the 'version' string without any prefix or suffix.
version = "v${version}";
};
```
## `testEqualDerivation` {#tester-testEqualDerivation}
Checks that two packages produce the exact same build instructions.
This can be used to make sure that a certain difference of configuration,
such as the presence of an overlay does not cause a cache miss.
When the derivations are equal, the return value is an empty file.
Otherwise, the build log explains the difference via `nix-diff`.
Example:
```nix
testers.testEqualDerivation
"The hello package must stay the same when enabling checks."
hello
(hello.overrideAttrs(o: { doCheck = true; }))
```
## `invalidateFetcherByDrvHash` {#tester-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch every time the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:
```nix
tests.fetchgit = testers.invalidateFetcherByDrvHash fetchgit {
name = "nix-source";
url = "https://github.com/NixOS/nix";
rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```
## `nixosTest` {#tester-nixosTest}
Run a NixOS VM network test using this evaluation of Nixpkgs.
NOTE: This function is primarily for external use. NixOS itself uses `make-test-python.nix` directly. Packages defined in Nixpkgs [reuse NixOS tests via `nixosTests`, plural](#ssec-nixos-tests-linking).
It is mostly equivalent to the function `import ./make-test-python.nix` from the
[NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests),
except that the current application of Nixpkgs (`pkgs`) will be used, instead of
letting NixOS invoke Nixpkgs anew.
If a test machine needs to set NixOS options under `nixpkgs`, it must set only the
`nixpkgs.pkgs` option.
### Parameter
A [NixOS VM test network](https://nixos.org/nixos/manual/index.html#sec-nixos-tests), or path to it. Example:
```nix
{
name = "my-test";
nodes = {
machine1 = { lib, pkgs, nodes, ... }: {
environment.systemPackages = [ pkgs.hello ];
services.foo.enable = true;
};
# machine2 = ...;
};
testScript = ''
start_all()
machine1.wait_for_unit("foo.service")
machine1.succeed("hello | foo-send")
'';
}
```
### Result
A derivation that runs the VM test.
Notable attributes:
* `nodes`: the evaluated NixOS configurations. Useful for debugging and exploring the configuration.
* `driverInteractive`: a script that launches an interactive Python session in the context of the `testScript`.

View File

@@ -35,10 +35,10 @@ This works just like `runCommand`. The only difference is that it also provides
## `runCommandLocal` {#trivial-builder-runCommandLocal}
Variant of `runCommand` that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build.
::: {.note}
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g., just copying some files to a different location or adding symlinks) because there the `system` is usually the same as `builtins.currentSystem`.
::: note
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the `system` is usually the same as `builtins.currentSystem`.
:::
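For instance, a trivial local-only derivation might look like this sketch:

```nix
# Builds locally and writes a single file to the store.
runCommandLocal "hello-file" { } ''
  echo "hello" > $out
''
```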
## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText}
@@ -47,133 +47,9 @@ These functions write `text` to the Nix store. This is useful for creating scrip
Many more commands wrap `writeTextFile` including `writeText`, `writeTextDir`, `writeScript`, and `writeScriptBin`. These are convenience functions over `writeTextFile`.
Here are a few examples:
```nix
# Writes my-file to /nix/store/<store path>
writeTextFile {
name = "my-file";
text = ''
Contents of File
'';
}
# See also the `writeText` helper function below.
# Writes executable my-file to /nix/store/<store path>/bin/my-file
writeTextFile {
name = "my-file";
text = ''
Contents of File
'';
executable = true;
destination = "/bin/my-file";
}
# Writes contents of file to /nix/store/<store path>
writeText "my-file"
''
Contents of File
'';
# Writes contents of file to /nix/store/<store path>/share/my-file
writeTextDir "share/my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path> and makes executable
writeScript "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
writeScriptBin "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path> and makes executable.
writeShellScript "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
writeShellScriptBin "my-file"
''
Contents of File
'';
```
## `concatTextFile`, `concatText`, `concatScript` {#trivial-builder-concatText}
These functions concatenate `files` into a single file in the Nix store. This is useful for configuration files structured in lines of text. `concatTextFile` takes an attribute set and expects two arguments, `name` and `files`. `name` corresponds to the name used in the Nix store path. `files` will be the files to be concatenated. You can also set `executable` to true to make this file have the executable bit set.
`concatText` and `concatScript` are simple wrappers over `concatTextFile`.
Here are a few examples:
```nix
# Writes my-file to /nix/store/<store path>
concatTextFile {
name = "my-file";
files = [ drv1 "${drv2}/path/to/file" ];
}
# See also the `concatText` helper function below.
# Writes executable my-file to /nix/store/<store path>/bin/my-file
concatTextFile {
name = "my-file";
files = [ drv1 "${drv2}/path/to/file" ];
executable = true;
destination = "/bin/my-file";
}
# Writes contents of files to /nix/store/<store path>
concatText "my-file" [ file1 file2 ]
# Writes contents of files to /nix/store/<store path>
concatScript "my-file" [ file1 file2 ]
```
## `writeShellApplication` {#trivial-builder-writeShellApplication}
This can be used to easily produce a shell script that has some dependencies (`runtimeInputs`). It automatically sets the `PATH` of the script to contain all of the listed inputs, sets some sanity shellopts (`errexit`, `nounset`, `pipefail`), and checks the resulting script with [`shellcheck`](https://github.com/koalaman/shellcheck).
For example, look at the following code:
```nix
writeShellApplication {
name = "show-nixos-org";
runtimeInputs = [ curl w3m ];
text = ''
curl -s 'https://nixos.org' | w3m -dump -T text/html
'';
}
```
Unlike with normal `writeShellScriptBin`, there is no need to manually write out `${curl}/bin/curl`, setting the PATH
was handled by `writeShellApplication`. Moreover, the script is being checked with `shellcheck` for more strict
validation.
## `symlinkJoin` {#trivial-builder-symlinkJoin}
This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, `name`, and `paths`. `name` is the name used in the Nix store path for the created derivation. `paths` is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
Here is an example:
```nix
# adds symlinks of hello and stack to current build and prints "links added"
symlinkJoin { name = "myexample"; paths = [ pkgs.hello pkgs.stack ]; postBuild = "echo links added"; }
```
This creates a derivation with a directory structure like the following:
```
/nix/store/sglsr5g079a5235hy29da3mq3hv8sjmm-myexample
|-- bin
| |-- hello -> /nix/store/qy93dp4a3rqyn2mz63fbxjg228hffwyw-hello-2.10/bin/hello
| `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/bin/stack
`-- share
|-- bash-completion
| `-- completions
| `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/bash-completion/completions/stack
|-- fish
| `-- vendor_completions.d
| `-- stack.fish -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/fish/vendor_completions.d/stack.fish
...
```
## `writeReferencesToFile` {#trivial-builder-writeReferencesToFile}
@@ -219,5 +95,5 @@ produces an output path `/nix/store/<hash>-runtime-references` containing
/nix/store/<hash>-hello-2.10
```
but none of `hello`'s dependencies because those are not referenced directly
but none of `hello`'s dependencies, because those are not referenced directly
by `hi`'s output.

View File

@@ -6,7 +6,7 @@
- Do not use tab characters, i.e. configure your editor to use soft tabs. For instance, use `(setq-default indent-tabs-mode nil)` in Emacs. Everybody has different tab settings so its asking for trouble.
- Use `lowerCamelCase` for variable names, not `UpperCamelCase`. Note, this rule does not apply to package attribute names, which instead follow the rules in [](#sec-package-naming).
- Use `lowerCamelCase` for variable names, not `UpperCamelCase`. Note, this rule does not apply to package attribute names, which instead follow the rules in <xref linkend="sec-package-naming"/>.
- Function calls with attribute set arguments are written as
@@ -181,23 +181,11 @@
rev = "${version}";
```
- Building lists conditionally _should_ be done with `lib.optional(s)` instead of using `if cond then [ ... ] else null` or `if cond then [ ... ] else [ ]`.
```nix
buildInputs = lib.optional stdenv.isDarwin iconv;
```
instead of
```nix
buildInputs = if stdenv.isDarwin then [ iconv ] else null;
```
As an exception, an explicit conditional expression with null can be used when fixing an important bug without triggering a mass rebuild.
If this is done, a follow-up pull request _should_ be created to change the code to `lib.optional(s)`.
- Arguments should be listed in the order they are used, with the exception of `lib`, which always goes first.
- The top-level `lib` must be used in the master and 21.05 branch over its alias `stdenv.lib` as it now causes evaluation errors when aliases are disabled which is the case for ofborg.
`lib` is unrelated to `stdenv`, and so `stdenv.lib` should only be used as a convenience alias when developing locally to avoid having to modify the function inputs just to test something out.
## Package naming {#sec-package-naming}
The key words _must_, _must not_, _required_, _shall_, _shall not_, _should_, _should not_, _recommended_, _may_, and _optional_ in this section are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119). Only _emphasized_ words are to be interpreted in this way.
@@ -214,17 +202,17 @@ Most of the time, these are the same. For instance, the package `e2fsprogs` has
There are a few naming guidelines (a short sketch illustrating them follows this list):
- The `pname` attribute _should_ be identical to the upstream package name.
- The `name` attribute _should_ be identical to the upstream package name.
- The `pname` and the `version` attribute _must not_ contain uppercase letters — e.g., `"mplayer"` instead of `"MPlayer"`.
- The `name` attribute _must not_ contain uppercase letters — e.g., `"mplayer-1.0rc2"` instead of `"MPlayer-1.0rc2"`.
- The `version` attribute _must_ start with a digit — e.g., `"0.3.1rc2"`.
- The version part of the `name` attribute _must_ start with a digit (following a dash) — e.g., `"hello-0.3.1rc2"`.
- If a package is not a release but a commit from a repository, then the `version` attribute _must_ be the date of that (fetched) commit. The date _must_ be in `"unstable-YYYY-MM-DD"` format.
- If a package is not a release but a commit from a repository, then the version part of the name _must_ be the date of that (fetched) commit. The date _must_ be in `"YYYY-MM-DD"` format. Also append `"unstable"` to the name - e.g., `"pkgname-unstable-2014-09-23"`.
- Dashes in the package `pname` _should_ be preserved in new variable names, rather than converted to underscores or camel cased — e.g., `http-parser` instead of `http_parser` or `httpParser`. The hyphenated style is preferred in all three package names.
- Dashes in the package name _should_ be preserved in new variable names, rather than converted to underscores or camel cased — e.g., `http-parser` instead of `http_parser` or `httpParser`. The hyphenated style is preferred in all three package names.
- If there are multiple versions of a package, this _should_ be reflected in the variable names in `all-packages.nix`, e.g. `json-c_0_9` and `json-c_0_11`. If there is an obvious “default” version, make an attribute like `json-c = json-c_0_9;`. See also [](#sec-versioning)
- If there are multiple versions of a package, this _should_ be reflected in the variable names in `all-packages.nix`, e.g. `json-c-0-9` and `json-c-0-11`. If there is an obvious “default” version, make an attribute like `json-c = json-c-0-9;`. See also <xref linkend="sec-versioning" />
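A compact sketch of an expression that follows these rules (the project name and version are used purely as an illustration):

```nix
stdenv.mkDerivation {
  pname = "http-parser"; # lowercase, identical to upstream, dashes preserved
  version = "2.9.4";     # starts with a digit, no uppercase letters
  src = ./.;             # placeholder source
}
```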
## File naming and organisation {#sec-organisation}
@@ -338,10 +326,6 @@ A (typically large) program with a distinct user interface, primarily used inter
- `applications/terminal-emulators` (e.g. `alacritty` or `rxvt` or `termite`)
- **If its a _file manager_:**
- `applications/file-managers` (e.g. `mc` or `ranger` or `pcmanfm`)
- **If its for _video playback / editing_:**
- `applications/video` (e.g. `vlc`)
@@ -453,10 +437,7 @@ In the file `pkgs/top-level/all-packages.nix` you can find fetch helpers, these
}
```
When fetching from GitHub, commits must always be referenced by their full commit hash. This is because GitHub shares commit hashes among all forks and returns `404 Not Found` when a short commit hash is ambiguous. It already happens for some short, 6-character commit hashes in `nixpkgs`.
It is a practical vector for a denial-of-service attack by pushing large amounts of auto generated commits into forks and was already [demonstrated against GitHub Actions Beta](https://blog.teddykatz.com/2019/11/12/github-actions-dos.html).
Find the value to put as `sha256` by running `nix-shell -p nix-prefetch-github --run "nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS nix"`.
Find the value to put as `sha256` by running `nix run -f '<nixpkgs>' nix-prefetch-github -c nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS nix` or `nix-prefetch-url --unpack https://github.com/NixOS/nix/archive/1f795f9f44607cc5bec70d1300150bfefcef2aae.tar.gz`.
## Obtaining source hash {#sec-source-hashes}
@@ -480,23 +461,15 @@ Preferred source hash type is sha256. There are several ways to get it.
4. Extracting hash from local source tarball can be done with `sha256sum`. Use `nix-prefetch-url file:///path/to/tarball` if you want base32 hash.
5. Fake hash: set the hash to one of
5. Fake hash: set fake hash in package expression, perform build and extract correct hash from error Nix prints.
- `""`
- `lib.fakeHash`
- `lib.fakeSha256`
- `lib.fakeSha512`
in the package expression, attempt build and extract correct hash from error messages.
For package updates it is enough to change one symbol to make hash fake. For new packages, you can use `lib.fakeSha256`, `lib.fakeSha512` or any other fake hash.
:::{.warning}
You must use one of these four fake hashes and not some arbitrarily-chosen hash.
See [](#sec-source-hashes-security).
:::
This is last resort method when reconstructing source URL is non-trivial and `nix-prefetch-url -A` isnt applicable (for example, [one of `kodi` dependencies](https://github.com/NixOS/nixpkgs/blob/d2ab091dd308b99e4912b805a5eb088dd536adb9/pkgs/applications/video/kodi/default.nix#L73)). The easiest way then would be replace hash with a fake one and rebuild. Nix build will fail and error message will contain desired hash.
This is last resort method when reconstructing source URL is non-trivial and `nix-prefetch-url -A` isn't applicable (for example, [one of `kodi` dependencies](https://github.com/NixOS/nixpkgs/blob/d2ab091dd308b99e4912b805a5eb088dd536adb9/pkgs/applications/video/kodi/default.nix#L73)). The easiest way then would be replace hash with a fake one and rebuild. Nix build will fail and error message will contain desired hash.
::: warning
This method has security problems. Check below for details.
:::
### Obtaining hashes securely {#sec-source-hashes-security}
@@ -508,7 +481,7 @@ Let's say Man-in-the-Middle (MITM) sits close to your network. Then instead of f
- `https://` URLs are secure in methods 1, 2, 3;
- `https://` URLs are secure in method 5 *only if* you use one of the listed fake hashes. If you use any other hash, `fetchurl` will pass `--insecure` to `curl` and may then degrade to HTTP in case of TLS certificate expiration.
- `https://` URLs are not secure in method 5. When obtaining hashes with fake hash method, TLS checks are disabled. So refetch source hash from several different networks to exclude MITM scenario. Alternatively, use fake hash method to make Nix error, but instead of extracting hash from error, extract `https://` URL and prefetch it with method 1.
## Patches {#sec-patches}
@@ -526,8 +499,6 @@ patches = [
Otherwise, you can add a `.patch` file to the `nixpkgs` repository. In the interest of keeping our maintenance burden to a minimum, only patches that are unique to `nixpkgs` should be added in this way.
If a patch is available online but does not cleanly apply, it can be modified in some fixed ways by using additional optional arguments for `fetchpatch`. Check [](#fetchpatch) for details.
```nix
patches = [ ./0001-changes.patch ];
```
@@ -552,41 +523,16 @@ If you do need to do create this sort of patch file, one way to do so is with gi
4. Use git to create a diff, and pipe the output to a patch file:
```ShellSession
$ git diff -a > nixpkgs/pkgs/the/package/0001-changes.patch
$ git diff > nixpkgs/pkgs/the/package/0001-changes.patch
```
## Package tests {#sec-package-tests}
Tests are important to ensure quality and make reviews and automatic updates easy.
The following types of tests exists:
Nix package tests are a lightweight alternative to [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests). They can be used to create simple integration tests for packages while the module tests are used to test services or programs with a graphical user interface on a NixOS VM. Unittests that are included in the source code of a package should be executed in the `checkPhase`.
* [NixOS **module tests**](https://nixos.org/manual/nixos/stable/#sec-nixos-tests), which spawn one or more NixOS VMs. They exercise both NixOS modules and the packaged programs used within them. For example, a NixOS module test can start a web server VM running the `nginx` module, and a client VM running `curl` or a graphical `firefox`, and test that they can talk to each other and display the correct content.
* Nix **package tests** are a lightweight alternative to NixOS module tests. They should be used to create simple integration tests for packages, but cannot test NixOS services, and some programs with graphical user interfaces may also be difficult to test with them.
* The **`checkPhase` of a package**, which should execute the unit tests that are included in the source code of a package.
Here in the nixpkgs manual we describe mostly _package tests_; for _module tests_ head over to the corresponding [section in the NixOS manual](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
### Writing inline package tests {#ssec-inline-package-tests-writing}
For very simple tests, they can be written inline:
```nix
{ …, yq-go }:
buildGoModule rec {
passthru.tests = {
simple = runCommand "${pname}-test" {} ''
echo "test: 1" | ${yq-go}/bin/yq eval -j > $out
[ "$(cat $out | tr -d $'\n ')" = '{"test":1}' ]
'';
};
}
```
### Writing larger package tests {#ssec-package-tests-writing}
### Writing package tests {#ssec-package-tests-writing}
This is an example using the `phoronix-test-suite` package with the current best practices.
@@ -615,7 +561,7 @@ let
inherit (phoronix-test-suite) pname version;
in
runCommand "${pname}-tests" { meta.timeout = 60; }
runCommand "${pname}-tests" { meta.timeout = 3; }
''
# automatic initial setup to prevent interactive questions
${phoronix-test-suite}/bin/phoronix-test-suite enterprise-setup >/dev/null
@@ -649,23 +595,3 @@ Here are examples of package tests:
- [Spacy annotation test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/spacy/annotation-test/default.nix)
- [Libtorch test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/science/math/libtorch/test/default.nix)
- [Multiple tests for nanopb](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/nanopb/default.nix)
### Linking NixOS module tests to a package {#ssec-nixos-tests-linking}
Like [package tests](#ssec-package-tests-writing) as shown above, [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests) can also be linked to a package, so that the tests can be easily run when changing the related package.
For example, assuming we're packaging `nginx`, we can link its module test via `passthru.tests`:
```nix
{ stdenv, lib, nixosTests }:
stdenv.mkDerivation {
...
passthru.tests = {
nginx = nixosTests.nginx;
};
...
}
```
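The linked test can then be built (and thereby run) from a Nixpkgs checkout just like a package test; a sketch, assuming the package attribute is `nginx`:
```ShellSession
$ nix-build -A nginx.tests.nginx
```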

View File

@@ -1,13 +1,13 @@
# Contributing to this documentation {#chap-contributing}
The sources of the Nixpkgs manual are in the [doc](https://github.com/NixOS/nixpkgs/tree/master/doc) subdirectory of the Nixpkgs repository. The manual is still partially written in DocBook but it is progressively being converted to [Markdown](#sec-contributing-markup).
The DocBook sources of the Nixpkgs manual are in the [doc](https://github.com/NixOS/nixpkgs/tree/master/doc) subdirectory of the Nixpkgs repository.
You can quickly check your edits with `make`:
```ShellSession
$ cd /path/to/nixpkgs/doc
$ nix-shell
[nix-shell]$ make
[nix-shell]$ make $makeFlags
```
If you experience problems, run `make debug` to help understand the docbook errors.
@@ -22,101 +22,3 @@ $ nix-shell
```
If the build succeeds, the manual will be in `./result/share/doc/nixpkgs/manual.html`.
## Syntax {#sec-contributing-markup}
As per [RFC 0072](https://github.com/NixOS/rfcs/pull/72), all new documentation content should be written in [CommonMark](https://commonmark.org/) Markdown dialect.
Additional syntax extensions are available, though not all extensions can be used in NixOS option documentation. The following extensions are currently used:
- []{#ssec-contributing-markup-anchors}
Explicitly defined **anchors** on headings, to allow linking to sections. These should always be used, to ensure the anchors can be linked even when the heading text changes, and to prevent conflicts between [automatically assigned identifiers](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/auto_identifiers.md).
It uses the widely compatible [header attributes](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/attributes.md) syntax:
```markdown
## Syntax {#sec-contributing-markup}
```
- []{#ssec-contributing-markup-anchors-inline}
**Inline anchors**, which allow linking to an arbitrary place in the text (e.g. individual list items, sentences…).
They are defined using a hybrid of the link syntax with the attributes syntax known from headings, called [bracketed spans](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/bracketed_spans.md):
```markdown
- []{#ssec-gnome-hooks-glib} `glib` setup hook will populate `GSETTINGS_SCHEMAS_PATH` and then `wrapGAppsHook` will prepend it to `XDG_DATA_DIRS`.
```
- []{#ssec-contributing-markup-automatic-links}
If you **omit a link text** for a link pointing to a section, the text will be substituted automatically. For example, `[](#chap-contributing)` will result in [](#chap-contributing).
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).
- []{#ssec-contributing-markup-inline-roles}
If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``, which will turn into {manpage}`nix.conf(5)`. The references will turn into links when a mapping exists in {file}`doc/build-aux/pandoc-filters/link-unix-man-references.lua`.
A few markups for other kinds of literals are also available:
- `` {command}`rm -rfi` `` turns into {command}`rm -rfi`
- `` {env}`XDG_DATA_DIRS` `` turns into {env}`XDG_DATA_DIRS`
- `` {file}`/etc/passwd` `` turns into {file}`/etc/passwd`
- `` {option}`networking.useDHCP` `` turns into {option}`networking.useDHCP`
- `` {var}`/etc/passwd` `` turns into {var}`/etc/passwd`
These literal kinds are used mostly in NixOS option documentation.
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point), though the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.
::: {.note}
Inline roles are available for option documentation.
:::
- []{#ssec-contributing-markup-admonitions}
**Admonitions**, set off from the text to bring attention to something.
It uses pandocs [fenced `div`s syntax](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/fenced_divs.md):
```markdown
::: {.warning}
This is a warning
:::
```
which renders as
> ::: {.warning}
> This is a warning.
> :::
The following are supported:
- [`caution`](https://tdg.docbook.org/tdg/5.0/caution.html)
- [`important`](https://tdg.docbook.org/tdg/5.0/important.html)
- [`note`](https://tdg.docbook.org/tdg/5.0/note.html)
- [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
- [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)
::: {.note}
Admonitions are available for option documentation.
:::
- []{#ssec-contributing-markup-definition-lists}
[**Definition lists**](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md), for defining a group of terms:
```markdown
pear
: green or yellow bulbous fruit
watermelon
: green fruit with red flesh
```
which renders as
> pear
> : green or yellow bulbous fruit
>
> watermelon
> : green fruit with red flesh
For contributing to the legacy parts, please see [DocBook: The Definitive Guide](https://tdg.docbook.org/) or the [DocBook rocks! primer](https://web.archive.org/web/20200816233747/https://docbook.rocks/).

View File

@@ -9,7 +9,7 @@ To add a package to Nixpkgs:
$ cd nixpkgs
```
2. Find a good place in the Nixpkgs tree to add the Nix expression for your package. For instance, a library package typically goes into `pkgs/development/libraries/pkgname`, while a web browser goes into `pkgs/applications/networking/browsers/pkgname`. See [](#sec-organisation) for some hints on the tree organisation. Create a directory for your package, e.g.
2. Find a good place in the Nixpkgs tree to add the Nix expression for your package. For instance, a library package typically goes into `pkgs/development/libraries/pkgname`, while a web browser goes into `pkgs/applications/networking/browsers/pkgname`. See <xref linkend="sec-organisation" /> for some hints on the tree organisation. Create a directory for your package, e.g.
```ShellSession
$ mkdir pkgs/development/libraries/libfoo

View File

@@ -1,6 +1,6 @@
# Reviewing contributions {#chap-reviewing-contributions}
::: {.warning}
::: warning
The following section is a draft, and the policy for reviewing is still being discussed in issues such as [#11166](https://github.com/NixOS/nixpkgs/issues/11166) and [#20836](https://github.com/NixOS/nixpkgs/issues/20836).
:::
@@ -35,18 +35,15 @@ Reviewing process:
- Building the package locally.
- pull requests are often targeted to the master or staging branch, and building the pull request locally when it is submitted can trigger many source builds.
- It is possible to rebase the changes on nixos-unstable or nixpkgs-unstable for easier review by running the following commands from a nixpkgs clone.
```ShellSession
$ git fetch origin nixos-unstable
$ git fetch origin pull/PRNUMBER/head
$ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD
```
- The first command fetches the nixos-unstable branch.
- The second command fetches the pull request changes, `PRNUMBER` is the number at the end of the pull request title and `BASEBRANCH` the base branch of the pull request.
- The third command rebases the pull request changes to the nixos-unstable branch.
- The [nixpkgs-review](https://github.com/Mic92/nixpkgs-review) tool can be used to review the content of a pull request in a single command. `PRNUMBER` should be replaced by the number at the end of the pull request title. You can also provide the full GitHub pull request URL.
```ShellSession
$ nix-shell -p nixpkgs-review --run "nixpkgs-review pr PRNUMBER"
```
@@ -103,8 +100,7 @@ Sample template for a new package review is provided below.
- [ ] `meta.maintainers` is set
- [ ] build time only dependencies are declared in `nativeBuildInputs`
- [ ] source is fetched using the appropriate function
- [ ] the list of `phases` is not overridden
- [ ] when a phase (like `installPhase`) is overridden it starts with `runHook preInstall` and ends with `runHook postInstall`.
- [ ] phases are respected
- [ ] patches that are remotely available are fetched with `fetchpatch`
##### Possible improvements
@@ -122,10 +118,10 @@ Reviewing process:
- [CODEOWNERS](https://help.github.com/articles/about-codeowners/) will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
- Ensure that the module tests, if any, are succeeding.
- Ensure that the introduced options are correct.
- Type should be appropriate (string related types differs in their merging capabilities, `loaOf` and `string` types are deprecated).
- Type should be appropriate (string related types differs in their merging capabilities, `optionSet` and `string` types are deprecated).
- Description, default and example should be provided.
- Ensure that option changes are backward compatible.
- `mkRenamedOptionModuleWith` provides a way to make option changes backward compatible.
- `mkRenamedOptionModule` and `mkAliasOptionModule` functions provide way to make option changes backward compatible.
- Ensure that removed options are declared with `mkRemovedOptionModule`
- Ensure that changes that are not backward compatible are mentioned in release notes.
- Ensure that documentations affected by the change is updated.
@@ -157,7 +153,7 @@ Reviewing process:
- Ensure that the module tests, if any, are succeeding.
- Ensure that the introduced options are correct.
- Type should be appropriate (string related types differs in their merging capabilities, `loaOf` and `string` types are deprecated).
- Type should be appropriate (string related types differs in their merging capabilities, `optionSet` and `string` types are deprecated).
- Description, default and example should be provided.
- Ensure that module `meta` field is present
- Maintainers should be declared in `meta.maintainers`.
@@ -185,111 +181,6 @@ Sample template for a new module review is provided below.
##### Comments
```
## Individual maintainer list {#reviewing-contributions-indvidual-maintainer-list}
When adding users to `maintainers/maintainer-list.nix`, the following
checks should be performed:
- If the user has specified a GPG key, verify that the commit is
signed by their key.
First, validate that the commit adding the maintainer is signed by
the key the maintainer listed. Check out the pull request and
compare its signing key with the listed key in the commit.
If the commit is not signed or it is signed by a different user, ask
them to either recommit using that key or to remove their key
information.
Given a maintainer entry like this:
``` nix
{
example = {
email = "user@example.com";
name = "Example User";
keys = [{
fingerprint = "0000 0000 2A70 6423 0AED 3C11 F04F 7A19 AAA6 3AFE";
}];
};
}
```
First receive their key from a keyserver:
```ShellSession
$ gpg --recv-keys 0xF04F7A19AAA63AFE
gpg: key 0xF04F7A19AAA63AFE: public key "Example <user@example.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
```
Then check the commit is signed by that key:
```ShellSession
$ git log --show-signature
commit b87862a4f7d32319b1de428adb6cdbdd3a960153
gpg: Signature made Wed Mar 12 13:32:24 2003 +0000
gpg: using RSA key 000000002A7064230AED3C11F04F7A19AAA63AFE
gpg: Good signature from "Example User <user@example.com>"
Author: Example User <user@example.com>
Date: Wed Mar 12 13:32:24 2003 +0000

    maintainers: adding example
```
and validate that there is a `Good signature` and the printed key
matches the user's submitted key.
Note: GitHub's "Verified" label does not display the user's full key
fingerprint, and should not be used for validating the key matches.
- If the user has specified a `github` account name, ensure they have
also specified a `githubId` and verify the two match.
Maintainer entries that include a `github` field must also include
their `githubId`. People can and do change their GitHub name
frequently, and the ID is used as the official and stable identity
of the maintainer.
Given a maintainer entry like this:
``` nix
{
example = {
email = "user@example.com";
name = "Example User";
github = "ghost";
githubId = 10137;
};
}
```
First, make sure that the listed GitHub handle matches the author of
the commit.
Then, visit the URL `https://api.github.com/users/ghost` and
validate that the `id` field matches the provided `githubId`.
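For instance, this can be checked quickly from a shell; a sketch, assuming `curl` and `jq` are available:
```ShellSession
$ curl -s https://api.github.com/users/ghost | jq .id
10137
```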
## Maintainer teams {#reviewing-contributions-maintainer-teams}
Feel free to create a new maintainer team in `maintainers/team-list.nix`
when a group is collectively responsible for a collection of packages.
Use taste and personal judgement when deciding if a team is warranted.
Teams are allowed to define their own rules about membership.
For example, some teams will represent a business or other group which
wants to carefully track its members. Other teams may be very open about
who can join, and allow anybody to participate.
When reviewing changes to a team, read the team's scope and the context
around the member list for indications about the team's membership
policy.
In any case, request reviews from the existing team members. If the team
lists no specific membership policy, feel free to merge changes to the
team after giving the existing members a few days to respond.
*Important:* If a team says it is a closed group, do not merge additions
to the team without an approval by at least one existing member.
## Other submissions {#reviewing-contributions-other-submissions}
Other types of submissions require different reviewing steps.

View File

@@ -43,13 +43,13 @@
- nixpkgs:
- update pkg
- `nix-env -iA pkg-attribute-name -f <path to your local nixpkgs folder>`
- `nix-env -i pkg-name -f <path to your local nixpkgs folder>`
- add pkg
- Make sure it's in `pkgs/top-level/all-packages.nix`
- `nix-env -iA pkg-attribute-name -f <path to your local nixpkgs folder>`
- `nix-env -i pkg-name -f <path to your local nixpkgs folder>`
- _If you don't want to install pkg in your profile_.
- `nix-build -A pkg-attribute-name <path to your local nixpkgs folder>` and check results in the folder `result`. It will appear in the same directory where you did `nix-build`.
- If you installed your package with `nix-env`, you can run `nix-env -e pkg-name` where `pkg-name` is as reported by `nix-env -q` to uninstall it from your system.
- `nix-build -A pkg-attribute-name <path to your local nixpkgs folder>/default.nix` and check results in the folder `result`. It will appear in the same directory where you did `nix-build`.
- If you did `nix-env -i pkg-name` you can do `nix-env -e pkg-name` to uninstall it from your system.
- NixOS and its modules:
- You can add a new module to your NixOS configuration file (usually it's `/etc/nixos/configuration.nix`) and then run `sudo nixos-rebuild test -I nixpkgs=<path to your local nixpkgs folder> --fast`.
@@ -62,7 +62,7 @@
- Push your changes to your fork of nixpkgs.
- Create the pull request
- Follow [the contribution guidelines](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#submitting-changes).
- Follow [the contribution guidelines](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md#submitting-changes).
## Submitting security fixes {#submitting-changes-submitting-security-fixes}
@@ -71,7 +71,6 @@ Security fixes are submitted in the same way as other changes and thus the same
- If a new version fixing the vulnerability has been released, update the package;
- If the security fix comes in the form of a patch and a CVE is available, then add the patch to the Nixpkgs tree, and apply it to the package.
The name of the patch should be the CVE identifier, so e.g. `CVE-2019-13636.patch`; If a patch is fetched the name needs to be set as well, e.g.:
```nix
(fetchpatch {
name = "CVE-2019-11068.patch";
@@ -90,18 +89,17 @@ There is currently no policy when to remove a package.
Before removing a package, one should try to find a new maintainer or fix smaller issues first.
### Steps to remove a package from Nixpkgs {#steps-to-remove-a-package-from-nixpkgs}
### Steps to remove a package from Nixpkgs
We use jbidwatcher as an example for a discontinued project here.
1. Have Nixpkgs checked out locally and up to date.
1. Create a new branch for your change, e.g. `git checkout -b jbidwatcher`
1. Remove the actual package including its directory, e.g. `git rm -rf pkgs/applications/misc/jbidwatcher`
1. Remove the actual package including its directory, e.g. `rm -rf pkgs/applications/misc/jbidwatcher`
1. Remove the package from the list of all packages (`pkgs/top-level/all-packages.nix`).
1. Add an alias for the package name in `pkgs/top-level/aliases.nix` (There is also `pkgs/applications/editors/vim/plugins/aliases.nix`. Package sets typically do not have aliases, so we can't add them there.)
1. Add an alias for the package name in `pkgs/top-level/aliases.nix` (There is also `pkgs/misc/vim-plugins/aliases.nix`. Package sets typically do not have aliases, so we can't add them there.)
For example in this case:
```
jbidwatcher = throw "jbidwatcher was discontinued in march 2021"; # added 2021-03-15
```
@@ -167,30 +165,24 @@ Packages with automated tests are much more likely to be merged in a timely fash
### Tested compilation of all pkgs that depend on this change using `nixpkgs-review` {#submitting-changes-tested-compilation}
If you are updating a packages version, you can use `nixpkgs-review` to make sure all packages that depend on the updated package still compile correctly. The `nixpkgs-review` utility can look for and build all dependencies either based on uncommitted changes with the `wip` option or specifying a GitHub pull request number.
If you are updating a packages version, you can use nixpkgs-review to make sure all packages that depend on the updated package still compile correctly. The `nixpkgs-review` utility can look for and build all dependencies either based on uncommited changes with the `wip` option or specifying a github pull request number.
Review changes from pull request number 12345:
review changes from pull request number 12345:
```ShellSession
nix-shell -p nixpkgs-review --run "nixpkgs-review pr 12345"
nix run nixpkgs.nixpkgs-review -c nixpkgs-review pr 12345
```
Alternatively, with flakes (and analogously for the other commands below):
review uncommitted changes:
```ShellSession
nix run nixpkgs#nixpkgs-review -- pr 12345
nix run nixpkgs.nixpkgs-review -c nixpkgs-review wip
```
Review uncommitted changes:
review changes from last commit:
```ShellSession
nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
```
Review changes from last commit:
```ShellSession
nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"
nix run nixpkgs.nixpkgs-review -c nixpkgs-review rev HEAD
```
### Tested execution of all binary files (usually in `./result/bin/`) {#submitting-changes-tested-execution}
@@ -199,7 +191,7 @@ Its important to test any executables generated by a build when you change or
### Meets Nixpkgs contribution standards {#submitting-changes-contribution-standards}
The last checkbox is fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md). The contributing document has detailed information on standards the Nix community has for commit messages, reviews, licensing of contributions you make to the project, etc. Everyone should read and understand the standards the community has for contributing before submitting a pull request.
The last checkbox is fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md). The contributing document has detailed information on standards the Nix community has for commit messages, reviews, licensing of contributions you make to the project, etc. Everyone should read and understand the standards the community has for contributing before submitting a pull request.
## Hotfixing pull requests {#submitting-changes-hotfixing-pull-requests}
@@ -233,7 +225,7 @@ digraph {
}
```
[This GitHub Action](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/periodic-merge-6h.yml) brings changes from `master` to `staging-next` and from `staging-next` to `staging` every 6 hours; these are the blue arrows in the diagram above. The purple arrows in the diagram above are done manually and much less frequently. You can get an idea of how often these merges occur by looking at the git history.
[This GitHub Action](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/merge-staging.yml) brings changes from `master` to `staging-next` and from `staging-next` to `staging` every 6 hours.
### Master branch {#submitting-changes-master-branch}
@@ -242,31 +234,19 @@ The `master` branch is the main development branch. It should only see non-break
### Staging branch {#submitting-changes-staging-branch}
The `staging` branch is a development branch where mass-rebuilds go. Mass rebuilds are commits that cause rebuilds for many packages, like more than 500 (or perhaps, if it's 'light' packages, 1000). It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
The `staging` branch is a development branch where mass-rebuilds go. It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
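Changes that cause such a mass rebuild should therefore be developed on top of `staging`; a minimal sketch, assuming the `upstream` remote points at NixOS/nixpkgs:
```ShellSession
$ git fetch upstream
$ git checkout -b my-mass-rebuild upstream/staging
```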
### Staging-next branch {#submitting-changes-staging-next-branch}
The `staging-next` branch is for stabilizing mass-rebuilds submitted to the `staging` branch prior to merging them into `master`. Mass-rebuilds must go via the `staging` branch. It must only see non-breaking commits that are fixing issues blocking it from being merged into the `master` branch.
The `staging-next` branch is for stabilizing mass-rebuilds submitted to the `staging` branch prior to merging them into `master`. Mass-rebuilds should go via the `staging` branch. It should only see non-breaking commits that are fixing issues blocking it from being merged into the `master ` branch.
If the branch is already in a broken state, please refrain from adding extra new breakages. Stabilize it for a few days and then merge into master.
### Stable release branches {#submitting-changes-stable-release-branches}
The same staging workflow applies to stable release branches, but the main branch is called `release-*` instead of `master`.
For cherry-picking a commit to a stable release branch (“backporting”), use `git cherry-pick -x <original commit>` so that the original commit id is included in the commit.
Example branch names: `release-21.11`, `staging-21.11`, `staging-next-21.11`.
Most changes added to the stable release branches are cherry-picked (“backported”) from the `master` and staging branches.
#### Automatically backporting a Pull Request {#submitting-changes-stable-release-branches-automatic-backports}
Assign label `backport <branch>` (e.g. `backport release-21.11`) to the PR and a backport PR is automatically created after the PR is merged.
#### Manually backporting changes {#submitting-changes-stable-release-branches-manual-backports}
Cherry-pick changes via `git cherry-pick -x <original commit>` so that the original commit id is included in the commit message.
Add a reason for the backport when it is not obvious from the original commit message. You can do this by cherry picking with `git cherry-pick -xe <original commit>`, which allows editing the commit message. This is not needed for minor version updates that include security and bug fixes but don't add new features or when the commit fixes an otherwise broken package.
Add a reason for the backport by using `git cherry-pick -xe <original commit>` instead when it is not obvious from the original commit message. It is not needed when it's a minor version update that includes security and bug fixes but don't add new features or when the commit fixes an otherwise broken package.
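Putting these steps together, a manual backport could look like this (a sketch; the branch name and fork remote are placeholders):
```ShellSession
$ git fetch origin release-21.11
$ git checkout -b backport-my-fix origin/release-21.11
$ git cherry-pick -x <original commit>
$ git push <your fork> backport-my-fix
```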
Here is an example of a cherry-picked commit message with good reason description:
@@ -285,14 +265,3 @@ Other examples of reasons are:
- Previously the build would fail due to, e.g., `getaddrinfo` not being defined
- The previous download links were all broken
- Crash when starting on some X11 systems
#### Acceptable backport criteria
The stable branch does have some changes which cannot be backported, most notably breaking changes. The goal is to keep stable users uninterrupted when they update packages.
However, many changes are able to be backported, including:
- New Packages / Modules
- Security / Patch updates
- Version updates which include new functionality (but no breaking changes)
- Services which require a client to be up-to-date regardless. (E.g. `spotify`, `steam`, or `discord`)
- Security critical applications (E.g. `firefox`)

View File

@@ -17,6 +17,10 @@ in pkgs.stdenv.mkDerivation {
src = lib.cleanSource ./.;
makeFlags = [
"PANDOC_LUA_FILTERS_DIR=${pkgs.pandoc-lua-filters}/share/pandoc/filters"
];
postPatch = ''
ln -s ${doc-support} ./doc-support/result
'';
@@ -33,7 +37,4 @@ in pkgs.stdenv.mkDerivation {
echo "doc manual $dest manual.html" >> $out/nix-support/hydra-build-products
echo "doc manual $dest nixpkgs-manual.epub" >> $out/nix-support/hydra-build-products
'';
# Environment variables
PANDOC_LUA_FILTERS_DIR = "${pkgs.pandoc-lua-filters}/share/pandoc/filters";
}

View File

@@ -1,8 +1,5 @@
{ pkgs ? (import ../.. {}), nixpkgs ? { }}:
let
inherit (pkgs) lib;
inherit (lib) hasPrefix removePrefix;
locationsXml = import ./lib-function-locations.nix { inherit pkgs nixpkgs; };
functionDocs = import ./lib-function-docs.nix { inherit locationsXml pkgs; };
version = pkgs.lib.version;
@@ -26,26 +23,6 @@ let
<xsl:import href="${./parameters.xml}"/>
</xsl:stylesheet>
'';
# NB: This file describes the Nixpkgs manual, which happens to use module
# docs infra originally developed for NixOS.
optionsDoc = pkgs.nixosOptionsDoc {
inherit (pkgs.lib.evalModules { modules = [ ../../pkgs/top-level/config.nix ]; }) options;
documentType = "none";
transformOptions = opt:
opt // {
declarations =
map
(decl:
if hasPrefix (toString ../..) (toString decl)
then
let subpath = removePrefix "/" (removePrefix (toString ../..) (toString decl));
in { url = "https://github.com/NixOS/nixpkgs/blob/master/${subpath}"; name = subpath; }
else decl)
opt.declarations;
};
};
in pkgs.runCommand "doc-support" {}
''
mkdir result
@@ -53,7 +30,6 @@ in pkgs.runCommand "doc-support" {}
cd result
ln -s ${locationsXml} ./function-locations.xml
ln -s ${functionDocs} ./function-docs
ln -s ${optionsDoc.optionsDocBook} ./config-options.docbook.xml
ln -s ${pkgs.docbook5}/xml/rng/docbook/docbook.rng ./docbook.rng
ln -s ${pkgs.docbook_xsl_ns}/xml/xsl ./xsl

View File

@@ -22,6 +22,5 @@ with pkgs; stdenv.mkDerivation {
docgen lists 'List manipulation functions'
docgen debug 'Debugging functions'
docgen options 'NixOS / nixpkgs option handling'
docgen sources 'Source filtering functions'
'';
}

View File

@@ -11,5 +11,4 @@
<xsl:param name="toc.section.depth" select="0" />
<xsl:param name="admon.style" select="''" />
<xsl:param name="callout.graphics.extension" select="'.svg'" />
<xsl:param name="generate.consistent.ids" select="1" />
</xsl:stylesheet>

View File

@@ -7,8 +7,8 @@
The nixpkgs repository has several utility functions to manipulate Nix expressions.
</para>
<xi:include href="functions/library.xml" />
<xi:include href="functions/generators.section.xml" />
<xi:include href="functions/debug.section.xml" />
<xi:include href="functions/prefer-remote-fetch.section.xml" />
<xi:include href="functions/nix-gitignore.section.xml" />
<xi:include href="functions/generators.xml" />
<xi:include href="functions/debug.xml" />
<xi:include href="functions/prefer-remote-fetch.xml" />
<xi:include href="functions/nix-gitignore.xml" />
</chapter>

View File

@@ -1,5 +0,0 @@
# Debugging Nix Expressions {#sec-debug}
Nix is a unityped, dynamic language; this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately is evaluated might surprise you. Therefore it is important to be able to debug nix expressions.
In the `lib/debug.nix` file you will find a number of functions that help (pretty-)printing values while evaluation is running. You can even specify how deep these values should be printed recursively, and transform them on the fly. Please consult the docstrings in `lib/debug.nix` for usage information.
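As a minimal sketch (assuming `<nixpkgs>` is on your `NIX_PATH`), `lib.debug.traceVal` prints a value to stderr during evaluation and returns it unchanged:
```nix
let
  lib = import <nixpkgs/lib>;
in
# prints "trace: 3" during evaluation and still evaluates to 3
lib.debug.traceVal (builtins.length [ 1 2 3 ])
```
Evaluating such a file with `nix-instantiate --eval` shows the trace output alongside the result.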

14
doc/functions/debug.xml Normal file
View File

@@ -0,0 +1,14 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-debug">
<title>Debugging Nix Expressions</title>
<para>
Nix is a unityped, dynamic language; this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately is evaluated might surprise you. Therefore it is important to be able to debug nix expressions.
</para>
<para>
In the <literal>lib/debug.nix</literal> file you will find a number of functions that help (pretty-)printing values while evaluation is running. You can even specify how deep these values should be printed recursively, and transform them on the fly. Please consult the docstrings in <literal>lib/debug.nix</literal> for usage information.
</para>
</section>

View File

@@ -1,56 +0,0 @@
# Generators {#sec-generators}
Generators are functions that create file formats from nix data structures, e.g. for configuration files. There are generators available for: `INI`, `JSON` and `YAML`
All generators follow a similar call interface: `generatorName configFunctions data`, where `configFunctions` is an attrset of user-defined functions that format nested parts of the content. They each have common defaults, so often they do not need to be set manually. An example is `mkSectionName ? (name: libStr.escape [ "[" "]" ] name)` from the `INI` generator. It receives the name of a section and sanitizes it. The default `mkSectionName` escapes `[` and `]` with a backslash.
Generators can be fine-tuned to produce exactly the file format required by your application/service. One example is an INI-file format which uses `: ` as separator, the strings `"yes"`/`"no"` as boolean values and requires all string values to be quoted:
```nix
with lib;
let
customToINI = generators.toINI {
# specifies how to format a key/value pair
mkKeyValue = generators.mkKeyValueDefault {
# specifies the generated string for a subset of nix values
mkValueString = v:
if v == true then ''"yes"''
else if v == false then ''"no"''
else if isString v then ''"${v}"''
# and delegates all other values to the default generator
else generators.mkValueStringDefault {} v;
} ":";
};
# the INI file can now be given as plain old nix values
in customToINI {
main = {
pushinfo = true;
autopush = false;
host = "localhost";
port = 42;
};
mergetool = {
merge = "diff3";
};
}
```
This will produce the following INI file as nix string:
```INI
[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"
[mergetool]
merge:"diff3"
```
::: {.note}
Nix store paths can be converted to strings by enclosing a derivation attribute like so: `"${drv}"`.
:::
Detailed documentation for each generator can be found in `lib/generators.nix`.

View File

@@ -0,0 +1,74 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-generators">
<title>Generators</title>
<para>
Generators are functions that create file formats from nix data structures, e.g. for configuration files. There are generators available for: <literal>INI</literal>, <literal>JSON</literal> and <literal>YAML</literal>
</para>
<para>
All generators follow a similar call interface: <code>generatorName configFunctions data</code>, where <literal>configFunctions</literal> is an attrset of user-defined functions that format nested parts of the content. They each have common defaults, so often they do not need to be set manually. An example is <code>mkSectionName ? (name: libStr.escape [ "[" "]" ] name)</code> from the <literal>INI</literal> generator. It receives the name of a section and sanitizes it. The default <literal>mkSectionName</literal> escapes <literal>[</literal> and <literal>]</literal> with a backslash.
</para>
<para>
Generators can be fine-tuned to produce exactly the file format required by your application/service. One example is an INI-file format which uses <literal>: </literal> as separator, the strings <literal>"yes"</literal>/<literal>"no"</literal> as boolean values and requires all string values to be quoted:
</para>
<programlisting>
with lib;
let
customToINI = generators.toINI {
# specifies how to format a key/value pair
mkKeyValue = generators.mkKeyValueDefault {
# specifies the generated string for a subset of nix values
mkValueString = v:
if v == true then ''"yes"''
else if v == false then ''"no"''
else if isString v then ''"${v}"''
# and delegates all other values to the default generator
else generators.mkValueStringDefault {} v;
} ":";
};
# the INI file can now be given as plain old nix values
in customToINI {
main = {
pushinfo = true;
autopush = false;
host = "localhost";
port = 42;
};
mergetool = {
merge = "diff3";
};
}
</programlisting>
<para>
This will produce the following INI file as nix string:
</para>
<programlisting>
[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"
[mergetool]
merge:"diff3"
</programlisting>
<note>
<para>
Nix store paths can be converted to strings by enclosing a derivation attribute like so: <code>"${drv}"</code>.
</para>
</note>
<para>
Detailed documentation for each generator can be found in <literal>lib/generators.nix</literal>.
</para>
</section>

View File

@@ -25,6 +25,4 @@
<xi:include href="./library/generated/debug.xml" />
<xi:include href="./library/generated/options.xml" />
<xi:include href="./library/generated/sources.xml" />
</section>

View File

@@ -772,7 +772,7 @@ nameValuePair "some" 6
<title>Modifying each value of an attribute set</title>
<programlisting><![CDATA[
lib.attrsets.mapAttrs
(name: value: name + "-" + value)
(name: value: name + "-" value)
{ x = "foo"; y = "bar"; }
=> { x = "x-foo"; y = "y-bar"; }
]]></programlisting>
@@ -1474,7 +1474,7 @@ lib.attrsets.zipAttrsWith
<section xml:id="function-library-lib.attrsets.zipAttrs">
<title><function>lib.attrsets.zipAttrs</function></title>
<subtitle><literal>zipAttrs :: [ AttrSet ] -> AttrSet</literal>
<subtitle><literal>zipAttrsWith :: [ AttrSet ] -> AttrSet</literal>
</subtitle>
<xi:include href="./locations.xml" xpointer="lib.attrsets.zipAttrs" />

View File

@@ -1,49 +0,0 @@
# pkgs.nix-gitignore {#sec-pkgs-nix-gitignore}
`pkgs.nix-gitignore` is a function that acts similarly to `builtins.filterSource` but also allows filtering with the help of the gitignore format.
## Usage {#sec-pkgs-nix-gitignore-usage}
`pkgs.nix-gitignore` exports a number of functions, but you'll most likely need either `gitignoreSource` or `gitignoreSourcePure`. As their first argument, they both accept either 1. a file with gitignore lines or 2. a string with gitignore lines, or 3. a list of either of the two. They will be concatenated into a single big string.
```nix
{ pkgs ? import <nixpkgs> {} }:
nix-gitignore.gitignoreSource [] ./source
# Simplest version
nix-gitignore.gitignoreSource "supplemental-ignores\n" ./source
# This one reads the ./source/.gitignore and concats the auxiliary ignores
nix-gitignore.gitignoreSourcePure "ignore-this\nignore-that\n" ./source
# Use this string as gitignore, don't read ./source/.gitignore.
nix-gitignore.gitignoreSourcePure ["ignore-this\nignore-that\n", ~/.gitignore] ./source
# It also accepts a list (of strings and paths) that will be concatenated
# once the paths are turned to strings via readFile.
```
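In practice the filtered tree is typically used as the `src` of a derivation; a minimal sketch (the package name and install commands are placeholders):
```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  # filter the local checkout according to ./.gitignore before it is
  # copied into the store
  src = pkgs.nix-gitignore.gitignoreSource [] ./.;
  installPhase = ''
    mkdir -p $out
    cp -r . $out/
  '';
}
```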
These functions are derived from the `Filter` functions by setting the first filter argument to `(_: _: true)`:
```nix
gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true);
gitignoreSource = gitignoreFilterSource (_: _: true);
```
Those filter functions accept the same arguments the `builtins.filterSource` function would pass to its filters, thus `fn: gitignoreFilterSourcePure fn ""` should be extensionally equivalent to `filterSource`. The file is blacklisted if it's blacklisted by either your filter or the gitignoreFilter.
If you want to make your own filter from scratch, you may use
```nix
gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
```
## gitignore files in subdirectories {#sec-pkgs-nix-gitignore-usage-recursive}
If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function:
```nix
gitignoreFilterRecursiveSource = filter: patterns: root:
# OR
gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true);
```

View File

@@ -0,0 +1,70 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-nix-gitignore">
<title>pkgs.nix-gitignore</title>
<para>
<function>pkgs.nix-gitignore</function> is a function that acts similarly to <literal>builtins.filterSource</literal> but also allows filtering with the help of the gitignore format.
</para>
<section xml:id="sec-pkgs-nix-gitignore-usage">
<title>Usage</title>
<para>
<literal>pkgs.nix-gitignore</literal> exports a number of functions, but you'll most likely need either <literal>gitignoreSource</literal> or <literal>gitignoreSourcePure</literal>. As their first argument, they both accept either 1. a file with gitignore lines or 2. a string with gitignore lines, or 3. a list of either of the two. They will be concatenated into a single big string.
</para>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
nix-gitignore.gitignoreSource [] ./source
# Simplest version
nix-gitignore.gitignoreSource "supplemental-ignores\n" ./source
# This one reads the ./source/.gitignore and concats the auxiliary ignores
nix-gitignore.gitignoreSourcePure "ignore-this\nignore-that\n" ./source
# Use this string as gitignore, don't read ./source/.gitignore.
nix-gitignore.gitignoreSourcePure ["ignore-this\nignore-that\n", ~/.gitignore] ./source
# It also accepts a list (of strings and paths) that will be concatenated
# once the paths are turned to strings via readFile.
]]></programlisting>
<para>
These functions are derived from the <literal>Filter</literal> functions by setting the first filter argument to <literal>(_: _: true)</literal>:
</para>
<programlisting><![CDATA[
gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true);
gitignoreSource = gitignoreFilterSource (_: _: true);
]]></programlisting>
<para>
Those filter functions accept the same arguments the <literal>builtins.filterSource</literal> function would pass to its filters, thus <literal>fn: gitignoreFilterSourcePure fn ""</literal> should be extensionally equivalent to <literal>filterSource</literal>. The file is blacklisted iff it's blacklisted by either your filter or the gitignoreFilter.
</para>
<para>
If you want to make your own filter from scratch, you may use
</para>
<programlisting><![CDATA[
gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
]]></programlisting>
</section>
<section xml:id="sec-pkgs-nix-gitignore-usage-recursive">
<title>gitignore files in subdirectories</title>
<para>
If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function:
</para>
<programlisting><![CDATA[
gitignoreFilterRecursiveSource = filter: patterns: root:
# OR
gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true);
]]></programlisting>
</section>
</section>

View File

@@ -1,17 +0,0 @@
# prefer-remote-fetch overlay {#sec-prefer-remote-fetch}
`prefer-remote-fetch` is an overlay that downloads sources on a remote builder. This is useful when the evaluating machine has a slow upload while the builder can fetch faster directly from the source. To use it, put the following snippet as a new overlay:
```nix
self: super:
(super.prefer-remote-fetch self super)
```
A full configuration example that sets the overlay up for your own account could look like this:
```ShellSession
$ mkdir ~/.config/nixpkgs/overlays/
$ cat > ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix <<EOF
self: super: super.prefer-remote-fetch self super
EOF
```

View File

@@ -0,0 +1,21 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/xinclude"
xml:id="sec-prefer-remote-fetch">
<title>prefer-remote-fetch overlay</title>
<para>
<function>prefer-remote-fetch</function> is an overlay that downloads sources on a remote builder. This is useful when the evaluating machine has a slow upload while the builder can fetch faster directly from the source. To use it, put the following snippet as a new overlay:
<programlisting>
self: super:
(super.prefer-remote-fetch self super)
</programlisting>
A full configuration example that sets the overlay up for your own account could look like this
<screen>
<prompt>$ </prompt>mkdir ~/.config/nixpkgs/overlays/
<prompt>$ </prompt>cat &gt; ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix &lt;&lt;EOF
self: super: super.prefer-remote-fetch self super
EOF
</screen>
</para>
</section>

View File

@@ -1,4 +0,0 @@
### Autoconf {#setup-hook-autoconf}
The `autoreconfHook` derivation adds `autoreconfPhase`, which runs autoreconf, libtoolize and automake, essentially preparing the configure script in autotools-based builds. Most autotools-based packages come with the configure script pre-generated, but this hook is necessary for a few packages and when you need to patch the package's configure scripts.
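A minimal usage sketch (the package details are placeholders); the hook only needs to be listed in `nativeBuildInputs`, assuming `stdenv` and `autoreconfHook` come from the package function's arguments:
```nix
stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.; # placeholder source containing configure.ac / Makefile.am
  # regenerates ./configure before the configure phase runs
  nativeBuildInputs = [ autoreconfHook ];
}
```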

View File

@@ -1,4 +0,0 @@
### Automake {#setup-hook-automake}
Adds the `share/aclocal` subdirectory of each build input to the `ACLOCAL_PATH` environment variable.

View File

@@ -1,12 +0,0 @@
### autoPatchelfHook {#setup-hook-autopatchelfhook}
This is a special setup hook which helps in packaging proprietary software in that it automatically tries to find missing shared library dependencies of ELF files based on the given `buildInputs` and `nativeBuildInputs`.
You can also specify a `runtimeDependencies` variable which lists dependencies to be unconditionally added to the rpath of all executables. This is useful for programs that use dlopen(3) to load libraries at runtime.
In certain situations you may want to run the main command (`autoPatchelf`) of the setup hook on a file or a set of directories instead of unconditionally patching all outputs. This can be done by setting the `dontAutoPatchelf` environment variable to a non-empty value.
By default `autoPatchelf` will fail as soon as any ELF file requires a dependency which cannot be resolved via the given build inputs. In some situations you might prefer to just leave missing dependencies unpatched and continue to patch the rest. This can be achieved by setting the `autoPatchelfIgnoreMissingDeps` environment variable to a non-empty value. `autoPatchelfIgnoreMissingDeps` can be set to a list like `autoPatchelfIgnoreMissingDeps = [ "libcuda.so.1" "libcudart.so.1" ];` or to simply `[ "*" ]` to ignore all missing dependencies.
The `autoPatchelf` command also recognizes a `--no-recurse` command line flag, which prevents it from recursing into subdirectories.
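A sketch of how these settings typically appear in a derivation for a prebuilt binary; `zlib`, the source tarball and the installed file are only illustrative, and `stdenv`, `autoPatchelfHook` and `zlib` are assumed to come from the package function's arguments:
```nix
stdenv.mkDerivation {
  pname = "example-binary";
  version = "1.0";
  src = ./example-binary.tar.gz; # placeholder prebuilt tarball

  nativeBuildInputs = [ autoPatchelfHook ];
  # libraries used to resolve missing shared library dependencies
  buildInputs = [ zlib ];
  # unconditionally added to the rpath of all executables
  runtimeDependencies = [ zlib ];
  # leave these unresolved instead of failing the build
  autoPatchelfIgnoreMissingDeps = [ "libcuda.so.1" ];

  installPhase = ''
    runHook preInstall
    install -Dm755 example $out/bin/example
    runHook postInstall
  '';
}
```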

View File

@@ -1,18 +0,0 @@
### breakpointHook {#breakpointhook}
This hook will make a build pause instead of stopping when a failure happens. It prevents nix from cleaning up the build environment immediately and allows the user to attach to a build environment using the `cntr` command. Upon build error it will print instructions on how to use `cntr`, which can be used to enter the environment for debugging. Installing cntr and running the command will provide shell access to the build sandbox of the failed build. At `/var/lib/cntr` the sandboxed filesystem is mounted. All commands and files of the system are still accessible within the shell. To execute commands from the sandbox, use the `cntr exec` subcommand. `cntr` is only supported on Linux-based platforms. To use it, first add `cntr` to your `environment.systemPackages` on NixOS or alternatively to the root user on non-NixOS systems. Then in the package that is supposed to be inspected, add `breakpointHook` to `nativeBuildInputs`.
```nix
nativeBuildInputs = [ breakpointHook ];
```
When a build failure happens there will be an instruction printed that shows how to attach with `cntr` to the build sandbox.
::: {.note}
::: {.title}
Caution with remote builds
:::
This won't work with remote builds as the build environment is on a different machine and can't be accessed by `cntr`. Remote builds can be turned off by setting `--option builders ''` for `nix-build` or `--builders ''` for `nix build`.
:::
