Compare commits

1 Commit

Author: Domen Kožar
SHA1: 13a7557079
Message: patchelf: 0.9 -> 0.11
Date: 2020-06-09 16:29:20 +02:00
32053 changed files with 800082 additions and 1967268 deletions

@@ -43,45 +43,32 @@ indent_style = space
# Disable file types or individual files
# some of these files may be auto-generated and/or require significant changes
[*.{c,h}]
insert_final_newline = unset
trim_trailing_whitespace = unset
[*.{asc,key,ovpn}]
insert_final_newline = unset
end_of_line = unset
trim_trailing_whitespace = unset
[*.lock]
indent_size = unset
# trailing whitespace is an actual syntax element of classic Markdown/
# CommonMark to enforce a line break
[*.md]
trim_trailing_whitespace = unset
[eggs.nix]
trim_trailing_whitespace = unset
[nixos/modules/services/networking/ircd-hybrid/*.{conf,in}]
trim_trailing_whitespace = unset
[pkgs/build-support/dotnetenv/Wrapper/**]
end_of_line = unset
indent_style = unset
[gemset.nix]
insert_final_newline = unset
trim_trailing_whitespace = unset
[pkgs/development/compilers/elm/registry.dat]
end_of_line = unset
insert_final_newline = unset
[pkgs/applications/editors/emacs-modes/recipes-archive-melpa.json]
indent_size = unset
[pkgs/development/lisp-modules/quicklisp-to-nix.nix]
indent_size = unset
[pkgs/development/haskell-modules/hackage-packages.nix]
indent_style = unset
indent_size = unset
trim_trailing_whitespace = unset
[pkgs/development/mobile/androidenv/generated/{addons,packages}.nix]
trim_trailing_whitespace = unset
[pkgs/development/node-packages/node-packages.nix]
insert_final_newline = unset
[pkgs/servers/dict/wordnet_structures.py]
indent_size = unset
trim_trailing_whitespace = unset
[pkgs/tools/misc/timidity/timidity.cfg]
trim_trailing_whitespace = unset
[pkgs/top-level/perl-packages.nix]
indent_size = unset

@@ -1,38 +0,0 @@
# This file contains a list of commits that are not likely what you
# are looking for in a blame, such as mass reformatting or renaming.
# You can set this file as a default ignore file for blame by running
# the following command.
#
# $ git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# To temporarily not use this file add
# --ignore-revs-file=""
# to your blame command.
#
# The ignoreRevsFile can't be set globally due to blame failing if the file isn't present.
# To not have to set the option in every repository it is needed in,
# save the following script in your path with the name "git-bblame"
# now you can run
# $ git bblame $FILE
# to use the .git-blame-ignore-revs file if it is present.
#
# #!/usr/bin/env bash
# repo_root=$(git rev-parse --show-toplevel)
# if [[ -e $repo_root/.git-blame-ignore-revs ]]; then
# git blame --ignore-revs-file="$repo_root/.git-blame-ignore-revs" $@
# else
# git blame $@
# fi
# nixos/modules/rename: Sort alphabetically
1f71224fe86605ef4cd23ed327b3da7882dad382
# manual: fix typos
feddd5e7f8c6f8167b48a077fa2a5394dc008999
# nixos: fix module paths in rename.nix
d08ede042b74b8199dc748323768227b88efcf7c
# fix indentation in mk-python-derivation.nix
d1c1a0c656ccd8bd3b25d3c4287f2d075faf3cf3
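As a usage illustration of the setup described in the comments above (the install path `~/.local/bin` and the blamed file are placeholders; any directory on `PATH` and any tracked file work):
```console
# Per repository: make git blame skip the revisions listed in this file
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Or install the wrapper script from the comments above as "git-bblame" on your PATH,
# then git will dispatch "git bblame" to it in any repository
install -m 0755 git-bblame ~/.local/bin/git-bblame
git bblame pkgs/top-level/all-packages.nix
```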

.github/CODEOWNERS

@@ -6,54 +6,34 @@
#
# For documentation on this file, see https://help.github.com/articles/about-codeowners/
# Mentioned users will get code review requests.
#
# IMPORTANT NOTE: in order to actually get pinged, commit access is required.
# This also holds true for GitHub teams. Since almost none of our teams have write
# permissions, you need to list all members of the team with commit access individually.
# This file
/.github/CODEOWNERS @edolstra
# GitHub actions
/.github/workflows @NixOS/Security @Mic92 @zowoq
/.github/workflows/merge-staging @FRidh
# EditorConfig
/.editorconfig @Mic92 @zowoq
# Libraries
/lib @edolstra @nbp @infinisil
/lib/systems @alyssais @nbp @ericson2314 @matthewbauer
/lib/systems @nbp @ericson2314 @matthewbauer
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/cli.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
/lib/asserts.nix @edolstra @nbp @Profpatsch
# Nixpkgs Internals
/default.nix @nbp
/pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314 @matthewbauer
/pkgs/top-level/splice.nix @Ericson2314 @matthewbauer
/pkgs/top-level/release-cross.nix @Ericson2314 @matthewbauer
/pkgs/stdenv/generic @Ericson2314 @matthewbauer
/pkgs/stdenv/cross @Ericson2314 @matthewbauer
/pkgs/build-support/cc-wrapper @Ericson2314
/pkgs/build-support/bintools-wrapper @Ericson2314
/pkgs/build-support/setup-hooks @Ericson2314
/pkgs/build-support/setup-hooks/auto-patchelf.sh @layus
/pkgs/build-support/setup-hooks/auto-patchelf.py @layus
/default.nix @nbp
/pkgs/top-level/default.nix @nbp @Ericson2314
/pkgs/top-level/impure.nix @nbp @Ericson2314
/pkgs/top-level/stage.nix @nbp @Ericson2314 @matthewbauer
/pkgs/top-level/splice.nix @Ericson2314 @matthewbauer
/pkgs/top-level/release-cross.nix @Ericson2314 @matthewbauer
/pkgs/stdenv/generic @Ericson2314 @matthewbauer
/pkgs/stdenv/cross @Ericson2314 @matthewbauer
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/setup-hooks @Ericson2314
# Nixpkgs build-support
/pkgs/build-support/writers @lassulus @Profpatsch
# Nixpkgs documentation
/doc @fricklerhandwerk
/maintainers/scripts/db-to-md.sh @jtojnar @ryantm
/maintainers/scripts/doc @jtojnar @ryantm
/doc/build-aux/pandoc-filters @jtojnar
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar
# NixOS Internals
/nixos/default.nix @nbp @infinisil
/nixos/lib/from-env.nix @nbp @infinisil
@@ -71,17 +51,10 @@
/nixos/doc/manual/development/writing-modules.xml @nbp
/nixos/doc/manual/man-nixos-option.xml @nbp
/nixos/modules/installer/tools/nixos-option.sh @nbp
/nixos/modules/system @dasJ
# NixOS integration test driver
/nixos/lib/test-driver @tfc
# Systemd
/nixos/modules/system/boot/systemd.nix @NixOS/systemd
/nixos/modules/system/boot/systemd @NixOS/systemd
/nixos/lib/systemd-*.nix @NixOS/systemd
/pkgs/os-specific/linux/systemd @NixOS/systemd
# Updaters
## update.nix
/maintainers/scripts/update.nix @jtojnar
@@ -90,40 +63,39 @@
/pkgs/common-updater/scripts/update-source-version @jtojnar
# Python-related code and docs
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
/pkgs/development/tools/poetry2nix @adisbladis
/pkgs/development/interpreters/python/hooks @FRidh @jonringer
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
# Haskell
/doc/languages-frameworks/haskell.section.md @cdepillabout @sternenseemann @maralorn
/maintainers/scripts/haskell @cdepillabout @sternenseemann @maralorn
/pkgs/development/compilers/ghc @cdepillabout @sternenseemann @maralorn
/pkgs/development/haskell-modules @cdepillabout @sternenseemann @maralorn
/pkgs/test/haskell @cdepillabout @sternenseemann @maralorn
/pkgs/top-level/release-haskell.nix @cdepillabout @sternenseemann @maralorn
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn
/pkgs/development/compilers/ghc @cdepillabout
/pkgs/development/haskell-modules @cdepillabout
/pkgs/development/haskell-modules/default.nix @cdepillabout
/pkgs/development/haskell-modules/generic-builder.nix @cdepillabout
/pkgs/development/haskell-modules/hoogle.nix @cdepillabout
# Perl
/pkgs/development/interpreters/perl @stigtsp @zakame
/pkgs/top-level/perl-packages.nix @stigtsp @zakame
/pkgs/development/perl-modules @stigtsp @zakame
/pkgs/development/interpreters/perl @volth
/pkgs/top-level/perl-packages.nix @volth
/pkgs/development/perl-modules @volth
# R
/pkgs/applications/science/math/R @jbedo
/pkgs/development/r-modules @jbedo
/pkgs/applications/science/math/R @peti
/pkgs/development/r-modules @peti
# Ruby
/pkgs/development/interpreters/ruby @marsam
/pkgs/development/ruby-modules @marsam
/pkgs/development/interpreters/ruby @alyssais
/pkgs/development/ruby-modules @alyssais
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq
/pkgs/build-support/rust @zowoq
/doc/languages-frameworks/rust.section.md @zowoq
/pkgs/development/compilers/rust @Mic92 @LnL7
/pkgs/build-support/rust @andir
# Darwin-related
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
/pkgs/os-specific/darwin @NixOS/darwin-maintainers
# C compilers
/pkgs/development/compilers/gcc @matthewbauer
@@ -133,18 +105,21 @@
/pkgs/top-level/unix-tools.nix @matthewbauer
/pkgs/development/tools/xcbuild @matthewbauer
# Audio
/nixos/modules/services/audio/botamusique.nix @mweinelt
/nixos/modules/services/audio/snapserver.nix @mweinelt
/nixos/tests/modules/services/audio/botamusique.nix @mweinelt
/nixos/tests/snapcast.nix @mweinelt
# Browsers
/pkgs/applications/networking/browsers/firefox @mweinelt
# Beam-related (Erlang, Elixir, LFE, etc)
/pkgs/development/beam-modules @gleber
/pkgs/development/interpreters/erlang @gleber
/pkgs/development/interpreters/lfe @gleber
/pkgs/development/interpreters/elixir @gleber
/pkgs/development/tools/build-managers/rebar @gleber
/pkgs/development/tools/build-managers/rebar3 @gleber
/pkgs/development/tools/erlang @gleber
# Jetbrains
/pkgs/applications/editors/jetbrains @edwtjo
# Eclipse
/pkgs/applications/editors/eclipse @rycee
# Licenses
/lib/licenses.nix @alyssais
@@ -155,7 +130,7 @@
/pkgs/development/libraries/qt-5 @ttuegel
# PostgreSQL and related stuff
/pkgs/servers/sql/postgresql @thoughtpolice @marsam
/pkgs/servers/sql/postgresql @thoughtpolice
/nixos/modules/services/databases/postgresql.xml @thoughtpolice
/nixos/modules/services/databases/postgresql.nix @thoughtpolice
/nixos/tests/postgresql.nix @thoughtpolice
@@ -168,39 +143,21 @@
/nixos/tests/hardened.nix @joachifm
/pkgs/os-specific/linux/kernel/hardened-config.nix @joachifm
# Home Automation
/nixos/modules/services/misc/home-assistant.nix @mweinelt
/nixos/modules/services/misc/zigbee2mqtt.nix @mweinelt
/nixos/tests/home-assistant.nix @mweinelt
/nixos/tests/zigbee2mqtt.nix @mweinelt
/pkgs/servers/home-assistant @mweinelt
/pkgs/tools/misc/esphome @mweinelt
# Network Time Daemons
/pkgs/tools/networking/chrony @thoughtpolice
/pkgs/tools/networking/ntp @thoughtpolice
/pkgs/tools/networking/openntpd @thoughtpolice
/nixos/modules/services/networking/ntp @thoughtpolice
# Network
/pkgs/tools/networking/kea/default.nix @mweinelt
/pkgs/tools/networking/babeld/default.nix @mweinelt
/nixos/modules/services/networking/babeld.nix @mweinelt
/nixos/modules/services/networking/kea.nix @mweinelt
/nixos/modules/services/networking/knot.nix @mweinelt
/nixos/tests/babeld.nix @mweinelt
/nixos/tests/kea.nix @mweinelt
/nixos/tests/knot.nix @mweinelt
# Dhall
/pkgs/development/dhall-modules @Gabriella439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriella439 @Profpatsch @ehmry
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch
# Idris
/pkgs/development/idris-modules @Infinisil
# Bazel
/pkgs/development/tools/build-managers/bazel @Profpatsch
/pkgs/development/tools/build-managers/bazel @mboes @Profpatsch
# NixOS modules for e-mail and dns services
/nixos/modules/services/mail/mailman.nix @peti
@@ -209,18 +166,15 @@
/nixos/modules/services/mail/rspamd.nix @peti
# Emacs
/pkgs/applications/editors/emacs/elisp-packages @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
# Neovim
/pkgs/applications/editors/neovim @jonringer @teto
/pkgs/applications/editors/emacs-modes @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
# VimPlugins
/pkgs/applications/editors/vim/plugins @jonringer
/pkgs/misc/vim-plugins @jonringer @softinio
# VsCode Extensions
/pkgs/applications/editors/vscode/extensions @jonringer
/pkgs/misc/vscode-extensions @jonringer
# Prometheus exporter modules and tests
/nixos/modules/services/monitoring/prometheus/exporters.nix @WilliButz
@@ -228,64 +182,14 @@
/nixos/tests/prometheus-exporters.nix @WilliButz
# PHP interpreter, packages, extensions, tests and documentation
/doc/languages-frameworks/php.section.md @aanderse @etu @globin @ma27 @talyz
/nixos/tests/php @aanderse @etu @globin @ma27 @talyz
/pkgs/build-support/build-pecl.nix @aanderse @etu @globin @ma27 @talyz
/pkgs/development/interpreters/php @jtojnar @aanderse @etu @globin @ma27 @talyz
/pkgs/development/php-packages @aanderse @etu @globin @ma27 @talyz
/pkgs/top-level/php-packages.nix @jtojnar @aanderse @etu @globin @ma27 @talyz
/doc/languages-frameworks/php.section.md @NixOS/php
/nixos/tests/php @NixOS/php
/pkgs/build-support/build-pecl.nix @NixOS/php
/pkgs/development/interpreters/php @NixOS/php
/pkgs/top-level/php-packages.nix @NixOS/php
# Podman, CRI-O modules and related
/nixos/modules/virtualisation/containers.nix @zowoq @adisbladis
/nixos/modules/virtualisation/cri-o.nix @zowoq @adisbladis
/nixos/modules/virtualisation/podman @zowoq @adisbladis
/nixos/tests/cri-o.nix @zowoq @adisbladis
/nixos/tests/podman @zowoq @adisbladis
# Docker tools
/pkgs/build-support/docker @roberth
/nixos/tests/docker-tools* @roberth
/doc/builders/images/dockertools.section.md @roberth
# Blockchains
/pkgs/applications/blockchains @mmahut @RaghavSood
# Go
/doc/languages-frameworks/go.section.md @kalbasit @Mic92 @zowoq
/pkgs/build-support/go @kalbasit @Mic92 @zowoq
/pkgs/development/compilers/go @kalbasit @Mic92 @zowoq
# GNOME
/pkgs/desktops/gnome @jtojnar
/pkgs/desktops/gnome/extensions @piegamesde @jtojnar
# Cinnamon
/pkgs/desktops/cinnamon @mkg20001
# nim
/pkgs/development/compilers/nim @ehmry
/pkgs/development/nim-packages @ehmry
/pkgs/top-level/nim-packages.nix @ehmry
# terraform providers
/pkgs/applications/networking/cluster/terraform-providers @zowoq
# kubernetes
/nixos/doc/manual/configuration/kubernetes.chapter.md @zowoq
/nixos/modules/services/cluster/kubernetes @zowoq
/nixos/tests/kubernetes @zowoq
/pkgs/applications/networking/cluster/kubernetes @zowoq
# Matrix
/pkgs/servers/heisenbridge @piegamesde
/pkgs/servers/matrix-conduit @piegamesde
/pkgs/servers/matrix-synapse/matrix-appservice-irc @piegamesde
/nixos/modules/services/misc/heisenbridge.nix @piegamesde
/nixos/modules/services/misc/matrix-appservice-irc.nix @piegamesde
/nixos/modules/services/misc/matrix-conduit.nix @piegamesde
/nixos/tests/matrix-appservice-irc.nix @piegamesde
/nixos/tests/matrix-conduit.nix @piegamesde
# Dotnet
/pkgs/build-support/dotnet @IvarWithoutBones
/pkgs/development/compilers/dotnet @IvarWithoutBones
/nixos/modules/virtualisation/containers.nix @NixOS/podman
/nixos/modules/virtualisation/cri-o.nix @NixOS/podman
/nixos/modules/virtualisation/podman.nix @NixOS/podman
/nixos/tests/podman.nix @NixOS/podman

.github/CONTRIBUTING.md

@@ -0,0 +1,63 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](../COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and filling out the template
## Submitting changes
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Additional information.)
```
For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message).
Examples:
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:
* Be capitalized.
* Not start with the package name.
* Not have a period at the end.
* `meta.license` must be set and fit the upstream license.
* If there is no upstream license, `meta.license` should default to `stdenv.lib.licenses.unfree`.
* `meta.maintainers` must be set.
See the nixpkgs manual for more details on [standard meta-attributes](https://nixos.org/nixpkgs/manual/#sec-standard-meta-attributes) and on how to [submit changes to nixpkgs](https://nixos.org/nixpkgs/manual/#chap-submitting-changes).
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information can usually be found by digging through code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and the like, a one-line commit message is usually sufficient.
## Backporting changes
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
1. Take note of the commits in which the change was introduced into `master` branch.
2. Check out the target _release branch_, e.g. `release-20.03`. Do not use a _channel branch_ like `nixos-20.03` or `nixpkgs-20.03`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`. That's fine for minor version updates that only include security and bug fixes, commits that fix an otherwise broken package, or similar.
5. Push to GitHub and open a backport pull request (see the sketch after this list). Make sure to select the release branch (e.g. `release-20.03`) as the target branch of the pull request, and link to the pull request in which the original change was committed to `master`. The pull request title should be the commit title with the release version as prefix, e.g. `[20.03]`.
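A minimal console sketch of the steps above, assuming the original change is commit `abc1234` on `master` (a placeholder hash), `upstream` points at NixOS/nixpkgs and `origin` at your fork:
```console
# Steps 2/3: check out the release branch (not a channel branch) and create a working branch
git fetch upstream
git checkout -b backport upstream/release-20.03

# Step 4: cherry-pick the original commit; -x records the source hash, add -e to explain why
git cherry-pick -xe abc1234

# Step 5: push and open a pull request targeting release-20.03
git push origin backport
```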
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).

@@ -7,34 +7,37 @@ assignees: ''
---
### Describe the bug
**Describe the bug**
A clear and concise description of what the bug is.
### Steps To Reproduce
**To Reproduce**
Steps to reproduce the behavior:
1. ...
2. ...
3. ...
### Expected behavior
**Expected behavior**
A clear and concise description of what you expected to happen.
### Screenshots
**Screenshots**
If applicable, add screenshots to help explain your problem.
### Additional context
**Additional context**
Add any other context about the problem here.
### Notify maintainers
**Notify maintainers**
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
### Metadata
**Metadata**
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```
Maintainer information:
```yaml
# a list of nixpkgs attributes affected by the problem
attribute:
# a list of nixos modules affected by the problem
module:
```

@@ -1,34 +0,0 @@
---
name: Build failure
about: Create a report to help us improve
title: ''
labels: '0.kind: build failure'
assignees: ''
---
### Steps To Reproduce
Steps to reproduce the behavior:
1. build *X*
### Build log
```
log here if short otherwise a link to a gist
```
### Additional context
Add any other context about the problem here.
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

@@ -1,48 +0,0 @@
---
name: Out-of-date package reports
about: For packages that are out-of-date
title: ''
labels: '9.needs: package (update)'
assignees: ''
---
###### Checklist
<!-- Note that these are hard requirements -->
<!--
You can use the "Go to file" functionality on GitHub to find the package
Then you can go to the history for this package
Find the latest "package_name: old_version -> new_version" commit
The "new_version" is the current version of the package
-->
- [ ] Checked the [nixpkgs master branch](https://github.com/NixOS/nixpkgs)
<!--
Type the name of your package and try to find an open pull request for the package
If you find an open pull request, you can review it!
There's a high chance that you'll have the new version right away while helping the community!
-->
- [ ] Checked the [nixpkgs pull requests](https://github.com/NixOS/nixpkgs/pulls)
###### Project name
`nix search` name:
<!--
The current version can be found easily with the same process as above for checking the master branch
If an open PR is present for the package, take this version as the current one and link to the PR
-->
current version:
desired version:
###### Notify maintainers
<!--
Search your package here: https://search.nixos.org/packages?channel=unstable
If no maintainer is listed for your package, tag the person that last updated the package
-->
maintainers:
###### Note for maintainers
Please tag this issue in your PR.

@@ -1,41 +1,28 @@
###### Description of changes
<!--
For package updates please link to a changelog or describe changes, this helps your fellow maintainers discover breaking updates.
For new packages please briefly describe the package or provide a link to its homepage.
-->
###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- Built on platform(s)
- [ ] x86_64-linux
- [ ] aarch64-linux
- [ ] x86_64-darwin
- [ ] aarch64-darwin
- [ ] For non-Linux: Is `sandbox = true` set in `nix.conf`? (See [Nix manual](https://nixos.org/manual/nix/stable/command-ref/conf-file.html))
- [ ] Tested, as applicable:
- [NixOS test(s)](https://nixos.org/manual/nixos/unstable/index.html#sec-nixos-tests) (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- and/or [package tests](https://nixos.org/manual/nixpkgs/unstable/#sec-package-tests)
- or, for functions and "core" functionality, tests in [lib/tests](https://github.com/NixOS/nixpkgs/blob/master/lib/tests) or [pkgs/test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/test)
- made sure NixOS tests are [linked](https://nixos.org/manual/nixpkgs/unstable/#ssec-nixos-tests-linking) to the relevant packages
- [ ] Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"`. Note: all changes have to be committed, also see [nixpkgs-review usage](https://github.com/Mic92/nixpkgs-review#usage)
- [ ] Tested basic functionality of all binary files (usually in `./result/bin/`)
- [22.11 Release Notes (or backporting 22.05 Release notes)](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#generating-2211-release-notes)
- [ ] (Package updates) Added a release notes entry if the change is major or breaking
- [ ] (Module updates) Added a release notes entry if the change is significant
- [ ] (Module addition) Added a release notes entry if adding a new NixOS module
- [ ] (Release notes changes) Ran `nixos/doc/manual/md-to-db.sh` to update generated release notes
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md).
<!--
To help with the large number of pull requests, we would appreciate your
reviews of other pull requests, especially simple package updates. Just leave a
comment describing what you have tested in the relevant package/service.
Reviewing helps to reduce the average time-to-merge for everyone.
Thanks a lot if you do!
List of open PRs: https://github.com/NixOS/nixpkgs/pulls
Reviewing guidelines: https://nixos.org/manual/nixpkgs/unstable/#chap-reviewing-contributions
Reviewing guidelines: https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#chap-reviewing-contributions
-->
###### Motivation for this change
###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- [ ] Tested using sandboxing ([nix.useSandbox](https://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](https://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
- Built on platform(s)
- [ ] NixOS
- [ ] macOS
- [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)
- [ ] Ensured that relevant documentation is up to date
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

.github/STALE-BOT.md

@@ -1,36 +0,0 @@
# Stale bot information
- Thanks for your contribution!
- Our stale bot will never close an issue or PR.
- To remove the stale label, just leave a new comment.
- _How to find the right people to ping?_ &rarr; [`git blame`](https://git-scm.com/docs/git-blame) to the rescue! (or GitHub's history and blame buttons.)
- You can always ask for help on [our Discourse Forum](https://discourse.nixos.org/), [our Matrix room](https://matrix.to/#/#nix:nixos.org), or on the [#nixos IRC channel](https://web.libera.chat/#nixos).
## Suggestions for PRs
1. GitHub sometimes doesn't notify people who previously commented on or reviewed a PR when you (force) push commits. If you have addressed the reviews, you can [officially ask for a review](https://docs.github.com/en/free-pro-team@latest/github/collaborating-with-issues-and-pull-requests/requesting-a-pull-request-review) from those who commented or from anyone else.
2. If it is unfinished but you plan to finish it, please mark it as a draft.
3. If you don't expect to work on it any time soon, closing it with a short comment may encourage someone else to pick up your work.
4. To get things rolling again, rebase the PR against the target branch and address valid comments.
5. If you need a review to move forward, ask in [the Discourse thread for PRs that need help](https://discourse.nixos.org/t/prs-in-distress/3604).
6. If all you need is a merge, check the git history to find and [request reviews](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/requesting-a-pull-request-review) from people who usually merge related contributions.
## Suggestions for issues
1. If it is resolved (either for you personally, or in general), please consider closing it.
2. If this might still be an issue, but you are not interested in promoting its resolution, please consider closing it while encouraging others to take over and reopen an issue if they care enough.
3. If you still have interest in resolving it, try to ping somebody who you believe might have an interest in the topic. Consider discussing the problem in [our Discourse Forum](https://discourse.nixos.org/).
4. As with all open source projects, your best option is to submit a Pull Request that addresses this issue. We :heart: this attitude!
**Memorandum on closing issues**
Don't be afraid to close an issue that holds valuable information. Closed issues stay in the system for people to search, read, cross-reference, or even reopen--nothing is lost! Closing obsolete issues is an important way to help maintainers focus their time and effort.
## Useful GitHub search queries
- [Open PRs with any stale-bot interaction](https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+commenter%3Aapp%2Fstale+)
- [Open PRs with any stale-bot interaction and `2.status: stale`](https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+commenter%3Aapp%2Fstale+label%3A%222.status%3A+stale%22)
- [Open PRs with any stale-bot interaction and NOT `2.status: stale`](https://github.com/NixOS/nixpkgs/pulls?q=is%3Apr+is%3Aopen+commenter%3Aapp%2Fstale+-label%3A%222.status%3A+stale%22+)
- [Open Issues with any stale-bot interaction](https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+commenter%3Aapp%2Fstale+)
- [Open Issues with any stale-bot interaction and `2.status: stale`](https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+commenter%3Aapp%2Fstale+label%3A%222.status%3A+stale%22+)
- [Open Issues with any stale-bot interaction and NOT `2.status: stale`](https://github.com/NixOS/nixpkgs/issues?q=is%3Aissue+is%3Aopen+commenter%3Aapp%2Fstale+-label%3A%222.status%3A+stale%22+)

.github/labeler.yml

@@ -1,158 +0,0 @@
"6.topic: agda":
- doc/languages-frameworks/agda.section.md
- nixos/tests/agda.nix
- pkgs/build-support/agda/**/*
- pkgs/development/libraries/agda/**/*
- pkgs/top-level/agda-packages.nix
"6.topic: cinnamon":
- pkgs/desktops/cinnamon/**/*
"6.topic: emacs":
- nixos/modules/services/editors/emacs.nix
- nixos/modules/services/editors/emacs.xml
- nixos/tests/emacs-daemon.nix
- pkgs/applications/editors/emacs/elisp-packages/**/*
- pkgs/applications/editors/emacs/**/*
- pkgs/build-support/emacs/**/*
- pkgs/top-level/emacs-packages.nix
"6.topic: erlang":
- doc/languages-frameworks/beam.section.md
- pkgs/development/beam-modules/**/*
- pkgs/development/interpreters/elixir/**/*
- pkgs/development/interpreters/erlang/**/*
- pkgs/development/tools/build-managers/rebar/**/*
- pkgs/development/tools/build-managers/rebar3/**/*
- pkgs/development/tools/erlang/**/*
- pkgs/top-level/beam-packages.nix
"6.topic: fetch":
- pkgs/build-support/fetch*/**/*
"6.topic: GNOME":
- doc/languages-frameworks/gnome.section.md
- nixos/modules/services/desktops/gnome/**/*
- nixos/modules/services/x11/desktop-managers/gnome.nix
- nixos/tests/gnome-xorg.nix
- nixos/tests/gnome.nix
- pkgs/desktops/gnome/**/*
"6.topic: golang":
- doc/languages-frameworks/go.section.md
- pkgs/build-support/go/**/*
- pkgs/development/compilers/go/**/*
"6.topic: haskell":
- doc/languages-frameworks/haskell.section.md
- maintainers/scripts/haskell/**/*
- pkgs/development/compilers/ghc/**/*
- pkgs/development/haskell-modules/**/*
- pkgs/development/tools/haskell/**/*
- pkgs/test/haskell/**/*
- pkgs/top-level/haskell-packages.nix
- pkgs/top-level/release-haskell.nix
"6.topic: kernel":
- pkgs/build-support/kernel/**/*
- pkgs/os-specific/linux/kernel/**/*
"6.topic: lua":
- pkgs/development/interpreters/lua-5/**/*
- pkgs/development/interpreters/luajit/**/*
- pkgs/development/lua-modules/**/*
- pkgs/top-level/lua-packages.nix
"6.topic: nixos":
- nixos/**/*
- pkgs/os-specific/linux/nixos-rebuild/**/*
"6.topic: nim":
- doc/languages-frameworks/nim.section.md
- pkgs/development/compilers/nim/*
- pkgs/development/nim-packages/**/*
- pkgs/top-level/nim-packages.nix
"6.topic: ocaml":
- doc/languages-frameworks/ocaml.section.md
- pkgs/development/compilers/ocaml/**/*
- pkgs/development/compilers/reason/**/*
- pkgs/development/ocaml-modules/**/*
- pkgs/development/tools/ocaml/**/*
- pkgs/top-level/ocaml-packages.nix
"6.topic: pantheon":
- nixos/modules/services/desktops/pantheon/**/*
- nixos/modules/services/x11/desktop-managers/pantheon.nix
- nixos/modules/services/x11/display-managers/lightdm-greeters/pantheon.nix
- nixos/tests/pantheon.nix
- pkgs/desktops/pantheon/**/*
"6.topic: policy discussion":
- .github/**/*
"6.topic: printing":
- nixos/modules/services/printing/cupsd.nix
- pkgs/misc/cups/**/*
"6.topic: python":
- doc/languages-frameworks/python.section.md
- pkgs/development/interpreters/python/**/*
- pkgs/development/python-modules/**/*
- pkgs/top-level/python-packages.nix
"6.topic: qt/kde":
- doc/languages-frameworks/qt.section.md
- nixos/modules/services/x11/desktop-managers/plasma5.nix
- nixos/tests/plasma5.nix
- pkgs/applications/kde/**/*
- pkgs/desktops/plasma-5/**/*
- pkgs/development/libraries/kde-frameworks/**/*
- pkgs/development/libraries/qt-5/**/*
"6.topic: ruby":
- doc/languages-frameworks/ruby.section.md
- pkgs/development/interpreters/ruby/**/*
- pkgs/development/ruby-modules/**/*
"6.topic: rust":
- doc/languages-frameworks/rust.section.md
- pkgs/build-support/rust/**/*
- pkgs/development/compilers/rust/**/*
"6.topic: stdenv":
- pkgs/stdenv/**/*
"6.topic: steam":
- pkgs/games/steam/**/*
"6.topic: systemd":
- pkgs/os-specific/linux/systemd/**/*
- nixos/modules/system/boot/systemd*/**/*
"6.topic: TeX":
- doc/languages-frameworks/texlive.section.md
- pkgs/tools/typesetting/tex/**/*
"6.topic: vim":
- doc/languages-frameworks/vim.section.md
- pkgs/applications/editors/vim/**/*
- pkgs/applications/editors/vim/plugins/**/*
- nixos/modules/programs/neovim.nix
- pkgs/applications/editors/neovim/**/*
"6.topic: xfce":
- nixos/doc/manual/configuration/xfce.xml
- nixos/modules/services/x11/desktop-managers/xfce.nix
- nixos/tests/xfce.nix
- pkgs/desktops/xfce/**/*
"8.has: changelog":
- nixos/doc/manual/release-notes/**/*
"8.has: documentation":
- doc/**/*
- nixos/doc/**/*
"8.has: module (update)":
- nixos/modules/**/*

.github/stale.yml

@@ -1,9 +1,25 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an issue becomes stale
daysUntilStale: 180
# Number of days of inactivity before a stale issue is closed
daysUntilClose: false
# Issues with these labels will never be considered stale
exemptLabels:
- "1.severity: security"
- "2.status: never-stale"
# Label to use when marking an issue as stale
staleLabel: "2.status: stale"
markComment: false
# Comment to post when marking an issue as stale. Set to `false` to disable
markComment: |
Thank you for your contributions.
This has been automatically marked as stale because it has had no activity for 180 days.
If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity.
Here are suggestions that might help resolve this more quickly:
1. Search for maintainers and people that previously touched the related code and @ mention them in a comment.
2. Ask on the [NixOS Discourse](https://discourse.nixos.org/).
3. Ask on the [#nixos channel](irc://irc.freenode.net/#nixos) on [irc.freenode.net](https://freenode.net).
# Comment to post when closing a stale issue. Set to `false` to disable
closeComment: false

@@ -1,41 +0,0 @@
name: Backport
on:
pull_request_target:
types: [closed, labeled]
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows write access to
# the GitHub repository. This means that it should not evaluate user input in a
# way that allows code injection.
permissions:
contents: read
jobs:
backport:
permissions:
contents: write # for zeebe-io/backport-action to create branch
pull-requests: write # for zeebe-io/backport-action to create PR to backport
name: Backport Pull Request
if: github.repository_owner == 'NixOS' && github.event.pull_request.merged == true && (github.event_name != 'labeled' || startsWith('backport', github.event.label.name))
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# required to find all branches
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- name: Create backport PRs
# should be kept in sync with `version`
uses: zeebe-io/backport-action@v0.0.5
with:
# Config README: https://github.com/zeebe-io/backport-action#backport-action
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
# should be kept in sync with `uses`
version: v0.0.5
pull_description: |-
Bot-based backport to `${target_branch}`, triggered by a label in #${pull_number}.
* [ ] Before merging, ensure that this backport complies with the [Criteria for Backporting](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#criteria-for-backporting-changes).
* Even as a non-committer, if you find that it does not comply, leave a comment.

@@ -1,29 +0,0 @@
name: Basic evaluation checks
on:
workflow_dispatch
# pull_request:
# branches:
# - master
# - release-**
# push:
# branches:
# - master
# - release-**
permissions:
contents: read
jobs:
tests:
runs-on: ubuntu-latest
# we don't limit this action to the NixOS repo only, since the checks are cheap and provide useful developer feedback
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v17
- uses: cachix/cachix-action@v10
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
# explicit list of supportedSystems is needed until aarch64-darwin becomes part of the trunk jobset
- run: nix-build pkgs/top-level/release.nix -A tarball.nixpkgs-basic-release-checks --arg supportedSystems '[ "aarch64-darwin" "aarch64-linux" "x86_64-linux" "x86_64-darwin" ]'

@@ -1,37 +0,0 @@
name: "Direct Push Warning"
on:
push:
branches:
- master
- release-**
permissions:
contents: read
jobs:
build:
permissions:
contents: write # for peter-evans/commit-comment to comment on commit
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
env:
GITHUB_SHA: ${{ github.sha }}
GITHUB_REPOSITORY: ${{ github.repository }}
steps:
- name: Check if commit is a merge commit
id: ismerge
run: |
ISMERGE=$(curl -H 'Accept: application/vnd.github.groot-preview+json' -H "authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" https://api.github.com/repos/${{ env.GITHUB_REPOSITORY }}/commits/${{ env.GITHUB_SHA }}/pulls | jq -r '.[] | select(.merge_commit_sha == "${{ env.GITHUB_SHA }}") | any')
echo "::set-output name=ismerge::$ISMERGE"
# github events are eventually consistent, so wait until changes propagate to their DB
- run: sleep 60
if: steps.ismerge.outputs.ismerge != 'true'
- name: Warn if the commit was a direct push
if: steps.ismerge.outputs.ismerge != 'true'
uses: peter-evans/commit-comment@v2
with:
body: |
@${{ github.actor }}, you pushed a commit directly to master/release branch
instead of going through a Pull Request.
That's highly discouraged beyond the few exceptions listed
on https://github.com/NixOS/nixpkgs/issues/118661

@@ -1,43 +0,0 @@
name: "Checking EditorConfig"
permissions: read-all
on:
# avoids approving first time contributors
pull_request_target:
branches-ignore:
- 'release-**'
jobs:
tests:
runs-on: ubuntu-latest
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip editorconfig]')"
steps:
- name: Get list of changed files from PR
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
gh api \
repos/NixOS/nixpkgs/pulls/${{github.event.number}}/files --paginate \
| jq '.[] | select(.status != "removed") | .filename' \
> "$HOME/changed_files"
- name: print list of changed files
run: |
cat "$HOME/changed_files"
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v17
with:
# nixpkgs commit is pinned so that it doesn't break
# editorconfig-checker 2.4.0
nix_path: nixpkgs=https://github.com/NixOS/nixpkgs/archive/c473cc8714710179df205b153f4e9fa007107ff9.tar.gz
- name: install editorconfig-checker
run: nix-env -iA editorconfig-checker -f '<nixpkgs>'
- name: Checking EditorConfig
run: |
cat "$HOME/changed_files" | xargs -r editorconfig-checker -disable-indent-size
- if: ${{ failure() }}
run: |
echo "::error :: Hey! It looks like your changes don't follow our editorconfig settings. Read https://editorconfig.org/#download to configure your editor so you never see this error again."

@@ -1,24 +0,0 @@
name: "Label PR"
on:
pull_request_target:
types: [edited, opened, synchronize, reopened]
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows some write
# access to the GitHub API. This means that it should not evaluate user input in
# a way that allows code injection.
permissions:
contents: read
pull-requests: write
jobs:
labels:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/labeler@v4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: true

@@ -1,31 +0,0 @@
name: "Build NixOS manual"
permissions: read-all
on:
pull_request_target:
branches:
- master
paths:
- 'nixos/**'
jobs:
nixos:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v17
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v10
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building NixOS manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true nixos/release.nix -A manual.x86_64-linux

@@ -1,31 +0,0 @@
name: "Build Nixpkgs manual"
permissions: read-all
on:
pull_request_target:
branches:
- master
paths:
- 'doc/**'
jobs:
nixpkgs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v17
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v10
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building Nixpkgs manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true pkgs/top-level/release.nix -A manual

@@ -1,34 +0,0 @@
name: NixOS manual checks
permissions: read-all
on:
pull_request_target:
branches-ignore:
- 'release-**'
paths:
- 'nixos/**/*.xml'
- 'nixos/**/*.md'
jobs:
tests:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v17
- name: Check DocBook files generated from Markdown are consistent
run: |
nixos/doc/manual/md-to-db.sh
git diff --exit-code || {
echo
echo 'Generated manual files are out of date.'
echo 'Please run'
echo
echo ' nixos/doc/manual/md-to-db.sh'
echo
exit 1
}

@@ -1,26 +0,0 @@
name: "No channel PR"
on:
pull_request:
branches:
- 'nixos-**'
- 'nixpkgs-**'
permissions:
contents: read
jobs:
fail:
permissions:
contents: none
name: "This PR is is targeting a channel branch"
runs-on: ubuntu-latest
steps:
- run: |
cat <<EOF
The nixos-* and nixpkgs-* branches are pushed to by the channel
release script and should not be merged into directly.
Please target the equivalent release-* branch or master instead.
EOF
exit 1

@@ -1,26 +0,0 @@
name: "clear pending status"
on:
check_suite:
types: [ completed ]
permissions:
contents: read
jobs:
action:
permissions:
statuses: write
runs-on: ubuntu-latest
steps:
- name: clear pending status
if: github.repository_owner == 'NixOS' && github.event.check_suite.app.name == 'OfBorg'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d '{"state": "success", "target_url": " ", "description": " ", "context": "Wait for ofborg"}' \
"https://api.github.com/repos/NixOS/nixpkgs/statuses/${{ github.event.check_suite.head_sha }}"

@@ -1,30 +0,0 @@
name: "set pending status"
on:
pull_request_target:
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows write access to
# the GitHub repository. This means that it should not evaluate user input in a
# way that allows code injection.
permissions:
contents: read
jobs:
action:
permissions:
statuses: write
runs-on: ubuntu-latest
steps:
- name: set pending status
if: github.repository_owner == 'NixOS'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d '{"state": "pending", "target_url": " ", "description": "This pending status will be cleared when ofborg starts eval.", "context": "Wait for ofborg"}' \
"https://api.github.com/repos/NixOS/nixpkgs/statuses/${{ github.event.pull_request.head.sha }}"

@@ -1,59 +0,0 @@
# This action periodically merges base branches into staging branches.
# This is done to
# * prevent conflicts or rather resolve them early
# * make all potential breakage happen on the staging branch
# * and make sure that all major rebuilds happen before the staging
# branch gets merged back into its base branch.
name: "Periodic Merges (24h)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 24 hours
- cron: '0 0 * * *'
permissions:
contents: read
jobs:
periodic-merge:
permissions:
contents: write # for devmasx/merge-branch to merge branches
issues: write # for peter-evans/create-or-update-comment to create or update comment
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
strategy:
# don't fail fast, so that all pairs are tried
fail-fast: false
# certain branches need to be merged in order, like master->staging-next->staging
# and disabling parallelism ensures the order of the pairs below.
max-parallel: 1
matrix:
pairs:
- from: master
into: haskell-updates
- from: release-22.05
into: staging-next-22.05
- from: staging-next-22.05
into: staging-22.05
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v3
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}
target_branch: ${{ matrix.pairs.into }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 105153
body: |
Periodic merge from `${{ matrix.pairs.from }}` into `${{ matrix.pairs.into }}` has [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -1,57 +0,0 @@
# This action periodically merges base branches into staging branches.
# This is done to
# * prevent conflicts or rather resolve them early
# * make all potential breakage happen on the staging branch
# * and make sure that all major rebuilds happen before the staging
# branch gets merged back into its base branch.
name: "Periodic Merges (6h)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 6 hours
- cron: '0 */6 * * *'
permissions:
contents: read
jobs:
periodic-merge:
permissions:
contents: write # for devmasx/merge-branch to merge branches
issues: write # for peter-evans/create-or-update-comment to create or update comment
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
strategy:
# don't fail fast, so that all pairs are tried
fail-fast: false
# certain branches need to be merged in order, like master->staging-next->staging
# and disabling parallelism ensures the order of the pairs below.
max-parallel: 1
matrix:
pairs:
- from: master
into: staging-next
- from: staging-next
into: staging
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v3
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}
target_branch: ${{ matrix.pairs.into }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 105153
body: |
Periodic merge from `${{ matrix.pairs.from }}` into `${{ matrix.pairs.into }}` has [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -1,55 +0,0 @@
name: "Update terraform-providers"
on:
schedule:
- cron: "14 3 * * 0"
workflow_dispatch:
permissions:
contents: read
jobs:
tf-providers:
permissions:
contents: write # for peter-evans/create-pull-request to create branch
issues: write # for peter-evans/create-or-update-comment to create or update comment
pull-requests: write # for peter-evans/create-pull-request to create a PR
if: github.repository_owner == 'NixOS' && github.ref == 'refs/heads/master' # ensure workflow_dispatch only runs on master
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v17
- name: setup
id: setup
run: |
echo ::set-output name=title::"terraform-providers: update $(date -u +"%Y-%m-%d")"
- name: update terraform-providers
run: |
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
pushd pkgs/applications/networking/cluster/terraform-providers
./update-all-providers --no-build
git commit -m "${{ steps.setup.outputs.title }}" providers.json
popd
- name: create PR
uses: peter-evans/create-pull-request@v4
with:
body: |
Automatic update by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
Check that all providers build with:
```
@ofborg build terraform.full
```
branch: terraform-providers-update
delete-branch: false
labels: "2.status: work-in-progress"
title: ${{ steps.setup.outputs.title }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 153416
body: |
Automatic update of terraform providers [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

.gitignore

@@ -2,21 +2,16 @@
,*
.*.swp
.*.swo
.idea/
.vscode/
outputs/
result
result-*
/doc/NEWS.html
/doc/NEWS.txt
/doc/manual.html
/doc/manual.pdf
/result
/source/
.version-suffix
.DS_Store
.mypy_cache
__pycache__
/pkgs/development/libraries/qt-5/*/tmp/
/pkgs/desktops/kde-5/*/tmp/
@@ -24,6 +19,3 @@ __pycache__
# generated by pkgs/common-updater/update-script.nix
update-git-commits.txt
# JetBrains IDEA module declaration file
/nixpkgs.iml

@@ -1 +1 @@
22.11
20.09

@@ -1,134 +0,0 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and filling out the template
## Submitting changes
Read the ["Submitting changes"](https://nixos.org/nixpkgs/manual/#chap-submitting-changes) section of the nixpkgs manual. It explains how to write, test, and iterate on your change, and which branch to base your pull request against.
Below is a short excerpt of some points in there:
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Link to release notes. Additional information.)
```
For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message).
Examples:
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
https://www.mozilla.org/en-US/firefox/55.0/releasenotes/
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:
* Be capitalized.
* Not start with the package name.
* Not have a period at the end.
* `meta.license` must be set and fit the upstream license.
* If there is no upstream license, `meta.license` should default to `lib.licenses.unfree`.
* `meta.maintainers` must be set.
See the nixpkgs manual for more details on [standard meta-attributes](https://nixos.org/nixpkgs/manual/#sec-standard-meta-attributes).
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information can usually be found by digging through code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and the like, a one-line commit message is usually sufficient.
## Rebasing between branches (i.e. from master to staging)
From time to time, changes between branches must be rebased, for example, if the
number of new rebuilds they would cause is too large for the target branch. When
rebasing, care must be taken to include only the intended changes, otherwise
many CODEOWNERS will be inadvertently requested for review. To achieve this,
rebasing should not be performed directly on the target branch, but on the merge
base between the current and target branch.
In the following example, we see a rebase from `master` onto the merge base
between `master` and `staging`, so that a change can eventually be retargeted to
`staging`. The example uses `upstream` as the remote for `NixOS/nixpkgs.git`
while the `origin` remote is used for the remote you are pushing to.
```console
# Find the common base between two branches
common=$(git merge-base upstream/master upstream/staging)
# Find the common base between your feature branch and master
commits=$(git merge-base $(git branch --show-current) upstream/master)
# Rebase all commits onto the common base
git rebase --onto=$common $commits
# Force push your changes
git push origin $(git branch --show-current) --force-with-lease
```
Then change the base branch in the GitHub PR using the *Edit* button in the upper
right corner, and switch from `master` to `staging`. After the PR has been
retargeted it might be necessary to do a final rebase onto the target branch, to
resolve any outstanding merge conflicts.
```console
# Rebase onto target branch
git rebase upstream/staging
# Review and fixup possible conflicts
git status
# Force push your changes
git push origin $(git branch --show-current) --force-with-lease
```
## Backporting changes
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
You can add a label such as `backport release-22.05` to a PR, so that merging it will
automatically create a backport (via [a GitHub Action](.github/workflows/backport.yml)).
This also works for PR's that have already been merged, and might take a couple of minutes to trigger.
You can also create the backport manually:
1. Take note of the commits in which the change was introduced into `master` branch.
2. Check out the target _release branch_, e.g. `release-22.05`. Do not use a _channel branch_ like `nixos-22.05` or `nixpkgs-22.05-darwin`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`. That's fine for minor version updates that only include security and bug fixes, commits that fix an otherwise broken package, or similar. Please also ensure the commits exist on the master branch; in the case of squashed or rebased merges, the commit hash will change and the new commits can be found in the merge message at the bottom of the master pull request.
5. Push to GitHub and open a backport pull request. Make sure to select the release branch (e.g. `release-22.05`) as the target branch of the pull request, and link to the pull request in which the original change was committed to `master`. The pull request title should be the commit title with the release version as prefix, e.g. `[22.05]`.
6. When the backport pull request is merged and you have the necessary privileges you can also replace the label `9.needs: port to stable` with `8.has: port to stable` on the original pull request. This way maintainers can keep track of missing backports easier.
## Criteria for Backporting changes
Anything that does not cause user or downstream dependency regressions can be backported. This includes:
- New Packages / Modules
- Security / Patch updates
- Version updates which include new functionality (but no breaking changes)
- Services which require a client to be up-to-date regardless. (E.g. `spotify`, `steam`, or `discord`)
- Security critical applications (E.g. `firefox`)
## Generating 22.11 Release Notes
Documentation in nixpkgs is transitioning to a markdown-centric workflow. Release notes now require a translation step to convert from markdown to a compatible docbook document.
Steps for updating 22.11 Release notes:
1. Edit `nixos/doc/manual/release-notes/rl-2211.section.md` with the desired changes
2. Run `./nixos/doc/manual/md-to-db.sh` to render `nixos/doc/manual/from_md/release-notes/rl-2211.section.xml`
3. Include changes to `rl-2211.section.md` and `rl-2211.section.xml` in the same commit.
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).

View File

@@ -1,4 +1,4 @@
Copyright (c) 2003-2022 Eelco Dolstra and the Nixpkgs/NixOS contributors
Copyright (c) 2003-2020 Eelco Dolstra and the Nixpkgs/NixOS contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the

View File

@@ -1,19 +1,14 @@
<p align="center">
<a href="https://nixos.org#gh-light-mode-only">
<img src="https://raw.githubusercontent.com/NixOS/nixos-homepage/master/logo/nixos-hires.png" width="500px" alt="NixOS logo"/>
</a>
<a href="https://nixos.org#gh-dark-mode-only">
<img src="https://raw.githubusercontent.com/NixOS/nixos-artwork/master/logo/nixos-white.png" width="500px" alt="NixOS logo"/>
</a>
<a href="https://nixos.org/nixos"><img src="https://nixos.org/logo/nixos-hires.png" width="500px" alt="NixOS logo" /></a>
</p>
<p align="center">
<a href="https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md"><img src="https://img.shields.io/github/contributors-anon/NixOS/nixpkgs" alt="Contributors badge" /></a>
<a href="https://opencollective.com/nixos"><img src="https://opencollective.com/nixos/tiers/supporter/badge.svg?label=supporters&color=brightgreen" alt="Open Collective supporters" /></a>
<a href="https://www.codetriage.com/nixos/nixpkgs"><img src="https://www.codetriage.com/nixos/nixpkgs/badges/users.svg" alt="Code Triagers badge" /></a>
<a href="https://opencollective.com/nixos"><img src="https://opencollective.com/nixos/tiers/supporter/badge.svg?label=Supporter&color=brightgreen" alt="Open Collective supporters" /></a>
</p>
[Nixpkgs](https://github.com/nixos/nixpkgs) is a collection of over
80,000 software packages that can be installed with the
40,000 software packages that can be installed with the
[Nix](https://nixos.org/nix/) package manager. It also implements
[NixOS](https://nixos.org/nixos/), a purely-functional Linux distribution.
@@ -26,10 +21,10 @@
# Community
* [Discourse Forum](https://discourse.nixos.org/)
* [Matrix Chat](https://matrix.to/#/#community:nixos.org)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)
* [NixOS Weekly](https://weekly.nixos.org/)
* [Community-maintained wiki](https://nixos.wiki/)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Telegram, IRC, etc.)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Matrix, Telegram, other IRC channels, etc.)
# Other Project Repositories
@@ -39,7 +34,6 @@ the main ones:
* [Nix](https://github.com/NixOS/nix) - the purely functional package manager
* [NixOps](https://github.com/NixOS/nixops) - the tool to remotely deploy NixOS machines
* [nixos-hardware](https://github.com/NixOS/nixos-hardware) - NixOS profiles to optimize settings for different hardware
* [Nix RFCs](https://github.com/NixOS/rfcs) - the formal process for making substantial changes to the community
* [NixOS homepage](https://github.com/NixOS/nixos-homepage) - the [NixOS.org](https://nixos.org) website
* [hydra](https://github.com/NixOS/hydra) - our continuous integration system
@@ -51,21 +45,21 @@ Nixpkgs and NixOS are built and tested by our continuous integration
system, [Hydra](https://hydra.nixos.org/).
* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for the NixOS 22.05 release](https://hydra.nixos.org/jobset/nixos/release-22.05)
* [Continuous package builds for the NixOS 20.03 release](https://hydra.nixos.org/jobset/nixos/release-20.03)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for the NixOS 22.05 release](https://hydra.nixos.org/job/nixos/release-22.05/tested#tabs-constituents)
* [Tests for the NixOS 20.03 release](https://hydra.nixos.org/job/nixos/release-20.03/tested#tabs-constituents)
Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
met, the Nixpkgs expressions are distributed via [Nix
channels](https://nixos.org/manual/nix/stable/package-management/channels.html).
channels](https://nixos.org/nix/manual/#sec-channels).
# Contributing
Nixpkgs is among the most active projects on GitHub. While thousands
of open issues and pull requests might seem a lot at first, it helps to
consider them in the context of the scope of the project. Nixpkgs
describes how to build tens of thousands of pieces of software and implements a
describes how to build over 40,000 pieces of software and implements a
Linux distribution. The [GitHub Insights](https://github.com/NixOS/nixpkgs/pulse)
page gives a sense of the project activity.
@@ -92,7 +86,7 @@ Most contributions are based on and merged into these branches:
deemed of sufficiently high quality
For more information about contributing to the project, please visit
the [contributing page](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md).
the [contributing page](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
# Donations
@@ -102,8 +96,7 @@ Foundation](https://nixos.org/nixos/foundation.html). To ensure the
continuity and expansion of the NixOS infrastructure, we are looking
for donations to our organization.
You can donate to the NixOS foundation through [SEPA bank
transfers](https://nixos.org/donate.html) or by using Open Collective:
You can donate to the NixOS foundation by using Open Collective:
<a href="https://opencollective.com/nixos#support"><img src="https://opencollective.com/nixos/tiers/supporter.svg?width=890" /></a>

View File

@@ -14,7 +14,7 @@ if ! builtins ? nixVersion || builtins.compareVersions requiredVersion builtins.
- If you installed Nix using the install script (https://nixos.org/nix/install),
it is safe to upgrade by running it again:
curl -L https://nixos.org/nix/install | sh
curl https://nixos.org/nix/install | sh
For more information, please see the NixOS release notes at
https://nixos.org/nixos/manual or locally at

View File

@@ -1,20 +1,4 @@
MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$' -not -name README.md)))
PANDOC ?= pandoc
pandoc_media_dir = media
# NOTE: Keep in sync with NixOS manual (/nixos/doc/manual/md-to-db.sh) and conversion script (/maintainers/scripts/db-to-md.sh).
# TODO: Remove raw-attribute when we can get rid of DocBook altogether.
pandoc_commonmark_enabled_extensions = +attributes+fenced_divs+footnotes+bracketed_spans+definition_lists+pipe_tables+raw_attribute
# Not needed:
# - docbook-reader/citerefentry-to-rst-role.lua (only relevant for DocBook → MarkDown/rST/MyST)
pandoc_flags = --extract-media=$(pandoc_media_dir) \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
--lua-filter=build-aux/pandoc-filters/myst-reader/roles.lua \
--lua-filter=build-aux/pandoc-filters/link-unix-man-references.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/labelless-link-is-xref.lua \
-f commonmark$(pandoc_commonmark_enabled_extensions)+smart
MD_TARGETS=$(addsuffix .xml, $(basename $(wildcard ./*.md ./**/*.md)))
.PHONY: all
all: validate format out/html/index.html out/epub/manual.epub
@@ -38,7 +22,7 @@ fix-misc-xml:
.PHONY: clean
clean:
rm -f ${MD_TARGETS} doc-support/result .version manual-full.xml functions/library/locations.xml functions/library/generated
rm -rf ./out/ ./highlightjs ./media
rm -rf ./out/ ./highlightjs
.PHONY: validate
validate: manual-full.xml doc-support/result
@@ -55,7 +39,6 @@ out/html/index.html: doc-support/result manual-full.xml style.css highlightjs
mkdir -p out/html/highlightjs/
cp -r highlightjs out/html/
cp -r $(pandoc_media_dir) out/html/
cp ./overrides.css out/html/
cp ./style.css out/html/style.css
@@ -70,7 +53,6 @@ out/epub/manual.epub: manual-full.xml
doc-support/result/epub.xsl \
./manual-full.xml
cp -r $(pandoc_media_dir) out/epub/scratch/OEBPS
cp ./overrides.css out/epub/scratch/OEBPS
cp ./style.css out/epub/scratch/OEBPS
mkdir -p out/epub/scratch/OEBPS/images/callouts/
@@ -105,12 +87,24 @@ functions/library/generated: doc-support/result
ln -rfs ./doc-support/result/function-docs functions/library/generated
%.section.xml: %.section.md
$(PANDOC) $^ -t docbook \
$(pandoc_flags) \
-o $@
pandoc $^ -w docbook \
-f markdown+smart \
| sed -e 's|<ulink url=|<link xlink:href=|' \
-e 's|</ulink>|</link>|' \
-e 's|<sect. id=|<section xml:id=|' \
-e 's|</sect[0-9]>|</section>|' \
-e '1s| id=| xml:id=|' \
-e '1s|\(<[^ ]* \)|\1xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" |' \
| cat > $@
%.chapter.xml: %.chapter.md
$(PANDOC) $^ -t docbook \
pandoc $^ -w docbook \
--top-level-division=chapter \
$(pandoc_flags) \
-o $@
-f markdown+smart \
| sed -e 's|<ulink url=|<link xlink:href=|' \
-e 's|</ulink>|</link>|' \
-e 's|<sect. id=|<section xml:id=|' \
-e 's|</sect[0-9]>|</section>|' \
-e '1s| id=| xml:id=|' \
-e '1s|\(<[^ ]* \)|\1|' \
| cat > $@

View File

@@ -1,12 +0,0 @@
# Nixpkgs/doc
This directory houses the sources files for the Nixpkgs manual.
You can find the [rendered documentation for Nixpkgs `unstable` on nixos.org](https://nixos.org/manual/nixpkgs/unstable/).
[Docs for Nixpkgs stable](https://nixos.org/manual/nixpkgs/stable/) are also available.
If you want to contribute to the documentation, [here's how to do it](https://nixos.org/manual/nixpkgs/unstable/#chap-contributing).
If you're only getting started with Nix, go to [nixos.org/learn](https://nixos.org/learn).

View File

@@ -1,23 +0,0 @@
--[[
Converts Code AST nodes produced by pandocs DocBook reader
from citerefentry elements into AST for corresponding role
for reStructuredText.
We use subset of MyST syntax (CommonMark with features from rST)
so lets use the rST AST for rST features.
Reference: https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage
]]
function Code(elem)
elem.classes = elem.classes:map(function (x)
if x == 'citerefentry' then
elem.attributes['role'] = 'manpage'
return 'interpreted-text'
else
return x
end
end)
return elem
end

View File

@@ -1,34 +0,0 @@
--[[
Converts Link AST nodes with empty label to DocBook xref elements.
This is a temporary script to be able use cross-references conveniently
using syntax taken from MyST, while we still use docbook-xsl
for generating the documentation.
Reference: https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing
]]
local function starts_with(start, str)
return str:sub(1, #start) == start
end
local function escape_xml_arg(arg)
amps = arg:gsub('&', '&amp;')
amps_quotes = amps:gsub('"', '&quot;')
amps_quotes_lt = amps_quotes:gsub('<', '&lt;')
return amps_quotes_lt
end
function Link(elem)
has_no_content = #elem.content == 0
targets_anchor = starts_with('#', elem.target)
has_no_attributes = elem.title == '' and elem.identifier == '' and #elem.classes == 0 and #elem.attributes == 0
if has_no_content and targets_anchor and has_no_attributes then
-- xref expects idref without the pound-sign
target_without_hash = elem.target:sub(2, #elem.target)
return pandoc.RawInline('docbook', '<xref linkend="' .. escape_xml_arg(target_without_hash) .. '" />')
end
end

View File

@@ -1,40 +0,0 @@
--[[
Converts AST for reStructuredText roles into corresponding
DocBook elements.
Currently, only a subset of roles is supported.
Reference:
List of roles:
https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html
manpage:
https://tdg.docbook.org/tdg/5.1/citerefentry.html
file:
https://tdg.docbook.org/tdg/5.1/filename.html
]]
function Code(elem)
if elem.classes:includes('interpreted-text') then
local tag = nil
local content = elem.text
if elem.attributes['role'] == 'manpage' then
tag = 'citerefentry'
local title, volnum = content:match('^(.+)%((%w+)%)$')
if title == nil then
-- No volnum in parentheses.
title = content
end
content = '<refentrytitle>' .. title .. '</refentrytitle>' .. (volnum ~= nil and ('<manvolnum>' .. volnum .. '</manvolnum>') or '')
elseif elem.attributes['role'] == 'file' then
tag = 'filename'
elseif elem.attributes['role'] == 'command' then
tag = 'command'
elseif elem.attributes['role'] == 'option' then
tag = 'option'
end
if tag ~= nil then
return pandoc.RawInline('docbook', '<' .. tag .. '>' .. content .. '</' .. tag .. '>')
end
end
end

View File

@@ -1,17 +0,0 @@
--[[
Turns a manpage reference into a link, when a mapping is defined below.
]]
local man_urls = {
["tmpfiles.d(5)"] = "https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html",
["nix.conf(5)"] = "https://nixos.org/manual/nix/stable/#sec-conf-file",
["systemd.time(7)"] = "https://www.freedesktop.org/software/systemd/man/systemd.time.html",
["systemd.timer(5)"] = "https://www.freedesktop.org/software/systemd/man/systemd.timer.html",
}
function Code(elem)
local is_man_role = elem.classes:includes('interpreted-text') and elem.attributes['role'] == 'manpage'
if is_man_role and man_urls[elem.text] ~= nil then
return pandoc.Link(elem, man_urls[elem.text])
end
end

View File

@@ -1,29 +0,0 @@
--[[
Replaces Str AST nodes containing {role}, followed by a Code node
by a Code node with attrs that would be produced by rST reader
from the role syntax.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Inlines(inlines)
for i = #inlines-1,1,-1 do
local first = inlines[i]
local second = inlines[i+1]
local correct_tags = first.tag == 'Str' and second.tag == 'Code'
if correct_tags then
-- docutils supports alphanumeric strings separated by [-._:]
-- We are slightly more liberal for simplicity.
local role = first.text:match('^{([-._+:%w]+)}$')
if role ~= nil then
inlines:remove(i)
second.attributes['role'] = role
second.classes:insert('interpreted-text')
end
end
end
return inlines
end

View File

@@ -1,25 +0,0 @@
--[[
Replaces Code nodes with attrs that would be produced by rST reader
from the role syntax by a Str AST node containing {role}, followed by a Code node.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Code(elem)
local role = elem.attributes['role']
if elem.classes:includes('interpreted-text') and role ~= nil then
elem.classes = elem.classes:filter(function (c)
return c ~= 'interpreted-text'
end)
elem.attributes['role'] = nil
return {
pandoc.Str('{' .. role .. '}'),
elem,
}
end
end

View File

@@ -1,165 +0,0 @@
# Fetchers {#chap-pkgs-fetchers}
Building software with Nix often requires downloading source code and other files from the internet.
`nixpkgs` provides *fetchers* for different protocols and services. Fetchers are functions that simplify downloading files.
## Caveats
Fetchers create [fixed output derivations](https://nixos.org/manual/nix/stable/#fixed-output-drvs) from downloaded files.
Nix can reuse the downloaded files via the hash of the resulting derivation.
The fact that the hash belongs to the Nix derivation output and not the file itself can lead to confusion.
For example, consider the following fetcher:
```nix
fetchurl {
url = "http://www.example.org/hello-1.0.tar.gz";
sha256 = "0v6r3wwnsk5pdjr188nip3pjgn1jrn5pc5ajpcfy6had6b3v4dwm";
};
```
A common mistake is to update a fetcher's URL, or a version parameter, without updating the hash.
```nix
fetchurl {
url = "http://www.example.org/hello-1.1.tar.gz";
sha256 = "0v6r3wwnsk5pdjr188nip3pjgn1jrn5pc5ajpcfy6had6b3v4dwm";
};
```
**This will reuse the old contents**.
Remember to invalidate the hash argument, in this case by setting the `sha256` attribute to an empty string.
```nix
fetchurl {
url = "http://www.example.org/hello-1.1.tar.gz";
sha256 = "";
};
```
Use the resulting error message to determine the correct hash.
```
error: hash mismatch in fixed-output derivation '/path/to/my.drv':
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-RApQUm78dswhBLC/rfU9y0u6pSAzHceIJqgmetRD24E=
```
A similar problem arises while testing changes to a fetcher's implementation. If the output of the derivation already exists in the Nix store, test failures can go undetected. The [`invalidateFetcherByDrvHash`](#tester-invalidateFetcherByDrvHash) function helps prevent reusing cached derivations.
## `fetchurl` and `fetchzip` {#fetchurl}
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.
```nix
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "hello";
src = fetchurl {
url = "http://www.example.org/hello.tar.gz";
sha256 = "1111111111111111111111111111111111111111111111111111";
};
}
```
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
## `fetchpatch` {#fetchpatch}
`fetchpatch` works very similarly to `fetchurl` and expects the same arguments. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time. It additionally accepts the following arguments; a usage sketch follows this list.
- `relative`: Similar to using `git-diff`'s `--relative` flag, only keep changes inside the specified directory, making paths relative to it.
- `stripLen`: Remove the first `stripLen` components of pathnames in the patch.
- `extraPrefix`: Prefix pathnames by this string.
- `excludes`: Exclude files matching these patterns (applies after the above arguments).
- `includes`: Include only files matching these patterns (applies after the above arguments).
- `revert`: Revert the patch.
Note that because the checksum is computed after applying these effects, using or modifying these arguments will have no effect unless the `sha256` argument is changed as well.
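As a rough sketch only (the URL, hash, and paths below are hypothetical placeholders, not a real patch), these arguments might be combined like this:
```nix
fetchpatch {
  # Hypothetical patch URL and placeholder hash, for illustration only
  url = "https://github.com/example/project/commit/0123456789abcdef.patch";
  # Drop the first path component and re-prefix, so the patch applies to a
  # vendored copy of the sources
  stripLen = 1;
  extraPrefix = "third_party/project/";
  # Patterns are matched after stripLen/extraPrefix have been applied
  excludes = [ "third_party/project/docs/*" ];
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```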
Most other fetchers return a directory rather than a single file.
## `fetchsvn` {#fetchsvn}
Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha256`.
## `fetchgit` {#fetchgit}
Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be the full Git commit id (SHA-1 hash) or a tag name like `refs/tags/v1.0`.
Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true`, which means that the `.git` directory of the clone won't be removed after checkout.
If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent Git from fetching unnecessary blobs from the server; see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information:
```nix
{ stdenv, fetchgit }:
stdenv.mkDerivation {
name = "hello";
src = fetchgit {
url = "https://...";
sparseCheckout = ''
path/to/be/included
another/path
'';
sha256 = "0000000000000000000000000000000000000000000000000000";
};
}
```
## `fetchfossil` {#fetchfossil}
Used with Fossil. Expects `url` to a Fossil archive, `rev`, and `sha256`.
## `fetchcvs` {#fetchcvs}
Used with CVS. Expects `cvsRoot`, `tag`, and `sha256`.
## `fetchhg` {#fetchhg}
Used with Mercurial. Expects `url`, `rev`, and `sha256`.
A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
## `fetchFromGitea` {#fetchfromgitea}
`fetchFromGitea` expects five arguments. `domain` is the Gitea server name. `owner` is a string corresponding to the Gitea user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every Gitea HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
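For illustration, a minimal sketch with placeholder values (the domain, owner, repository, revision, and hash below are hypothetical):
```nix
fetchFromGitea {
  domain = "gitea.example.org";  # hypothetical Gitea instance
  owner = "some-owner";
  repo = "some-repo";
  rev = "v1.0";
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```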
## `fetchFromGitHub` {#fetchfromgithub}
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
`fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.
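A minimal sketch with placeholder values (owner, repository, revision, and hash are hypothetical):
```nix
fetchFromGitHub {
  owner = "some-owner";  # placeholder GitHub user or organization
  repo = "some-repo";    # placeholder repository name
  rev = "v1.0";          # tag or full commit hash
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```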
## `fetchFromGitLab` {#fetchfromgitlab}
This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromGitiles` {#fetchfromgitiles}
This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.
## `fetchFromBitbucket` {#fetchfrombitbucket}
This is used with BitBucket repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromSavannah` {#fetchfromsavannah}
This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromRepoOrCz` {#fetchfromrepoorcz}
This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromSourcehut` {#fetchfromsourcehut}
This is used with sourcehut repositories. Similar to `fetchFromGitHub` above,
it expects `owner`, `repo`, `rev` and `sha256`, but don't forget the tilde (~)
in front of the username! Expected arguments also include `vc` ("git" (default)
or "hg"), `domain` and `fetchSubmodules`.
If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
respectively. Otherwise, the fetcher uses `fetchzip`.
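A minimal sketch with placeholder values, mainly to show the leading tilde in `owner`:
```nix
fetchFromSourcehut {
  owner = "~someuser";  # note the leading tilde; placeholder user name
  repo = "some-repo";
  rev = "v1.0";
  vc = "git";           # "git" is the default; "hg" is also supported
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```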

150
doc/builders/fetchers.xml Normal file
View File

@@ -0,0 +1,150 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-pkgs-fetchers">
<title>Fetchers</title>
<para>
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
</para>
<para>
The two fetcher primitives are <function>fetchurl</function> and <function>fetchzip</function>. Both of these have two required arguments, a URL and a hash. The hash is typically <literal>sha256</literal>, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use <literal>sha256</literal>. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
</para>
<programlisting><![CDATA[
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "hello";
src = fetchurl {
url = "http://www.example.org/hello.tar.gz";
sha256 = "1111111111111111111111111111111111111111111111111111";
};
}
]]></programlisting>
<para>
The main difference between <function>fetchurl</function> and <function>fetchzip</function> is in how they store the contents. <function>fetchurl</function> will store the unaltered contents of the URL within the Nix store. <function>fetchzip</function> on the other hand will decompress the archive for you, making files and directories directly accessible in the future. <function>fetchzip</function> can only be used with archives. Despite the name, <function>fetchzip</function> is not limited to .zip files and can also be used with any tarball.
</para>
<para>
<function>fetchpatch</function> works very similarly to <function>fetchurl</function> with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
</para>
<para>
Other fetcher functions allow you to add source code directly from a VCS such as Subversion or Git. These are mostly named straightforwardly after the command used with the VCS system. Because they give you a working repository, they act most like <function>fetchzip</function>.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchsvn</literal>
</term>
<listitem>
<para>
Used with Subversion. Expects <literal>url</literal> to a Subversion directory, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchgit</literal>
</term>
<listitem>
<para>
Used with Git. Expects <literal>url</literal> to a Git repo, <literal>rev</literal>, and <literal>sha256</literal>. <literal>rev</literal> in this case can be the full git commit id (SHA1 hash) or a tag name like <literal>refs/tags/v1.0</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchfossil</literal>
</term>
<listitem>
<para>
Used with Fossil. Expects <literal>url</literal> to a Fossil archive, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchcvs</literal>
</term>
<listitem>
<para>
Used with CVS. Expects <literal>cvsRoot</literal>, <literal>tag</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchhg</literal>
</term>
<listitem>
<para>
Used with Mercurial. Expects <literal>url</literal>, <literal>rev</literal>, and <literal>sha256</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
A number of fetcher functions wrap part of <function>fetchurl</function> and <function>fetchzip</function>. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
</para>
<variablelist>
<varlistentry>
<term>
<literal>fetchFromGitHub</literal>
</term>
<listitem>
<para>
<function>fetchFromGitHub</function> expects four arguments. <literal>owner</literal> is a string corresponding to the GitHub user or organization that controls this repository. <literal>repo</literal> corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as <literal>owner</literal>/<literal>repo</literal>. <literal>rev</literal> corresponds to the Git commit hash or tag (e.g <literal>v1.0</literal>) that will be downloaded from Git. Finally, <literal>sha256</literal> corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but <literal>sha256</literal> is currently preferred.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromGitLab</literal>
</term>
<listitem>
<para>
This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromGitiles</literal>
</term>
<listitem>
<para>
This is used with Gitiles repositories. The arguments expected
are similar to fetchgit.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromBitbucket</literal>
</term>
<listitem>
<para>
This is used with BitBucket repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromSavannah</literal>
</term>
<listitem>
<para>
This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>fetchFromRepoOrCz</literal>
</term>
<listitem>
<para>
This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
</para>
</listitem>
</varlistentry>
</variablelist>
</chapter>

View File

@@ -5,8 +5,8 @@
<para>
This chapter describes tools for creating various types of images.
</para>
<xi:include href="images/appimagetools.section.xml" />
<xi:include href="images/dockertools.section.xml" />
<xi:include href="images/ocitools.section.xml" />
<xi:include href="images/snaptools.section.xml" />
<xi:include href="images/appimagetools.xml" />
<xi:include href="images/dockertools.xml" />
<xi:include href="images/ocitools.xml" />
<xi:include href="images/snaptools.xml" />
</chapter>

View File

@@ -1,48 +0,0 @@
# pkgs.appimageTools {#sec-pkgs-appimageTools}
`pkgs.appimageTools` is a set of functions for extracting and wrapping [AppImage](https://appimage.org/) files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, `pkgs.appimage-run` can be used as well.
::: {.warning}
The `appimageTools` API is unstable and may be subject to backwards-incompatible changes in the future.
:::
## AppImage formats {#ssec-pkgs-appimageTools-formats}
There are different formats for AppImages, see [the specification](https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format) for details.
- Type 1 images are ISO 9660 files that are also ELF executables.
- Type 2 images are ELF executables with an appended filesystem.
They can be told apart with `file -k`:
```ShellSession
$ file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data
$ file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
```
Note how the type 1 AppImage is described as an `ISO 9660 CD-ROM filesystem`, and the type 2 AppImage is not.
## Wrapping {#ssec-pkgs-appimageTools-wrapping}
Depending on the type of AppImage you're wrapping, you'll have to use `wrapType1` or `wrapType2`.
```nix
appimageTools.wrapType2 { # or wrapType1
name = "patchwork";
src = fetchurl {
url = "https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage";
sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
};
extraPkgs = pkgs: with pkgs; [ ];
}
```
- `name` specifies the name of the resulting image.
- `src` specifies the AppImage file to extract.
- `extraPkgs` allows you to pass a function to include additional packages inside the FHS environment your AppImage is going to run in. There are a few ways to learn which dependencies an application needs:
- Looking through the extracted AppImage files, reading its scripts and running `patchelf` and `ldd` on its executables. This can also be done in `appimage-run`, by setting `APPIMAGE_DEBUG_EXEC=bash`.
- Running `strace -vfefile` on the wrapped executable, looking for libraries that can't be found.

View File

@@ -0,0 +1,102 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-appimageTools">
<title>pkgs.appimageTools</title>
<para>
<varname>pkgs.appimageTools</varname> is a set of functions for extracting and wrapping <link xlink:href="https://appimage.org/">AppImage</link> files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, <literal>pkgs.appimage-run</literal> can be used as well.
</para>
<warning>
<para>
The <varname>appimageTools</varname> API is unstable and may be subject to backwards-incompatible changes in the future.
</para>
</warning>
<section xml:id="ssec-pkgs-appimageTools-formats">
<title>AppImage formats</title>
<para>
There are different formats for AppImages, see <link xlink:href="https://github.com/AppImage/AppImageSpec/blob/74ad9ca2f94bf864a4a0dac1f369dd4f00bd1c28/draft.md#image-format">the specification</link> for details.
</para>
<itemizedlist>
<listitem>
<para>
Type 1 images are ISO 9660 files that are also ELF executables.
</para>
</listitem>
<listitem>
<para>
Type 2 images are ELF executables with an appended filesystem.
</para>
</listitem>
</itemizedlist>
<para>
They can be told apart with <command>file -k</command>:
</para>
<screen>
<prompt>$ </prompt>file -k type1.AppImage
type1.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) ISO 9660 CD-ROM filesystem data 'AppImage' (Lepton 3.x), scale 0-0,
spot sensor temperature 0.000000, unit celsius, color scheme 0, calibration: offset 0.000000, slope 0.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d629f6099d2344ad82818172add1d38c5e11bc6d, stripped\012- data
<prompt>$ </prompt>file -k type2.AppImage
type2.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV) (Lepton 3.x), scale 232-60668, spot sensor temperature -4.187500, color scheme 15, show scale bar, calibration: offset -0.000000, slope 0.000000 (Lepton 2.x), scale 4111-45000, spot sensor temperature 412442.250000, color scheme 3, minimum point enabled, calibration: offset -75402534979642766821519867692934234112.000000, slope 5815371847733706829839455140374904832.000000, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=79dcc4e55a61c293c5e19edbd8d65b202842579f, stripped\012- data
</screen>
<para>
Note how the type 1 AppImage is described as an <literal>ISO 9660 CD-ROM filesystem</literal>, and the type 2 AppImage is not.
</para>
</section>
<section xml:id="ssec-pkgs-appimageTools-wrapping">
<title>Wrapping</title>
<para>
Depending on the type of AppImage you're wrapping, you'll have to use <varname>wrapType1</varname> or <varname>wrapType2</varname>.
</para>
<programlisting>
appimageTools.wrapType2 { # or wrapType1
name = "patchwork"; <co xml:id='ex-appimageTools-wrapping-1' />
src = fetchurl { <co xml:id='ex-appimageTools-wrapping-2' />
url = "https://github.com/ssbc/patchwork/releases/download/v3.11.4/Patchwork-3.11.4-linux-x86_64.AppImage";
sha256 = "1blsprpkvm0ws9b96gb36f0rbf8f5jgmw4x6dsb1kswr4ysf591s";
};
extraPkgs = pkgs: with pkgs; [ ]; <co xml:id='ex-appimageTools-wrapping-3' />
}</programlisting>
<calloutlist>
<callout arearefs='ex-appimageTools-wrapping-1'>
<para>
<varname>name</varname> specifies the name of the resulting image.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-2'>
<para>
<varname>src</varname> specifies the AppImage file to extract.
</para>
</callout>
<callout arearefs='ex-appimageTools-wrapping-3'>
<para>
<varname>extraPkgs</varname> allows you to pass a function to include additional packages inside the FHS environment your AppImage is going to run in. There are a few ways to learn which dependencies an application needs:
<itemizedlist>
<listitem>
<para>
Looking through the extracted AppImage files, reading its scripts and running <command>patchelf</command> and <command>ldd</command> on its executables. This can also be done in <command>appimage-run</command>, by setting <command>APPIMAGE_DEBUG_EXEC=bash</command>.
</para>
</listitem>
<listitem>
<para>
Running <command>strace -vfefile</command> on the wrapped executable, looking for libraries that can't be found.
</para>
</listitem>
</itemizedlist>
</para>
</callout>
</calloutlist>
</section>
</section>

View File

@@ -1,352 +0,0 @@
# pkgs.dockerTools {#sec-pkgs-dockerTools}
`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
## buildImage {#ssec-pkgs-dockerTools-buildImage}
This function is analogous to the `docker build` command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with `docker load`.
The parameters of `buildImage` with relative example values are described below:
[]{#ex-dockerTools-buildImage}
[]{#ex-dockerTools-buildImage-runAsRoot}
```nix
buildImage {
name = "redis";
tag = "latest";
fromImage = someBaseImage;
fromImageName = null;
fromImageTag = "latest";
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ pkgs.redis ];
pathsToLink = [ "/bin" ];
};
runAsRoot = ''
#!${pkgs.runtimeShell}
mkdir -p /data
'';
config = {
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = { "/data" = { }; };
};
}
```
The above example will build a Docker image `redis/latest` from the given base image. Loading and running this image in Docker results in `redis-server` being started automatically.
- `name` specifies the name of the resulting image. This is the only required argument for `buildImage`.
- `tag` specifies the tag of the resulting image. By default it's `null`, which indicates that the nix output hash will be used as tag.
- `fromImage` is the repository tarball containing the base image. It must be a valid Docker image, such as exported by `docker save`. By default it's `null`, which can be seen as equivalent to `FROM scratch` of a `Dockerfile`.
- `fromImageName` can be used to further specify the base image within the repository, in case it contains multiple images. By default it's `null`, in which case `buildImage` will pick the first image available in the repository.
- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will pick the first tag available for the base image.
- `copyToRoot` is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as `ADD contents/ /` in a `Dockerfile`. By default it's `null`.
- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This can be similarly seen as `RUN ...` in a `Dockerfile`.
> **_NOTE:_** Using this parameter requires the `kvm` device to be available.
- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
At the end of the process, only one new single layer will be produced and added to the resulting image.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.
It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
> **_NOTE:_** If you see errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)` you may need to add `pkgs.iana-etc` to `contents`.
> **_NOTE:_** If you see errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)` you may need to add `pkgs.cacert` to `contents`.
By default `buildImage` will use a static date of one second past the UNIX Epoch. This allows `buildImage` to produce binary reproducible images. When listing images with `docker images`, the newly created images will be listed like this:
```ShellSession
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB
```
You can break binary reproducibility but have a sorted, meaningful `CREATED` column by setting `created` to `now`.
```nix
pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ pkgs.hello ];
pathsToLink = [ "/bin" ];
};
config.Cmd = [ "/bin/hello" ];
}
```
Now the Docker CLI will display a reasonable date and sort the images as expected:
```ShellSession
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
```
However, the produced images will not be binary reproducible.
## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use `streamLayeredImage` instead, which this function uses internally.
`name`
: The name of the resulting image.
`tag` _optional_
: Tag of the generated image.
*Default:* the output path's hash
`fromImage` _optional_
: The repository tarball containing the base image. It must be a valid Docker image, such as one exported by `docker save`.
*Default:* `null`, which can be seen as equivalent to `FROM scratch` of a `Dockerfile`.
`contents` _optional_
: Top-level paths in the container. Either a single derivation, or a list of derivations.
*Default:* `[]`
`config` _optional_
: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
*Default:* `{}`
`created` _optional_
: Date and time the layers were created. Follows the same `now` exception supported by `buildImage`.
*Default:* `1970-01-01T00:00:01Z`
`maxLayers` _optional_
: Maximum number of layers to create.
*Default:* `100`
*Maximum:* `125`
`extraCommands` _optional_
: Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so can create additional directories and files.
`fakeRootCommands` _optional_
: Shell commands to run while creating the archive for the final layer in a fakeroot environment. Unlike `extraCommands`, you can run `chown` to change the owners of the files in the archive, changing fakeroot's state instead of the real filesystem. The latter would require privileges that the build user does not have. Static binaries do not interact with the fakeroot environment. By default all files in the archive will be owned by root.
`enableFakechroot` _optional_
: Whether to run `fakeRootCommands` in `fakechroot`, making programs behave as though `/` is the root of the image being created, while files in the Nix store are available as usual. This allows scripts that perform installation in `/` to work as expected. Considering that `fakechroot` is implemented via the same mechanism as `fakeroot`, the same caveats apply.
*Default:* `false`
### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}
Each path directly listed in `contents` will have a symlink in the root of the image.
For example:
```nix
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
```
will create symlinks for all the paths in the `hello` package:
```ShellSession
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
```
### Automatic inclusion of `config` references {#dockerTools-buildLayeredImage-arg-config}
The closure of `config` is automatically included in the closure of the final image.
This allows you to make very simple Docker images with very little code. This container will start up and run `hello`:
```nix
pkgs.dockerTools.buildLayeredImage {
name = "hello";
config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```
### Adjusting `maxLayers` {#dockerTools-buildLayeredImage-arg-maxLayers}
Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
Modern Docker installations support up to 128 layers, but older versions support as few as 42.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.
The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration.
Docker's layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an image.
## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage}
Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for `buildLayeredImage`. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.
The image produced by running the output script can be piped directly into `docker load`, to load it into the local docker daemon:
```ShellSession
$(nix-build) | docker load
```
Alternatively, the image can be piped via `gzip` into `skopeo`, e.g., to copy it into a registry:
```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
```
## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}
This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.
Its parameters are described in the example below:
```nix
pullImage {
imageName = "nixos/nix";
imageDigest =
"sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
finalImageName = "nix";
finalImageTag = "1.11";
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
os = "linux";
arch = "x86_64";
}
```
- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.
- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.
- `finalImageName`, if specified, is the name of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.
- `finalImageTag`, if specified, is the tag of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's `latest`.
- `sha256` is the checksum of the whole fetched image. This argument is required.
- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.
- `arch`, if specified, is the cpu architecture of the fetched image. By default it's `x86_64`.
The `nix-prefetch-docker` command can be used to get the required image parameters:
```ShellSession
$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
```
Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```
The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:
```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
```
## exportImage {#ssec-pkgs-dockerTools-exportImage}
This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with `docker import`.
> **_NOTE:_** Using this function requires the `kvm` device to be available.
The parameters of `exportImage` are the following:
```nix
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
```
The parameters relative to the base image have the same synopsis as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.
The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.
## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}
This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for being used in a [`buildImage` `runAsRoot`](#ex-dockerTools-buildImage-runAsRoot) script for cases like in the example below:
```nix
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${pkgs.runtimeShell}
${pkgs.dockerTools.shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
```
Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
## fakeNss {#ssec-pkgs-dockerTools-fakeNss}
If your primary goal is providing a basic skeleton for user lookups to work,
and/or a less privileged user, adding `pkgs.fakeNss` to
the container image root might be a better choice than a custom script
running `useradd` and friends.
It provides a `/etc/passwd` and `/etc/group`, containing `root` and `nobody`
users and groups.
It also provides a `/etc/nsswitch.conf`, configuring NSS host resolution to
first check `/etc/hosts`, before checking DNS, as the default in the absence of
a config file (`dns [!UNAVAIL=return] files`) is quite unexpected.
You can pair it with `binSh`, which provides `bin/sh` as a symlink
to `bashInteractive` (as `/bin/sh` is configured as a shell).
```nix
buildImage {
name = "shadow-basic";
copyToRoot = pkgs.buildEnv {
name = "image-root";
paths = [ binSh pkgs.fakeNss ];
pathsToLink = [ "/bin" "/etc" "/var" ];
};
}
```

View File

@@ -0,0 +1,478 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-dockerTools">
<title>pkgs.dockerTools</title>
<para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and manipulating Docker images according to the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120"> Docker Image Specification v1.2.0 </link>. Docker itself is not used to perform any of the operations done by these functions.
</para>
<section xml:id="ssec-pkgs-dockerTools-buildImage">
<title>buildImage</title>
<para>
This function is analogous to the <command>docker build</command> command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with <command>docker load</command>.
</para>
<para>
The parameters of <varname>buildImage</varname> with relative example values are described below:
</para>
<example xml:id='ex-dockerTools-buildImage'>
<title>Docker build</title>
<programlisting>
buildImage {
name = "redis"; <co xml:id='ex-dockerTools-buildImage-1' />
tag = "latest"; <co xml:id='ex-dockerTools-buildImage-2' />
fromImage = someBaseImage; <co xml:id='ex-dockerTools-buildImage-3' />
fromImageName = null; <co xml:id='ex-dockerTools-buildImage-4' />
fromImageTag = "latest"; <co xml:id='ex-dockerTools-buildImage-5' />
contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' />
runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' />
#!${pkgs.runtimeShell}
mkdir -p /data
'';
config = { <co xml:id='ex-dockerTools-buildImage-8' />
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = {
"/data" = {};
};
};
}
</programlisting>
</example>
<para>
The above example will build a Docker image <literal>redis/latest</literal> from the given base image. Loading and running this image in Docker results in <literal>redis-server</literal> being started automatically.
</para>
<calloutlist>
<callout arearefs='ex-dockerTools-buildImage-1'>
<para>
<varname>name</varname> specifies the name of the resulting image. This is the only required argument for <varname>buildImage</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-2'>
<para>
<varname>tag</varname> specifies the tag of the resulting image. By default it's <literal>null</literal>, which indicates that the nix output hash will be used as tag.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-3'>
<para>
<varname>fromImage</varname> is the repository tarball containing the base image. It must be a valid Docker image, such as exported by <command>docker save</command>. By default it's <literal>null</literal>, which can be seen as equivalent to <literal>FROM scratch</literal> of a <filename>Dockerfile</filename>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-4'>
<para>
<varname>fromImageName</varname> can be used to further specify the base image within the repository, in case it contains multiple images. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will pick the first image available in the repository.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-5'>
<para>
<varname>fromImageTag</varname> can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will pick the first tag available for the base image.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-6'>
<para>
<varname>contents</varname> is a derivation that will be copied into the new layer of the resulting image. This can be seen as similar to <command>ADD contents/ /</command> in a <filename>Dockerfile</filename>. By default it's <literal>null</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-runAsRoot'>
<para>
<varname>runAsRoot</varname> is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied <varname>contents</varname> derivation. This can be seen as similar to <command>RUN ...</command> in a <filename>Dockerfile</filename>.
<note>
<para>
Using this parameter requires the <literal>kvm</literal> device to be available.
</para>
</note>
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-8'>
<para>
<varname>config</varname> is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>.
</para>
</callout>
</calloutlist>
<para>
After the new layer has been created, its closure (to which <varname>contents</varname>, <varname>config</varname> and <varname>runAsRoot</varname> contribute) will be copied into the layer itself. Only new dependencies that are not already in the existing layers will be copied.
</para>
<para>
At the end of the process, only one new single layer will be produced and added to the resulting image.
</para>
<para>
The resulting repository will only list the single image <varname>image/tag</varname>. In the case of <xref linkend='ex-dockerTools-buildImage'/> it would be <varname>redis/latest</varname>.
</para>
<para>
It is possible to inspect the arguments with which an image was built using its <varname>buildArgs</varname> attribute.
</para>
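<para>
For instance, assuming the image from <xref linkend='ex-dockerTools-buildImage'/> is bound to a variable <varname>redisImage</varname> (a hypothetical binding used only for illustration), the following expression evaluates to the name the image was built with:
</para>
<programlisting>
redisImage.buildArgs.name   # evaluates to "redis"
</programlisting>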
<note>
<para>
If you see errors similar to <literal>getProtocolByName: does not exist (no such protocol name: tcp)</literal> you may need to add <literal>pkgs.iana-etc</literal> to <varname>contents</varname>.
</para>
</note>
<note>
<para>
If you see errors similar to <literal>Error_Protocol ("certificate has unknown CA",True,UnknownCa)</literal> you may need to add <literal>pkgs.cacert</literal> to <varname>contents</varname>.
</para>
</note>
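<para>
A minimal sketch addressing both of the issues above could look like the following (the package selection is illustrative, and it assumes that <varname>contents</varname> also accepts a list of derivations):
</para>
<programlisting>
buildImage {
  name = "redis";
  tag = "latest";
  contents = [ pkgs.redis pkgs.iana-etc pkgs.cacert ];
  config.Cmd = [ "/bin/redis-server" ];
}
</programlisting>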
<example xml:id="example-pkgs-dockerTools-buildImage-creation-date">
<title>Impurely Defining a Docker Layer's Creation Date</title>
<para>
By default <function>buildImage</function> will use a static date of one second past the UNIX Epoch. This allows <function>buildImage</function> to produce binary reproducible images. When listing images with <command>docker images</command>, the newly created images will be listed like this:
</para>
<screen><![CDATA[
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB
]]></screen>
<para>
You can break binary reproducibility but have a sorted, meaningful <literal>CREATED</literal> column by setting <literal>created</literal> to <literal>now</literal>.
</para>
<programlisting><![CDATA[
pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
contents = pkgs.hello;
config.Cmd = [ "/bin/hello" ];
}
]]></programlisting>
<para>
and now the Docker CLI will display a reasonable date and sort the images as expected:
<screen><![CDATA[
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
]]></screen>
however, the produced images will not be binary reproducible.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-dockerTools-buildLayeredImage">
<title>buildLayeredImage</title>
<para>
Create a Docker image with many of the store paths being on their own layer to improve sharing between images.
</para>
<variablelist>
<varlistentry>
<term>
<varname>name</varname>
</term>
<listitem>
<para>
The name of the resulting image.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>tag</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Tag of the generated image.
</para>
<para>
<emphasis>Default:</emphasis> the output path's hash
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>contents</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Top level paths in the container. Either a single derivation, or a list of derivations.
</para>
<para>
<emphasis>Default:</emphasis> <literal>[]</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>config</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Run-time configuration of the container. A full list of the available options can be found in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>{}</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>created</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Date and time the layers were created. Follows the same <literal>now</literal> exception supported by <literal>buildImage</literal>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>1970-01-01T00:00:01Z</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>maxLayers</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Maximum number of layers to create.
</para>
<para>
<emphasis>Default:</emphasis> <literal>100</literal>
</para>
<para>
<emphasis>Maximum:</emphasis> <literal>125</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>extraCommands</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Shell commands to run while building the final layer, without access to most of the layer contents. Changes made by these commands are "on top" of all the other layers, so they can create additional directories and files.
</para>
</listitem>
</varlistentry>
</variablelist>
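<para>
Putting several of these parameters together, a sketch of a full invocation might look like this (all values are illustrative):
</para>
<programlisting>
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  tag = "latest";
  contents = [ pkgs.hello pkgs.cacert ];
  config.Cmd = [ "/bin/hello" ];
  maxLayers = 120;
  extraCommands = ''
    # create a scratch directory in the root of the final layer
    mkdir -p tmp
  '';
}
</programlisting>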
<section xml:id="dockerTools-buildLayeredImage-arg-contents">
<title>Behavior of <varname>contents</varname> in the final image</title>
<para>
Each path directly listed in <varname>contents</varname> will have a symlink in the root of the image.
</para>
<para>
For example:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
]]></programlisting>
will create symlinks for all the paths in the <literal>hello</literal> package:
<screen><![CDATA[
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
]]></screen>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-config">
<title>Automatic inclusion of <varname>config</varname> references</title>
<para>
The closure of <varname>config</varname> is automatically included in the closure of the final image.
</para>
<para>
This allows you to make very simple Docker images with very little code. This container will start up and run <command>hello</command>:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
]]></programlisting>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-maxLayers">
<title>Adjusting <varname>maxLayers</varname></title>
<para>
Increasing the <varname>maxLayers</varname> increases the number of layers which have a chance to be shared between different images.
</para>
<para>
Modern Docker installations support up to 128 layers; however, older versions support as few as 42.
</para>
<para>
If the produced image will not be extended by other Docker builds, it is safe to set <varname>maxLayers</varname> to <literal>128</literal>. However, it will then be impossible to extend the image further.
</para>
<para>
The first (<literal>maxLayers-2</literal>) most "popular" paths will have their own individual layers, then layer #<literal>maxLayers-1</literal> will contain all the remaining "unpopular" paths, and finally layer #<literal>maxLayers</literal> will contain the Image configuration.
</para>
<para>
Docker's layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an image.
</para>
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>
<para>
This function is analogous to the <command>docker pull</command> command, in that it can be used to pull a Docker image from a Docker registry. By default <link xlink:href="https://hub.docker.com/">Docker Hub</link> is used to pull images.
</para>
<para>
Its parameters are described in the example below:
</para>
<example xml:id='ex-dockerTools-pullImage'>
<title>Docker pull</title>
<programlisting>
pullImage {
imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' />
imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' />
finalImageName = "nix"; <co xml:id='ex-dockerTools-pullImage-3' />
finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-4' />
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-5' />
os = "linux"; <co xml:id='ex-dockerTools-pullImage-6' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-7' />
}
</programlisting>
</example>
<calloutlist>
<callout arearefs='ex-dockerTools-pullImage-1'>
<para>
<varname>imageName</varname> specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. <literal>nixos</literal>). This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-2'>
<para>
<varname>imageDigest</varname> specifies the digest of the image to be downloaded. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-3'>
<para>
<varname>finalImageName</varname>, if specified, is the name of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's equal to <varname>imageName</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-4'>
<para>
<varname>finalImageTag</varname>, if specified, is the tag of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's <literal>latest</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-5'>
<para>
<varname>sha256</varname> is the checksum of the whole fetched image. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-6'>
<para>
<varname>os</varname>, if specified, is the operating system of the fetched image. By default it's <literal>linux</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-7'>
<para>
<varname>arch</varname>, if specified, is the CPU architecture of the fetched image. By default it's <literal>x86_64</literal>.
</para>
</callout>
</calloutlist>
<para>
The <literal>nix-prefetch-docker</literal> command can be used to get the required image parameters:
<screen>
<prompt>$ </prompt>nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
</screen>
Since a given <varname>imageName</varname> may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the <option>--os</option> and <option>--arch</option> arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
</screen>
The desired image name and tag can be set using the <option>--final-image-name</option> and <option>--final-image-tag</option> arguments:
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
</screen>
</para>
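<para>
The result of <varname>pullImage</varname> is a repository tarball, so it can typically be used as the <varname>fromImage</varname> argument of <varname>buildImage</varname>. A sketch reusing the values from <xref linkend='ex-dockerTools-pullImage'/>:
</para>
<programlisting>
buildImage {
  name = "nix-with-hello";
  fromImage = pullImage {
    imageName = "nixos/nix";
    imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
    sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  };
  contents = pkgs.hello;
}
</programlisting>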
</section>
<section xml:id="ssec-pkgs-dockerTools-exportImage">
<title>exportImage</title>
<para>
This function is analogous to the <command>docker export</command> command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with <command>docker import</command>.
</para>
<note>
<para>
Using this function requires the <literal>kvm</literal> device to be available.
</para>
</note>
<para>
The parameters of <varname>exportImage</varname> are the following:
</para>
<example xml:id='ex-dockerTools-exportImage'>
<title>Docker export</title>
<programlisting>
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
</programlisting>
</example>
<para>
The parameters relative to the base image have the same synopsis as described in <xref linkend='ssec-pkgs-dockerTools-buildImage'/>, except that <varname>fromImage</varname> is the only required argument in this case.
</para>
<para>
The <varname>name</varname> argument is the name of the derivation output, which defaults to <varname>fromImage.name</varname>.
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-shadowSetup">
<title>shadowSetup</title>
<para>
This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for use in a <varname>runAsRoot</varname> script (<xref linkend='ex-dockerTools-buildImage-runAsRoot'/>) for cases like the one in the example below:
</para>
<example xml:id='ex-dockerTools-shadowSetup'>
<title>Shadow base files</title>
<programlisting>
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${pkgs.runtimeShell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
</programlisting>
</example>
<para>
Creating base files like <literal>/etc/passwd</literal> or <literal>/etc/login.defs</literal> is necessary for shadow-utils to manipulate users and groups.
</para>
</section>
</section>

View File

@@ -1,37 +0,0 @@
# pkgs.ociTools {#sec-pkgs-ociTools}
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.
## buildContainer {#ssec-pkgs-ociTools-buildContainer}
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The Nix store of the container will contain all referenced dependencies of the given command.
The parameters of `buildContainer` with an example value are described below:
```nix
buildContainer {
args = [
(with pkgs;
writeScript "run.sh" ''
#!${bash}/bin/bash
exec ${bash}/bin/bash
'').outPath
];
mounts = {
"/data" = {
type = "none";
source = "/var/lib/mydata";
options = [ "bind" ];
};
};
readonly = false;
}
```
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.
- `mounts` specifies additional mount points chosen by the user. By default, only a minimal set of necessary filesystems are mounted into the container (e.g. procfs, cgroupfs).
- `readonly` makes the container's rootfs read-only if it is set to `true`. The default value is `false`.

View File

@@ -0,0 +1,62 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-ociTools">
<title>pkgs.ociTools</title>
<para>
<varname>pkgs.ociTools</varname> is a set of functions for creating containers according to the <link xlink:href="https://github.com/opencontainers/runtime-spec">OCI container specification v1.0.0</link>. Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.
</para>
<section xml:id="ssec-pkgs-ociTools-buildContainer">
<title>buildContainer</title>
<para>
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a <varname>config.json</varname> and a rootfs directory. The Nix store of the container will contain all referenced dependencies of the given command.
</para>
<para>
The parameters of <varname>buildContainer</varname> with an example value are described below:
</para>
<example xml:id='ex-ociTools-buildContainer'>
<title>Build Container</title>
<programlisting>
buildContainer {
args = [ (with pkgs; writeScript "run.sh" ''
#!${bash}/bin/bash
exec ${bash}/bin/bash
'').outPath ]; <co xml:id='ex-ociTools-buildContainer-1' />
mounts = {
"/data" = {
type = "none";
source = "/var/lib/mydata";
options = [ "bind" ];
};
};<co xml:id='ex-ociTools-buildContainer-2' />
readonly = false; <co xml:id='ex-ociTools-buildContainer-3' />
}
</programlisting>
<calloutlist>
<callout arearefs='ex-ociTools-buildContainer-1'>
<para>
<varname>args</varname> specifies a set of arguments to run inside the container. This is the only required argument for <varname>buildContainer</varname>. All referenced packages inside the derivation will be made available inside the container.
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-2'>
<para>
<varname>mounts</varname> specifies additional mount points chosen by the user. By default, only a minimal set of necessary filesystems are mounted into the container (e.g. procfs, cgroupfs).
</para>
</callout>
<callout arearefs='ex-ociTools-buildContainer-3'>
<para>
<varname>readonly</varname> makes the container's rootfs read-only if it is set to <literal>true</literal>. The default value is <literal>false</literal>.
</para>
</callout>
</calloutlist>
</example>
</section>
</section>

View File

@@ -0,0 +1,28 @@
let
inherit (import <nixpkgs> { }) snapTools firefox;
in snapTools.makeSnap {
meta = {
name = "nix-example-firefox";
summary = firefox.meta.description;
architectures = [ "amd64" ];
apps.nix-example-firefox = {
command = "${firefox}/bin/firefox";
plugs = [
"pulseaudio"
"camera"
"browser-support"
"avahi-observe"
"cups-control"
"desktop"
"desktop-legacy"
"gsettings"
"home"
"network"
"mount-observe"
"removable-media"
"x11"
];
};
confinement = "strict";
};
}

View File

@@ -0,0 +1,12 @@
let
inherit (import <nixpkgs> { }) snapTools hello;
in snapTools.makeSnap {
meta = {
name = "hello";
summary = hello.meta.description;
description = hello.meta.longDescription;
architectures = [ "amd64" ];
confinement = "strict";
apps.hello.command = "${hello}/bin/hello";
};
}

View File

@@ -1,71 +0,0 @@
# pkgs.snapTools {#sec-pkgs-snapTools}
`pkgs.snapTools` is a set of functions for creating Snapcraft images. Snap and Snapcraft are not used to perform these operations.
## The makeSnap Function {#ssec-pkgs-snapTools-makeSnap-signature}
`makeSnap` takes a single named argument, `meta`. This argument mirrors [the upstream `snap.yaml` format](https://docs.snapcraft.io/snap-format) exactly.
The `base` should not be specified, as `makeSnap` will force set it.
Currently, `makeSnap` does not support creating GUI stubs.
## Build a Hello World Snap {#ssec-pkgs-snapTools-build-a-snap-hello}
The following expression packages GNU Hello as a Snapcraft snap.
``` {#ex-snapTools-buildSnap-hello .nix}
let
inherit (import <nixpkgs> { }) snapTools hello;
in snapTools.makeSnap {
meta = {
name = "hello";
summary = hello.meta.description;
description = hello.meta.longDescription;
architectures = [ "amd64" ];
confinement = "strict";
apps.hello.command = "${hello}/bin/hello";
};
}
```
`nix-build` this expression and install it with `snap install ./result --dangerous`. `hello` will now be the Snapcraft version of the package.
## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}
Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package.
``` {#ex-snapTools-buildSnap-firefox .nix}
let
inherit (import <nixpkgs> { }) snapTools firefox;
in snapTools.makeSnap {
meta = {
name = "nix-example-firefox";
summary = firefox.meta.description;
architectures = [ "amd64" ];
apps.nix-example-firefox = {
command = "${firefox}/bin/firefox";
plugs = [
"pulseaudio"
"camera"
"browser-support"
"avahi-observe"
"cups-control"
"desktop"
"desktop-legacy"
"gsettings"
"home"
"network"
"mount-observe"
"removable-media"
"x11"
];
};
confinement = "strict";
};
}
```
`nix-build` this expression and install it with `snap install ./result --dangerous`. `nix-example-firefox` will now be the Snapcraft version of the Firefox package.
The specific meaning behind plugs can be looked up in the [Snapcraft interface documentation](https://docs.snapcraft.io/supported-interfaces).

View File

@@ -0,0 +1,59 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-snapTools">
<title>pkgs.snapTools</title>
<para>
<varname>pkgs.snapTools</varname> is a set of functions for creating Snapcraft images. Snap and Snapcraft are not used to perform these operations.
</para>
<section xml:id="ssec-pkgs-snapTools-makeSnap-signature">
<title>The makeSnap Function</title>
<para>
<function>makeSnap</function> takes a single named argument, <parameter>meta</parameter>. This argument mirrors <link xlink:href="https://docs.snapcraft.io/snap-format">the upstream <filename>snap.yaml</filename> format</link> exactly.
</para>
<para>
The <parameter>base</parameter> should not be specified, as <function>makeSnap</function> will force set it.
</para>
<para>
Currently, <function>makeSnap</function> does not support creating GUI stubs.
</para>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-hello">
<title>Build a Hello World Snap</title>
<example xml:id="ex-snapTools-buildSnap-hello">
<title>Making a Hello World Snap</title>
<para>
The following expression packages GNU Hello as a Snapcraft snap.
</para>
<programlisting><xi:include href="./snap/example-hello.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with <command>snap install ./result --dangerous</command>. <command>hello</command> will now be the Snapcraft version of the package.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-snapTools-build-a-snap-firefox">
<title>Build a Graphical Snap</title>
<example xml:id="ex-snapTools-buildSnap-firefox">
<title>Making a Graphical Snap</title>
<para>
Graphical programs require many more integrations with the host. This example uses Firefox because it is one of the most complicated programs we could package.
</para>
<programlisting><xi:include href="./snap/example-firefox.nix" parse="text" /></programlisting>
<para>
<command>nix-build</command> this expression and install it with <command>snap install ./result --dangerous</command>. <command>nix-example-firefox</command> will now be the Snapcraft version of the Firefox package.
</para>
<para>
The specific meaning behind plugs can be looked up in the <link xlink:href="https://docs.snapcraft.io/supported-interfaces">Snapcraft interface documentation</link>.
</para>
</example>
</section>
</section>

View File

@@ -1,129 +0,0 @@
# Cataclysm: Dark Days Ahead {#cataclysm-dark-days-ahead}
## How to install Cataclysm DDA {#how-to-install-cataclysm-dda}
To install the latest stable release of Cataclysm DDA to your profile, execute
`nix-env -f "<nixpkgs>" -iA cataclysm-dda`. For the curses build (build
without tiles), install `cataclysmDDA.stable.curses`. Note: `cataclysm-dda` is
an alias to `cataclysmDDA.stable.tiles`.
If you would like access to a development build of your favorite Git revision,
override `cataclysm-dda-git` (or `cataclysmDDA.git.curses` if you prefer the
curses build):
```nix
cataclysm-dda-git.override {
version = "YYYY-MM-DD";
rev = "YOUR_FAVORITE_REVISION";
sha256 = "CHECKSUM_OF_THE_REVISION";
}
```
The sha256 checksum can be obtained by
```sh
nix-prefetch-url --unpack "https://github.com/CleverRaven/Cataclysm-DDA/archive/${YOUR_FAVORITE_REVISION}.tar.gz"
```
The default configuration directory is `~/.cataclysm-dda`. If you prefer
`$XDG_CONFIG_HOME/cataclysm-dda`, override the derivation:
```nix
cataclysm-dda.override {
useXdgDir = true;
}
```
## Important note for overriding packages {#important-note-for-overriding-packages}
After applying `overrideAttrs`, you need to fix `passthru.pkgs` and
`passthru.withMods` attributes either manually or by using `attachPkgs`:
```nix
let
# You enabled parallel building.
myCDDA = cataclysm-dda-git.overrideAttrs (_: {
enableParallelBuilding = true;
});
# Unfortunately, this refers to the package before overriding and
# parallel building is still disabled.
badExample = myCDDA.withMods (_: []);
inherit (cataclysmDDA) attachPkgs pkgs wrapCDDA;
# You can fix it by hand
goodExample1 = myCDDA.overrideAttrs (old: {
passthru = old.passthru // {
pkgs = pkgs.override { build = goodExample1; };
withMods = wrapCDDA goodExample1;
};
});
# or by using a helper function `attachPkgs`.
goodExample2 = attachPkgs pkgs myCDDA;
in
# badExample # parallel building disabled
# goodExample1.withMods (_: []) # parallel building enabled
goodExample2.withMods (_: []) # parallel building enabled
```
## Customizing with mods {#customizing-with-mods}
To install Cataclysm DDA with mods of your choice, you can use the `withMods`
attribute:
```nix
cataclysm-dda.withMods (mods: with mods; [
tileset.UndeadPeople
])
```
All mods, soundpacks, and tilesets available in nixpkgs are found in
`cataclysmDDA.pkgs`.
Here is an example to modify existing mods and/or add more mods not available
in nixpkgs:
```nix
let
customMods = self: super: lib.recursiveUpdate super {
# Modify existing mod
tileset.UndeadPeople = super.tileset.UndeadPeople.overrideAttrs (old: {
# If you like to apply a patch to the tileset for example
patches = [ ./path/to/your.patch ];
});
# Add another mod
mod.Awesome = cataclysmDDA.buildMod {
modName = "Awesome";
version = "0.x";
src = fetchFromGitHub {
owner = "Someone";
repo = "AwesomeMod";
rev = "...";
sha256 = "...";
};
# Path to be installed in the unpacked source (default: ".")
modRoot = "contents/under/this/path/will/be/installed";
};
# Add another soundpack
soundpack.Fantastic = cataclysmDDA.buildSoundPack {
# ditto
};
# Add another tileset
tileset.SuperDuper = cataclysmDDA.buildTileSet {
# ditto
};
};
in
cataclysm-dda.withMods (mods: with mods.extend customMods; [
tileset.UndeadPeople
mod.Awesome
soundpack.Fantastic
tileset.SuperDuper
])
```

View File

@@ -1,32 +0,0 @@
# Citrix Workspace {#sec-citrix}
The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a remote desktop viewer which provides access to [XenDesktop](https://www.citrix.com/products/xenapp-xendesktop/) installations.
## Basic usage {#sec-citrix-base}
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
## Citrix Self-service {#sec-citrix-selfservice}
The [self-service](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with `citrix_workspace_20_06_0` and later versions.
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this:
```ShellSession
$ storebrowse -C ~/Downloads/receiverconfig.cr
$ selfservice
```
## Custom certificates {#sec-citrix-custom-certs}
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
```nix
with import <nixpkgs> { config.allowUnfree = true; };
let
extraCerts = [
./custom-cert-1.pem
./custom-cert-2.pem # ...
];
in citrix_workspace.override { inherit extraCerts; }
```

View File

@@ -0,0 +1,44 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-citrix">
<title>Citrix Workspace</title>
<para>
<note>
<para>
Please note that the <literal>citrix_receiver</literal> package has been deprecated since its development was <link xlink:href="https://docs.citrix.com/en-us/citrix-workspace-app.html">discontinued by upstream</link> and has been replaced by <link xlink:href="https://www.citrix.com/products/workspace-app/">the citrix workspace app</link>.
</para>
</note>
<link xlink:href="https://www.citrix.com/products/receiver/">Citrix Receiver</link> and <link xlink:href="https://www.citrix.com/products/workspace-app/">Citrix Workspace App</link> are a remote desktop viewers which provide access to <link xlink:href="https://www.citrix.com/products/xenapp-xendesktop/">XenDesktop</link> installations.
</para>
<section xml:id="sec-citrix-base">
<title>Basic usage</title>
<para>
The tarball archive needs to be downloaded manually as the license agreements of the vendor for <link xlink:href="https://www.citrix.com/downloads/citrix-receiver/">Citrix Receiver</link> or <link xlink:href="https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html">Citrix Workspace</link> need to be accepted first. Then run <command>nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz</command>. With the archive available in the store the package can be built and installed with Nix.
</para>
<warning>
<title>Caution with <command>nix-shell</command> installs</title>
<para>
It's recommended to install <literal>Citrix Receiver</literal> and/or <literal>Citrix Workspace</literal> using <literal>nix-env -i</literal> or globally to ensure that the <literal>.desktop</literal> files are installed properly into <literal>$XDG_CONFIG_DIRS</literal>. Otherwise it won't be possible to open <literal>.ica</literal> files automatically from the browser to start a Citrix connection.
</para>
</warning>
</section>
<section xml:id="sec-citrix-custom-certs">
<title>Custom certificates</title>
<para>
The <literal>Citrix Workspace App</literal> in <literal>nixpkgs</literal> trusts several certificates <link xlink:href="https://curl.haxx.se/docs/caextract.html">from the Mozilla database</link> by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in <link xlink:href="https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/"><literal>$ICAROOT</literal></link>, however this directory is a store path in <literal>nixpkgs</literal>. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using <literal>symlinkJoin</literal>:
<programlisting>
<![CDATA[with import <nixpkgs> { config.allowUnfree = true; };
let extraCerts = [ ./custom-cert-1.pem ./custom-cert-2.pem /* ... */ ]; in
citrix_workspace.override {
inherit extraCerts;
}]]>
</programlisting>
</para>
</section>
</section>

View File

@@ -1,13 +0,0 @@
# DLib {#dlib}
[DLib](http://dlib.net/) is a modern, C++-based toolkit which provides several machine learning algorithms.
## Compiling without AVX support {#compiling-without-avx-support}
Especially older CPUs don't support [AVX](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions) (Advanced Vector Extensions) instructions that are used by DLib to optimize their algorithms.
On the affected hardware, errors like `Illegal instruction` will occur. In those cases, AVX support needs to be disabled:
```nix
self: super: { dlib = super.dlib.override { avxSupport = false; }; }
```

View File

@@ -0,0 +1,24 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="dlib">
<title>DLib</title>
<para>
<link xlink:href="http://dlib.net/">DLib</link> is a modern, C++-based toolkit which provides several machine learning algorithms.
</para>
<section xml:id="compiling-without-avx-support">
<title>Compiling without AVX support</title>
<para>
Especially older CPUs don't support <link xlink:href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">AVX</link> (<abbrev>Advanced Vector Extensions</abbrev>) instructions that are used by DLib to optimize their algorithms.
</para>
<para>
On the affected hardware errors like <literal>Illegal instruction</literal> will occur. In those cases AVX support needs to be disabled:
<programlisting>self: super: {
dlib = super.dlib.override { avxSupport = false; };
}</programlisting>
</para>
</section>
</section>

View File

@@ -1,64 +0,0 @@
# Eclipse {#sec-eclipse}
The Nix expressions related to the Eclipse platform and IDE are in [`pkgs/applications/editors/eclipse`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/eclipse).
Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list available Eclipse packages by issuing the command:
```ShellSession
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
```
Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add:
```nix
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [ plugins.color-theme ];
};
}
```
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running:
```ShellSession
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
```
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument, where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available, then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
Expanding the previous example with two plugins using the above functions, we have:
```nix
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [
plugins.color-theme
(plugins.buildEclipsePlugin {
name = "myplugin1-1.0";
srcFeature = fetchurl {
url = "http:///features/myplugin1.jar";
sha256 = "123";
};
srcPlugin = fetchurl {
url = "http:///plugins/myplugin1.jar";
sha256 = "123";
};
});
(plugins.buildEclipseUpdateSite {
name = "myplugin2-1.0";
src = fetchurl {
stripRoot = false;
url = "http:///myplugin2.zip";
sha256 = "123";
};
});
];
};
}
```

View File

@@ -0,0 +1,72 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-eclipse">
<title>Eclipse</title>
<para>
The Nix expressions related to the Eclipse platform and IDE are in <link xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/editors/eclipse"><filename>pkgs/applications/editors/eclipse</filename></link>.
</para>
<para>
Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list available Eclipse packages by issuing the command:
<screen>
<prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses --description
</screen>
Once an Eclipse variant is installed, it can be run using the <command>eclipse</command> command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
</para>
<para>
If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an <emphasis>Eclipse environment</emphasis>. This type of environment is created using the function <varname>eclipseWithPlugins</varname> found inside the <varname>nixpkgs.eclipses</varname> attribute set. This function takes as argument <literal>{ eclipse, plugins ? [], jvmArgs ? [] }</literal> where <varname>eclipse</varname> is one of the Eclipse packages described above, <varname>plugins</varname> is a list of plugin derivations, and <varname>jvmArgs</varname> is a list of arguments given to the JVM running Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
<screen>
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [ plugins.color-theme ];
};
}
</screen>
to your Nixpkgs configuration (<filename>~/.config/nixpkgs/config.nix</filename>) and install it by running <command>nix-env -f '&lt;nixpkgs&gt;' -iA myEclipse</command> and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using <varname>eclipseWithPlugins</varname> by running
<screen>
<prompt>$ </prompt>nix-env -f '&lt;nixpkgs&gt;' -qaP -A eclipses.plugins --description
</screen>
</para>
<para>
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the <varname>buildEclipseUpdateSite</varname> and <varname>buildEclipsePlugin</varname> functions found in the <varname>nixpkgs.eclipses.plugins</varname> attribute set. Use the <varname>buildEclipseUpdateSite</varname> function to install a plugin distributed as an Eclipse update site. This function takes <literal>{ name, src }</literal> as argument where <literal>src</literal> indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the <varname>buildEclipsePlugin</varname> function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument <literal>{ name, srcFeature, srcPlugin }</literal> where <literal>srcFeature</literal> and <literal>srcPlugin</literal> are the feature and plugin JARs, respectively.
</para>
<para>
Expanding the previous example with two plugins using the above functions, we have
<screen>
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [
plugins.color-theme
(plugins.buildEclipsePlugin {
name = "myplugin1-1.0";
srcFeature = fetchurl {
url = "http://…/features/myplugin1.jar";
sha256 = "123…";
};
srcPlugin = fetchurl {
url = "http://…/plugins/myplugin1.jar";
sha256 = "123…";
};
});
(plugins.buildEclipseUpdateSite {
name = "myplugin2-1.0";
src = fetchurl {
stripRoot = false;
url = "http://…/myplugin2.zip";
sha256 = "123…";
};
});
];
};
}
</screen>
</para>
</section>

View File

@@ -1,11 +0,0 @@
# Elm {#sec-elm}
To start a development environment, run:
```ShellSession
nix-shell -p elmPackages.elm elmPackages.elm-format
```
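If you prefer a checked-in shell file, a minimal `shell.nix` along these lines (a sketch using the same packages) achieves the same with a plain `nix-shell`:

```nix
let
  pkgs = import <nixpkgs> { };
in pkgs.mkShell {
  # Elm compiler and formatter for local development
  buildInputs = with pkgs.elmPackages; [ elm elm-format ];
}
```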
To update the Elm compiler, see `nixpkgs/pkgs/development/compilers/elm/README.md`.
To package Elm applications, [read about elm2nix](https://github.com/hercules-ci/elm2nix#elm2nix).

View File

@@ -0,0 +1,17 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-elm">
<title>Elm</title>
<para>
To start a development environment, run <command>nix-shell -p elmPackages.elm elmPackages.elm-format</command>.
</para>
<para>
To update the Elm compiler, see <filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>.
</para>
<para>
To package Elm applications, <link xlink:href="https://github.com/hercules-ci/elm2nix#elm2nix">read about elm2nix</link>.
</para>
</section>

View File

@@ -1,119 +0,0 @@
# Emacs {#sec-emacs}
## Configuring Emacs {#sec-emacs-config}
The Emacs package comes with some extra helpers to make it easier to configure. `emacs.pkgs.withPackages` allows you to manage packages from ELPA. This means that you will not have to install those packages from within Emacs. For instance, if you wanted to use `company`, `counsel`, `flycheck`, `ivy`, `magit`, `projectile`, and `use-package` you could use this as a `~/.config/nixpkgs/config.nix` override:
```nix
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacs.pkgs.withPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}
```
You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
```nix
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("<C-tab>" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacs.pkgs.withPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
```
This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`.
```nix
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesFor emacs).overrideScope' overrides).withPackages
(p: with p; [
# here both these package will use haskell-mode of our own choice
ghc-mod
dante
])
```

View File

@@ -0,0 +1,131 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-emacs">
<title>Emacs</title>
<section xml:id="sec-emacs-config">
<title>Configuring Emacs</title>
<para>
The Emacs package comes with some extra helpers to make it easier to configure. <varname>emacsWithPackages</varname> allows you to manage packages from ELPA. This means that you will not have to install those packages from within Emacs. For instance, if you wanted to use <literal>company</literal>, <literal>counsel</literal>, <literal>flycheck</literal>, <literal>ivy</literal>, <literal>magit</literal>, <literal>projectile</literal>, and <literal>use-package</literal> you could use this as a <filename>~/.config/nixpkgs/config.nix</filename> override:
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; {
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
}
}
</screen>
<para>
You can install it like any other packages via <command>nix-env -iA myEmacs</command>. However, this will only install those packages. It will not <literal>configure</literal> them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a <filename>default.el</filename> file in <filename>/share/emacs/site-start/</filename>. Emacs knows to load this file automatically when it starts.
</para>
<screen>
{
packageOverrides = pkgs: with pkgs; rec {
myEmacsConfig = writeText "default.el" ''
;; initialize package
(require 'package)
(package-initialize 'noactivate)
(eval-when-compile
(require 'use-package))
;; load some packages
(use-package company
:bind ("&lt;C-tab&gt;" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1
:config
(global-company-mode))
(use-package counsel
:commands (counsel-descbinds)
:bind (([remap execute-extended-command] . counsel-M-x)
("C-x C-f" . counsel-find-file)
("C-c g" . counsel-git)
("C-c j" . counsel-git-grep)
("C-c k" . counsel-ag)
("C-x l" . counsel-locate)
("M-y" . counsel-yank-pop)))
(use-package flycheck
:defer 2
:config (global-flycheck-mode))
(use-package ivy
:defer 1
:bind (("C-c C-r" . ivy-resume)
("C-x C-b" . ivy-switch-buffer)
:map ivy-minibuffer-map
("C-j" . ivy-call))
:diminish ivy-mode
:commands ivy-mode
:config
(ivy-mode 1))
(use-package magit
:defer
:if (executable-find "git")
:bind (("C-x g" . magit-status)
("C-x G" . magit-dispatch-popup))
:init
(setq magit-completing-read-function 'ivy-completing-read))
(use-package projectile
:commands projectile-mode
:bind-keymap ("C-c p" . projectile-command-map)
:defer 5
:config
(projectile-global-mode))
'';
myEmacs = emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
(runCommand "default.el" {} ''
mkdir -p $out/share/emacs/site-lisp
cp ${myEmacsConfig} $out/share/emacs/site-lisp/default.el
'')
company
counsel
flycheck
ivy
magit
projectile
use-package
]));
};
}
</screen>
<para>
This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing <command>-q</command> to the Emacs command.
</para>
<para>
Sometimes <varname>emacsWithPackages</varname> is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in <filename>pkgs/top-level/emacs-packages.nix</filename>). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package you can use <varname>overrideScope'</varname>.
</para>
<screen>
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesGen emacs).overrideScope' overrides).emacsWithPackages (p: with p; [
# here both these package will use haskell-mode of our own choice
ghc-mod
dante
])
</screen>
</section>
</section>

View File

@@ -1,18 +0,0 @@
# /etc files {#etc}
Certain calls in glibc require access to runtime files found in `/etc` such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
On non-NixOS distributions these files are typically provided by packages (e.g., [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility if code relies on these files being present.
If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your `buildInputs`, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a setup hook.
```bash
> nix-shell -p iana-etc
[nix-shell:~]$ env | grep NIX_ETC
NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/services
NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
```
The Nixpkgs version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not* set, then it will attempt to find the files at the default location within `/etc`.
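For example, a package whose build or test suite calls `getprotobyname` can simply add `iana-etc` to its `buildInputs`; the setup hook then exports the variables above for the patched glibc to pick up. A sketch (the package name and source are placeholders):

```nix
{ stdenv, iana-etc }:

stdenv.mkDerivation {
  pname = "mytool";          # placeholder
  version = "1.0";
  src = ./.;                 # placeholder source

  # Provides NIX_ETC_PROTOCOLS / NIX_ETC_SERVICES during the build
  buildInputs = [ iana-etc ];
}
```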

View File

@@ -1,55 +0,0 @@
# Firefox {#sec-firefox}
## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies}
The `wrapFirefox` function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of `fetchFirefoxAddon`, this allows you to build a Firefox version that already comes with add-ons pre-installed:
```nix
{
# Nix firefox addons only work with the firefox-esr package.
myFirefox = wrapFirefox firefox-esr-unwrapped {
nixExtensions = [
(fetchFirefoxAddon {
name = "ublock"; # Has to be unique!
url = "https://addons.mozilla.org/firefox/downloads/file/3679754/ublock_origin-1.31.0-an+fx.xpi";
sha256 = "1h768ljlh3pi23l27qp961v1hd0nbj2vasgy11bmcrlqp40zgvnr";
})
];
extraPolicies = {
CaptivePortal = false;
DisableFirefoxStudies = true;
DisablePocket = true;
DisableTelemetry = true;
DisableFirefoxAccounts = true;
FirefoxHome = {
Pocket = false;
Snippets = false;
};
UserMessaging = {
ExtensionRecommendations = false;
SkipOnboarding = true;
};
SecurityDevices = {
# Use a proxy module rather than `nixpkgs.config.firefox.smartcardSupport = true`
"PKCS#11 Proxy Module" = "${pkgs.p11-kit}/lib/p11-kit-proxy.so";
};
};
extraPrefs = ''
// Show more ssl cert infos
lockPref("security.identityblock.show_extended_validation", true);
'';
};
}
```
If `nixExtensions != null`, then all manually installed add-ons will be uninstalled from your browser profile.
To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox URL bar: `about:policies#documentation`.
Nix-installed add-ons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded add-ons are checksummed and manual add-ons can't be installed. Also, make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an add-on from the `nixExtensions` array, rebuild and start Firefox: the removed add-on will be completely removed with all of its settings.
## Troubleshooting {#sec-firefox-troubleshooting}
If add-ons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox no longer provides the ability to disable signature verification for add-ons, so Nix add-ons get disabled by the normal Firefox binary.
If add-ons do not appear installed despite being defined in your nix configuration file, reset the local add-on state of your Firefox profile by clicking `Help -> More Troubleshooting Information -> Refresh Firefox`. This can happen if you switch from manual add-on mode to nix add-on mode and then back to manual mode and then again to nix add-on mode.

View File

@@ -1,50 +0,0 @@
# Fish {#sec-fish}
Fish is a "smart and user-friendly command line shell" with support for plugins.
## Vendor Fish scripts {#sec-fish-vendor}
Any package may ship its own Fish completions, configuration snippets, and
functions. Those should be installed to
`$out/share/fish/vendor_{completions,conf,functions}.d` respectively.
When the `programs.fish.enable` and
`programs.fish.vendor.{completions,config,functions}.enable` options from the
NixOS Fish module are set to true, those paths are symlinked in the current
system environment and automatically loaded by Fish.
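For instance, a package that ships a completion script could install it from its `postInstall` phase like this (a sketch; the file name is a placeholder):

```nix
postInstall = ''
  install -Dm644 completions/mytool.fish \
    $out/share/fish/vendor_completions.d/mytool.fish
'';
```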
## Packaging Fish plugins {#sec-fish-plugins-pkg}
While packages providing standalone executables belong to the top level,
packages which have the sole purpose of extending Fish belong to the
`fishPlugins` scope and should be registered in
`pkgs/shells/fish/plugins/default.nix`.
The `buildFishPlugin` utility function can be used to automatically copy Fish
scripts from `$src/{completions,conf,conf.d,functions}` to the standard vendor
installation paths. It also sets up the test environment so that the optional
`checkPhase` is executed in a Fish shell with other already packaged plugins
and package-local Fish functions specified in `checkPlugins` and
`checkFunctionDirs` respectively.
See `pkgs/shells/fish/plugins/pure.nix` for an example of a Fish plugin package
using `buildFishPlugin` and running unit tests with the `fishtape` test runner.
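A minimal plugin expression might look like this (a sketch; the repository coordinates and hash are placeholders, and the optional `checkPhase` is omitted):

```nix
{ lib, buildFishPlugin, fetchFromGitHub }:

buildFishPlugin rec {
  pname = "my-fish-plugin";   # placeholder
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "someone";        # placeholder
    repo = "my-fish-plugin";  # placeholder
    rev = version;
    sha256 = lib.fakeSha256;  # replace with the real hash
  };
}
```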
## Fish wrapper {#sec-fish-wrapper}
The `wrapFish` package is a wrapper around Fish which can be used to create
Fish shells initialized with some plugins as well as completions, configuration
snippets and functions sourced from the given paths. This provides a convenient
way to test Fish plugins and scripts without having to alter the environment.
```nix
wrapFish {
pluginPkgs = with fishPlugins; [ pure foreign-env ];
completionDirs = [];
functionDirs = [];
confDirs = [ "/path/to/some/fish/init/dir/" ];
}
```

View File

@@ -1,45 +0,0 @@
# FUSE {#sec-fuse}
Some packages rely on
[FUSE](https://www.kernel.org/doc/html/latest/filesystems/fuse.html) to provide
support for additional filesystems not supported by the kernel.
In general, FUSE software is primarily developed for Linux, but much of it can
also run on macOS. Nixpkgs supports FUSE packages on macOS, but it requires
[macFUSE](https://osxfuse.github.io) to be installed outside of Nix. macFUSE
currently isn't packaged in Nixpkgs mainly because it includes a kernel
extension, which isn't supported by Nix outside of NixOS.
If a package fails to run on macOS with an error message similar to the
following, it's a likely sign that you need to have macFUSE installed.
```
dyld: Library not loaded: /usr/local/lib/libfuse.2.dylib
Referenced from: /nix/store/w8bi72bssv0bnxhwfw3xr1mvn7myf37x-sshfs-fuse-2.10/bin/sshfs
Reason: image not found
[1] 92299 abort /nix/store/w8bi72bssv0bnxhwfw3xr1mvn7myf37x-sshfs-fuse-2.10/bin/sshfs
```
Package maintainers may often encounter the following error when building FUSE
packages on macOS:
```
checking for fuse.h... no
configure: error: No fuse.h found.
```
This happens on autoconf-based projects that use `AC_CHECK_HEADERS` or
`AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package
is included in `buildInputs`. It happens because libfuse headers throw an error
on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do define
`FUSE_USE_VERSION`, but only inside C source files. This results in the above
error at configure time because the configure script would attempt to compile
sample FUSE programs without defining `FUSE_USE_VERSION`.
There are two possible solutions for this problem in Nixpkgs:
1. Pass `FUSE_USE_VERSION` to the configure script by adding
`CFLAGS=-DFUSE_USE_VERSION=25` in `configureFlags`. The actual value would
have to match the definition used in the upstream source code.
2. Remove `AC_CHECK_HEADERS` / `AC_CHECK_LIBS` for libfuse.
However, a better solution might be to fix the build script upstream to use
`PKG_CHECK_MODULES` instead. This approach wouldn't suffer from the problem that
`AC_CHECK_HEADERS`/`AC_CHECK_LIBS` has, at the price of introducing a dependency
on pkg-config.
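As a sketch of the first workaround, an override for a hypothetical autoconf-based FUSE package (here called `somefusefs`) could look like this; the FUSE API version is a placeholder and must match the upstream sources:

```nix
somefusefs.overrideAttrs (old: {
  # Define FUSE_USE_VERSION for the configure checks as well.
  configureFlags = (old.configureFlags or []) ++ [
    "CFLAGS=-DFUSE_USE_VERSION=25"
  ];
})
```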

View File

@@ -1,38 +0,0 @@
# ibus-engines.typing-booster {#sec-ibus-typing-booster}
This package is an ibus-based completion method to speed up typing.
## Activating the engine {#sec-ibus-typing-booster-activate}
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
```nix
{ pkgs, ... }: {
i18n.inputMethod = {
enabled = "ibus";
ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
};
}
```
## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell}
The IBus engine is based on `hunspell` to support completion in many languages. By default, the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
```nix
ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
```
_Note: each language passed to `langs` must be an attribute name in `pkgs.hunspellDicts`._
## Built-in emoji picker {#sec-ibus-typing-booster-emoji-picker}
The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed:
On NixOS, it can be installed using the following expression:
```nix
{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }
```

View File

@@ -0,0 +1,57 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-ibus-typing-booster">
<title>ibus-engines.typing-booster</title>
<para>
This package is an ibus-based completion method to speed up typing.
</para>
<section xml:id="sec-ibus-typing-booster-activate">
<title>Activating the engine</title>
<para>
IBus needs to be configured accordingly to activate <literal>typing-booster</literal>. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the <link xlink:href="https://mike-fabian.github.io/ibus-typing-booster/documentation.html">upstream docs</link>.
</para>
<para>
On NixOS you need to explicitly enable <literal>ibus</literal> with given engines before customizing your desktop to use <literal>typing-booster</literal>. This can be achieved using the <literal>ibus</literal> module:
<programlisting>{ pkgs, ... }: {
i18n.inputMethod = {
enabled = "ibus";
ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
};
}</programlisting>
</para>
</section>
<section xml:id="sec-ibus-typing-booster-customize-hunspell">
<title>Using custom hunspell dictionaries</title>
<para>
The IBus engine is based on <literal>hunspell</literal> to support completion in many languages. By default the dictionaries <literal>de-de</literal>, <literal>en-us</literal>, <literal>fr-moderne</literal>, <literal>es-es</literal>, <literal>it-it</literal>, <literal>sv-se</literal> and <literal>sv-fi</literal> are in use. To add another dictionary, the package can be overridden like this:
<programlisting>ibus-engines.typing-booster.override {
langs = [ "de-at" "en-gb" ];
}</programlisting>
</para>
<para>
<emphasis>Note: each language passed to <literal>langs</literal> must be an attribute name in <literal>pkgs.hunspellDicts</literal>.</emphasis>
</para>
</section>
<section xml:id="sec-ibus-typing-booster-emoji-picker">
<title>Built-in emoji picker</title>
<para>
The <literal>ibus-engines.typing-booster</literal> package contains a program named <literal>emoji-picker</literal>. To display all emojis correctly, a special font such as <literal>noto-fonts-emoji</literal> is needed:
</para>
<para>
On NixOS it can be installed using the following expression:
<programlisting>{ pkgs, ... }: {
fonts.fonts = with pkgs; [ noto-fonts-emoji ];
}</programlisting>
</para>
</section>
</section>

View File

@@ -5,25 +5,20 @@
<para>
This chapter contains information about how to use and maintain the Nix expressions for a number of specific packages, such as the Linux kernel or X.org.
</para>
<xi:include href="citrix.section.xml" />
<xi:include href="dlib.section.xml" />
<xi:include href="eclipse.section.xml" />
<xi:include href="elm.section.xml" />
<xi:include href="emacs.section.xml" />
<xi:include href="firefox.section.xml" />
<xi:include href="fish.section.xml" />
<xi:include href="fuse.section.xml" />
<xi:include href="ibus.section.xml" />
<xi:include href="kakoune.section.xml" />
<xi:include href="linux.section.xml" />
<xi:include href="locales.section.xml" />
<xi:include href="etc-files.section.xml" />
<xi:include href="nginx.section.xml" />
<xi:include href="opengl.section.xml" />
<xi:include href="shell-helpers.section.xml" />
<xi:include href="steam.section.xml" />
<xi:include href="cataclysm-dda.section.xml" />
<xi:include href="urxvt.section.xml" />
<xi:include href="weechat.section.xml" />
<xi:include href="xorg.section.xml" />
<xi:include href="citrix.xml" />
<xi:include href="dlib.xml" />
<xi:include href="eclipse.xml" />
<xi:include href="elm.xml" />
<xi:include href="emacs.xml" />
<xi:include href="ibus.xml" />
<xi:include href="kakoune.xml" />
<xi:include href="linux.xml" />
<xi:include href="locales.xml" />
<xi:include href="nginx.xml" />
<xi:include href="opengl.xml" />
<xi:include href="shell-helpers.xml" />
<xi:include href="steam.xml" />
<xi:include href="urxvt.xml" />
<xi:include href="weechat.xml" />
<xi:include href="xorg.xml" />
</chapter>

View File

@@ -1,9 +0,0 @@
# Kakoune {#sec-kakoune}
Kakoune can be built to autoload plugins:
```nix
(kakoune.override {
plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
})
```

View File

@@ -0,0 +1,14 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-kakoune">
<title>Kakoune</title>
<para>
Kakoune can be built to autoload plugins:
<programlisting>(kakoune.override {
configure = {
plugins = with pkgs.kakounePlugins; [ parinfer-rust ];
};
})</programlisting>
</para>
</section>

View File

@@ -1,41 +0,0 @@
# Linux kernel {#sec-linux-kernel}
The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/kernel`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel).
The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel's `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn't enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesn't have to build the external `iwlwifi` package:
```nix
modulesTree = [kernel]
++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi
++ ...;
```
How to add a new (major) version of the Linux kernel to Nixpkgs:
1. Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it.
2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
3. Now we're going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
1. Make a copy from the old config (e.g., `config-2.6.21-i686-smp`) to the new one (e.g., `config-2.6.22-i686-smp`).
2. Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., don't enable some feature on `i686` and disable it on `x86_64`).
4. If needed, you can also run `make menuconfig`:
```ShellSession
$ nix-env -f "<nixpkgs>" -iA ncurses
$ export NIX_CFLAGS_LINK=-lncurses
$ make menuconfig ARCH=arch
```
5. Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`).
4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `linux-kernels.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren't backwards compatible with older kernels, you may need to keep the older versions around.

View File

@@ -0,0 +1,85 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-linux-kernel">
<title>Linux kernel</title>
<para>
The Nix expressions to build the Linux kernel are in <link
xlink:href="https://github.com/NixOS/nixpkgs/blob/master/pkgs/os-specific/linux/kernel"><filename>pkgs/os-specific/linux/kernel</filename></link>.
</para>
<para>
The function that builds the kernel has an argument <varname>kernelPatches</varname> which should be a list of <literal>{name, patch, extraConfig}</literal> attribute sets, where <varname>name</varname> is the name of the patch (which is included in the kernel's <varname>meta.description</varname> attribute), <varname>patch</varname> is the patch itself (possibly compressed), and <varname>extraConfig</varname> (optional) is a string specifying extra options to be concatenated to the kernel configuration file (<filename>.config</filename>).
</para>
<para>
The kernel derivation exports an attribute <varname>features</varname> specifying whether optional functionality is or isn't enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the <varname>iwlwifi</varname> feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesn't have to build the external <varname>iwlwifi</varname> package:
<programlisting>
modulesTree = [kernel]
++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi
++ ...;
</programlisting>
</para>
<para>
How to add a new (major) version of the Linux kernel to Nixpkgs:
<orderedlist>
<listitem>
<para>
Copy the old Nix expression (e.g. <filename>linux-2.6.21.nix</filename>) to the new one (e.g. <filename>linux-2.6.22.nix</filename>) and update it.
</para>
</listitem>
<listitem>
<para>
Add the new kernel to <filename>all-packages.nix</filename> (e.g., create an attribute <varname>kernel_2_6_22</varname>).
</para>
</listitem>
<listitem>
<para>
Now we're going to update the kernel configuration. First unpack the kernel. Then for each supported platform (<literal>i686</literal>, <literal>x86_64</literal>, <literal>uml</literal>) do the following:
<orderedlist>
<listitem>
<para>
Make a copy from the old config (e.g. <filename>config-2.6.21-i686-smp</filename>) to the new one (e.g. <filename>config-2.6.22-i686-smp</filename>).
</para>
</listitem>
<listitem>
<para>
Copy the config file for this platform (e.g. <filename>config-2.6.22-i686-smp</filename>) to <filename>.config</filename> in the kernel source tree.
</para>
</listitem>
<listitem>
<para>
Run <literal>make oldconfig ARCH=<replaceable>{i386,x86_64,um}</replaceable></literal> and answer all questions. (For the uml configuration, also add <literal>SHELL=bash</literal>.) Make sure to keep the configuration consistent between platforms (i.e. don't enable some feature on <literal>i686</literal> and disable it on <literal>x86_64</literal>).
</para>
</listitem>
<listitem>
<para>
If needed you can also run <literal>make menuconfig</literal>:
<screen>
<prompt>$ </prompt>nix-env -i ncurses
<prompt>$ </prompt>export NIX_CFLAGS_LINK=-lncurses
<prompt>$ </prompt>make menuconfig ARCH=<replaceable>arch</replaceable></screen>
</para>
</listitem>
<listitem>
<para>
Copy <filename>.config</filename> over the new config file (e.g. <filename>config-2.6.22-i686-smp</filename>).
</para>
</listitem>
</orderedlist>
</para>
</listitem>
<listitem>
<para>
Test building the kernel: <literal>nix-build -A kernel_2_6_22</literal>. If it compiles, ship it! For extra credit, try booting NixOS with it.
</para>
</listitem>
<listitem>
<para>
It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the <varname>linuxPackagesFor</varname> function in <filename>all-packages.nix</filename> (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren't backwards compatible with older kernels, you may need to keep the older versions around.
</para>
</listitem>
</orderedlist>
</para>
</section>

View File

@@ -1,5 +0,0 @@
# Locales {#locales}
To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on the `LOCALE_ARCHIVE` environment variable.
On non-NixOS distributions, this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the `LOCALE_ARCHIVE` variable pointing to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters `allLocales` and `locales` of the package.
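A sketch of a reduced locale set built through those parameters; the chosen locales are only an example:

```nix
# Build a smaller locale archive containing just the listed locales.
glibcLocales.override {
  allLocales = false;
  locales = [ "en_US.UTF-8/UTF-8" "de_DE.UTF-8/UTF-8" ];
}
```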

View File

@@ -0,0 +1,13 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="locales">
<title>Locales</title>
<para>
To allow simultaneous use of packages linked against different versions of <literal>glibc</literal> with different locale archive formats, Nixpkgs patches <literal>glibc</literal> to rely on the <literal>LOCALE_ARCHIVE</literal> environment variable.
</para>
<para>
On non-NixOS distributions this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is exporting the <literal>LOCALE_ARCHIVE</literal> variable pointing to <literal>${glibcLocales}/lib/locale/locale-archive</literal>. The drawback (and the reason this is not the default) is the relatively large (a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding parameters <literal>allLocales</literal> and <literal>locales</literal> of the package.
</para>
</section>

View File

@@ -1,11 +0,0 @@
# Nginx {#sec-nginx}
[Nginx](https://nginx.org) is a reverse proxy and lightweight webserver.
## ETags on static files served from the Nix store {#sec-nginx-etag}
HTTP has a couple of different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store because all file timestamps are set to 0 (for reasons related to build reproducibility).
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g., a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to modify any configuration to get this behavior.

View File

@@ -0,0 +1,25 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-nginx">
<title>Nginx</title>
<para>
<link xlink:href="https://nginx.org/">Nginx</link> is a reverse proxy and lightweight webserver.
</para>
<section xml:id="sec-nginx-etag">
<title>ETags on static files served from the Nix store</title>
<para>
HTTP has a couple different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the <link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><literal>Last-Modified</literal></link> response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the <literal>Last-Modified</literal> header. This doesn't give the desired behavior when the file is in the Nix store, because all file timestamps are set to 0 (for reasons related to build reproducibility).
</para>
<para>
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the <link xlink:href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag"><literal>ETag</literal></link> response header. The value of the <literal>ETag</literal> header specifies some identifier for the particular content that the server is sending (e.g. a hash). When a client makes a second request for the same resource, it sends that value back in an <literal>If-None-Match</literal> header. If the ETag value is unchanged, then the server does not need to resend the content.
</para>
<para>
As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of <filename>/nix/store</filename>, the hash in the store path is used as the <literal>ETag</literal> header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to modify any configuration to get this behavior.
</para>
</section>
</section>

View File

@@ -1,15 +0,0 @@
# OpenGL {#sec-opengl}
OpenGL support varies depending on which hardware is used and which drivers are available and loaded.
Broadly, we support both GL vendors: Mesa and NVIDIA.
## NixOS Desktop {#nixos-desktop}
The NixOS desktop or other non-headless configurations are the primary target for OpenGL libraries and applications. The current solution for discovering which drivers are available is based on [libglvnd](https://gitlab.freedesktop.org/glvnd/libglvnd). `libglvnd` performs "vendor-neutral dispatch", trying a variety of techniques to find the system's GL implementation. In practice, this will be either via standard GLX for X11 users or EGL for Wayland users, and supporting either NVIDIA or Mesa extensions.
## Nix on GNU/Linux {#nix-on-gnulinux}
If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
For proprietary video drivers, you might have luck with also adding the corresponding video driver package.
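One way to do that is to wrap the program so that its `LD_LIBRARY_PATH` is prefixed with the Nixpkgs GL libraries. The following is only a sketch, not an official wrapper, and `someGLApp` is a placeholder package name:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.symlinkJoin {
  name = "someGLApp-with-nixpkgs-gl";
  paths = [ pkgs.someGLApp ]; # placeholder package
  nativeBuildInputs = [ pkgs.makeWrapper ];
  postBuild = ''
    # Prepend the Nixpkgs GL vendor libraries to the program's library path.
    wrapProgram $out/bin/someGLApp \
      --prefix LD_LIBRARY_PATH : ${pkgs.lib.makeLibraryPath [ pkgs.libglvnd pkgs.mesa.drivers ]}
  '';
}
```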

View File

@@ -0,0 +1,9 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-opengl">
<title>OpenGL</title>
<para>
Packages that use OpenGL have NixOS desktop as their primary target. The current solution for loading the GPU-specific drivers is based on <literal>libglvnd</literal> and looks for the driver implementation in <literal>LD_LIBRARY_PATH</literal>. If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of <literal>libglvnd</literal> and <literal>mesa_drivers</literal> in <literal>LD_LIBRARY_PATH</literal>. For proprietary video drivers you might have luck with also adding the corresponding video driver package.
</para>
</section>

View File

@@ -1,12 +0,0 @@
# Interactive shell helpers {#sec-shell-helpers}
Some packages provide shell integration to make them more useful. But unlike other systems, Nix doesn't have a standard `share` directory location. This is why a number of `PACKAGE-share` scripts are shipped that print the location of the corresponding shared folder. The current list of such packages is as follows:
- `fzf` : `fzf-share`
E.g. `fzf` can then be used in the `.bashrc` like this:
```bash
source "$(fzf-share)/completion.bash"
source "$(fzf-share)/key-bindings.bash"
```

View File

@@ -0,0 +1,25 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-shell-helpers">
<title>Interactive shell helpers</title>
<para>
Some packages provide shell integration to make them more useful. But unlike other systems, nix doesn't have a standard share directory location. This is why a number of <command>PACKAGE-share</command> scripts are shipped that print the location of the corresponding shared folder. The current list of such packages is as follows:
<itemizedlist>
<listitem>
<para>
<literal>autojump</literal>: <command>autojump-share</command>
</para>
</listitem>
<listitem>
<para>
<literal>fzf</literal>: <command>fzf-share</command>
</para>
</listitem>
</itemizedlist>
E.g. <literal>autojump</literal> can then be used in the .bashrc like this:
<screen>
source "$(autojump-share)/autojump.bash"
</screen>
</para>
</section>

View File

@@ -1,63 +0,0 @@
# Steam {#sec-steam}
## Steam in Nix {#sec-steam-nix}
Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the steam binary, which is also in `$HOME`.
Nix problems and constraints:
- We don't have `/bin/bash` and many scripts point there. Same thing for `/usr/bin/python`.
- We don't have the dynamic loader in `/lib`.
- The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam.
- The steam binary cannot be patched, it's also checked.
The current approach to deploy Steam in NixOS is composing an FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non-FHS environment.
## How to play {#sec-steam-play}
Use `programs.steam.enable = true;` if you want to add Steam to `systemPackages` and also enable a few workarounds, as well as support for the Steam controller and other Steam-supported controllers such as the DualShock 4 or the Nintendo Switch Pro Controller.
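For example, in a NixOS configuration module this is simply:

```nix
{ ... }: {
  programs.steam.enable = true;
}
```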
## Troubleshooting {#sec-steam-troub}
- **Steam fails to start. What do I do?**
Try to run
```ShellSession
strace steam
```
to see what is causing steam to fail.
- **Using the FOSS Radeon or nouveau (nvidia) drivers**
- The `newStdcpp` parameter was removed since NixOS 17.09 and should not be needed anymore.
- Steam ships statically linked with a version of `libcrypto` that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error:
```
steam.sh: line 713: 7842 Segmentation fault (core dumped)
```
have a look at [this pull request](https://github.com/NixOS/nixpkgs/pull/20269).
- **Java**
1. There is no java in steam chrootenv by default. If you get a message like:
```
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
```
you need to add:
```nix
steam.override { withJava = true; };
```
## steam-run {#sec-steam-run}
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with:
```
steam-run ./foo
```

View File

@@ -0,0 +1,131 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-steam">
<title>Steam</title>
<section xml:id="sec-steam-nix">
<title>Steam in Nix</title>
<para>
Steam is distributed as a <filename>.deb</filename> file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called <filename>steam</filename> that in Ubuntu (their target distro) would go to <filename>/usr/bin</filename>. When run for the first time, this script copies some files to the user's home, which include another script that is ultimately responsible for launching the steam binary, which is also in $HOME.
</para>
<para>
Nix problems and constraints:
<itemizedlist>
<listitem>
<para>
We don't have <filename>/bin/bash</filename> and many scripts point there. Similarly for <filename>/usr/bin/python</filename> .
</para>
</listitem>
<listitem>
<para>
We don't have the dynamic loader in <filename>/lib </filename>.
</para>
</listitem>
<listitem>
<para>
The <filename>steam.sh</filename> script in $HOME can not be patched, as it is checked and rewritten by steam.
</para>
</listitem>
<listitem>
<para>
The steam binary cannot be patched, it's also checked.
</para>
</listitem>
</itemizedlist>
</para>
<para>
The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented <link xlink:href="http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html">here</link>. This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment.
</para>
</section>
<section xml:id="sec-steam-play">
<title>How to play</title>
<para>
For 64-bit systems it's important to have
<programlisting>hardware.opengl.driSupport32Bit = true;</programlisting>
in your <filename>/etc/nixos/configuration.nix</filename>. You'll also need
<programlisting>hardware.pulseaudio.support32Bit = true;</programlisting>
if you are using PulseAudio - this will enable 32bit ALSA apps integration. To use the Steam controller or other Steam supported controllers such as the DualShock 4 or Nintendo Switch Pro, you need to add
<programlisting>hardware.steam-hardware.enable = true;</programlisting>
to your configuration.
</para>
</section>
<section xml:id="sec-steam-troub">
<title>Troubleshooting</title>
<para>
<variablelist>
<varlistentry>
<term>
Steam fails to start. What do I do?
</term>
<listitem>
<para>
Try to run
<programlisting>strace steam</programlisting>
to see what is causing steam to fail.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
Using the FOSS Radeon or nouveau (nvidia) drivers
</term>
<listitem>
<itemizedlist>
<listitem>
<para>
The <literal>newStdcpp</literal> parameter was removed since NixOS 17.09 and should not be needed anymore.
</para>
</listitem>
<listitem>
<para>
Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error
<programlisting>steam.sh: line 713: 7842 Segmentation fault (core dumped)</programlisting>
have a look at <link xlink:href="https://github.com/NixOS/nixpkgs/pull/20269">this pull request</link>.
</para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
<varlistentry>
<term>
Java
</term>
<listitem>
<orderedlist>
<listitem>
<para>
There is no java in steam chrootenv by default. If you get a message like
<programlisting>/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found</programlisting>
you need to add
<programlisting> steam.override { withJava = true; };</programlisting>
to your configuration.
</para>
</listitem>
</orderedlist>
</listitem>
</varlistentry>
</variablelist>
</para>
</section>
<section xml:id="sec-steam-run">
<title>steam-run</title>
<para>
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect an FHS environment. To do this, add
<programlisting>pkgs.(steam.override {
nativeOnly = true;
newStdcpp = true;
}).run</programlisting>
to your configuration, rebuild, and run the game with
<programlisting>steam-run ./foo</programlisting>
</para>
</section>
</section>

View File

@@ -0,0 +1,13 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="unfree-software">
<title>Unfree software</title>
<para>
All users of Nixpkgs are free software users, and many users (and developers) of Nixpkgs want to limit and tightly control their exposure to unfree software. At the same time, many users need (or want) to run some specific pieces of proprietary software. Nixpkgs includes some expressions for unfree software packages. By default unfree software cannot be installed and doesn't show up in searches. To allow installing unfree software in a single Nix invocation one can export <literal>NIXPKGS_ALLOW_UNFREE=1</literal>. For a persistent solution, users can set <literal>allowUnfree</literal> in the Nixpkgs configuration.
</para>
<para>
Fine-grained control is possible by defining an <literal>allowUnfreePredicate</literal> function in the config; it takes the <literal>mkDerivation</literal> parameter attrset and returns <literal>true</literal> for unfree packages that should be allowed.
</para>
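<para>
  As a sketch, such a predicate could allow packages by name; the package names below are only placeholders:
  <programlisting>{
  allowUnfreePredicate = pkg: builtins.elem (pkg.pname or "") [ "steam" "steam-original" ];
}</programlisting>
</para>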
</section>

View File

@@ -1,71 +0,0 @@
# Urxvt {#sec-urxvt}
Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
## Configuring urxvt {#sec-urxvt-conf}
In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as:
```nix
rxvt-unicode.override {
configure = { availablePlugins, ... }: {
plugins = with availablePlugins; [ perls resize-font vtwheel ];
};
}
```
If the `configure` function returns an attrset without the `plugins` attribute, `availablePlugins` will be used automatically.
In order to add plugins but also keep all default plugins installed, it is possible to use the following method:
```nix
rxvt-unicode.override {
configure = { availablePlugins, ... }: {
plugins = (builtins.attrValues availablePlugins) ++ [ custom-plugin ];
};
}
```
To get a list of all the plugins available, open the Nix REPL and run
```ShellSession
$ nix repl
:l <nixpkgs>
map (p: p.name) pkgs.rxvt-unicode.plugins
```
Alternatively, if your shell is bash or zsh and has completion enabled, simply type `nixpkgs.rxvt-unicode.plugins.<tab>`.
In addition to `plugins`, the options `extraDeps` and `perlDeps` can be used to install extra packages. `extraDeps` can be used, for example, to provide `xsel` (a clipboard manager) to the clipboard plugin, without installing it globally:
```nix
rxvt-unicode.override {
configure = { availablePlugins, ... }: {
pluginsDeps = [ xsel ];
};
}
```
`perlDeps` is a handy way to provide Perl packages to your custom plugins (in `$HOME/.urxvt/ext`). For example, if you need `AnyEvent` you can do:
```nix
rxvt-unicode.override {
configure = { availablePlugins, ... }: {
perlDeps = with perlPackages; [ AnyEvent ];
};
}
```
## Packaging urxvt plugins {#sec-urxvt-pkg}
Urxvt plugins reside in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
A plugin can be any kind of derivation; the only requirement is that it always installs Perl scripts in `$out/lib/urxvt/perl`. Look for existing plugins for examples.
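A hedged sketch of such a derivation follows; the plugin name, source URL, and installed script are hypothetical:

```nix
{ stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "urxvt-my-plugin"; # hypothetical
  version = "1.0";

  src = fetchurl {
    url = "https://example.org/urxvt-my-plugin-1.0.tar.gz"; # placeholder
    sha256 = "...";
  };

  # The only hard requirement: install the Perl script(s) under $out/lib/urxvt/perl.
  installPhase = ''
    mkdir -p $out/lib/urxvt/perl
    cp my-plugin $out/lib/urxvt/perl/
  '';
}
```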
If the plugin is itself a Perl package that needs to be imported from other plugins or scripts, add the following passthrough:
```nix
passthru.perlPackages = [ "self" ];
```
This will make the urxvt wrapper pick up the dependency and set up the Perl path accordingly.

View File

@@ -0,0 +1,101 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-urxvt">
<title>Urxvt</title>
<para>
Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
</para>
<section xml:id="sec-urxvt-conf">
<title>Configuring urxvt</title>
<para>
In <literal>nixpkgs</literal>, urxvt is provided by the package
<literal>rxvt-unicode</literal>. It can be configured to include your choice
of plugins, reducing its closure size from the default configuration which
includes all available plugins. To make use of this functionality, use an
overlay or directly install an expression that overrides its configuration,
such as
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
plugins = with availablePlugins; [ perls resize-font vtwheel ];
}
}</programlisting>
If the <literal>configure</literal> function returns an attrset without the
<literal>plugins</literal> attribute, <literal>availablePlugins</literal>
will be used automatically.
</para>
<para>
In order to add plugins but also keep all default plugins installed, it is
possible to use the following method:
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
plugins = (builtins.attrValues availablePlugins) ++ [ custom-plugin ];
};
}</programlisting>
</para>
<para>
To get a list of all the plugins available, open the Nix REPL and run
<programlisting>$ nix repl
:l &lt;nixpkgs&gt;
map (p: p.name) pkgs.rxvt-unicode.plugins
</programlisting>
Alternatively, if your shell is bash or zsh and has completion enabled,
simply type <literal>nixpkgs.rxvt-unicode.plugins.&lt;tab&gt;</literal>.
</para>
<para>
In addition to <literal>plugins</literal> the options
<literal>extraDeps</literal> and <literal>perlDeps</literal> can be used
to install extra packages.
<literal>extraDeps</literal> can be used, for example, to provide
<literal>xsel</literal> (a clipboard manager) to the clipboard plugin,
without installing it globally:
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
pluginsDeps = [ xsel ];
}
}</programlisting>
<literal>perlDeps</literal> is a handy way to provide Perl packages to
your custom plugins (in <literal>$HOME/.urxvt/ext</literal>). For example,
if you need <literal>AnyEvent</literal> you can do:
<programlisting>rxvt-unicode.override { configure = { availablePlugins, ... }: {
perlDeps = with perlPackages; [ AnyEvent ];
}
}</programlisting>
</para>
</section>
<section xml:id="sec-urxvt-pkg">
<title>Packaging urxvt plugins</title>
<para>
Urxvt plugins reside in
<literal>pkgs/applications/misc/rxvt-unicode-plugins</literal>.
To add a new plugin, create an expression in a subdirectory and add the
package to the set in
<literal>pkgs/applications/misc/rxvt-unicode-plugins/default.nix</literal>.
</para>
<para>
A plugin can be any kind of derivation; the only requirement is that it
should always install perl scripts in <literal>$out/lib/urxvt/perl</literal>.
Look for existing plugins for examples.
</para>
<para>
If the plugin is itself a perl package that needs to be imported from
other plugins or scripts, add the following passthrough:
<programlisting>passthru.perlPackages = [ "self" ];
</programlisting>
This will make the urxvt wrapper pick up the dependency and set up the perl
path accordingly.
</para>
</section>
</section>

View File

@@ -1,85 +0,0 @@
# WeeChat {#sec-weechat}
WeeChat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration, such as:
```nix
weechat.override {configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [ python perl ];
}
}
```
If the `configure` function returns an attrset without the `plugins` attribute, `availablePlugins` will be used automatically.
The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`.
The Python and Perl plugins allow the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
```nix
weechat.override { configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [
(python.withPackages (ps: with ps; [ pycrypto python-dbus ]))
];
};
}
```
In order to also keep all default plugins installed, it is possible to use the following method:
```nix
weechat.override { configure = { availablePlugins, ... }: {
plugins = builtins.attrValues (availablePlugins // {
python = availablePlugins.python.withPackages (ps: with ps; [ pycrypto python-dbus ]);
});
}; }
```
WeeChat allows setting defaults on startup using the `--run-command` option. The `configure` method can be used to pass commands to the program:
```nix
weechat.override {
configure = { availablePlugins, ... }: {
init = ''
/set foo bar
/server add libera irc.libera.chat
'';
};
}
```
Further values can be added to the list of commands when running `weechat --run-command "your-commands"`.
Additionally, it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
```nix
weechat.override {
configure = { availablePlugins, ... }: {
scripts = with pkgs.weechatScripts; [
weechat-xmpp weechat-matrix-bridge wee-slack
];
init = ''
/set plugins.var.python.jabber.key "val"
'';
};
}
```
In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute, which contains a list of all scripts inside the store path. Furthermore, all scripts have to live in `$out/share`. An exemplary derivation looks like this:
```nix
{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "exemplary-weechat-script";
src = fetchurl {
url = "https://scripts.tld/your-scripts.tar.gz";
sha256 = "...";
};
passthru.scripts = [ "foo.py" "bar.lua" ];
installPhase = ''
mkdir $out/share
cp foo.py $out/share
cp bar.lua $out/share
'';
}
```

View File

@@ -0,0 +1,85 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-weechat">
<title>Weechat</title>
<para>
Weechat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration such as
<programlisting>weechat.override {configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [ python perl ];
}
}</programlisting>
If the <literal>configure</literal> function returns an attrset without the <literal>plugins</literal> attribute, <literal>availablePlugins</literal> will be used automatically.
</para>
<para>
The plugins currently available are <literal>python</literal>, <literal>perl</literal>, <literal>ruby</literal>, <literal>guile</literal>, <literal>tcl</literal> and <literal>lua</literal>.
</para>
<para>
The python and perl plugins allow the addition of extra libraries. For instance, the <literal>inotify.py</literal> script in weechat-scripts requires D-Bus or libnotify, and the <literal>fish.py</literal> script requires pycrypto. To use these scripts, use the plugin's <literal>withPackages</literal> attribute:
<programlisting>weechat.override { configure = {availablePlugins, ...}: {
plugins = with availablePlugins; [
(python.withPackages (ps: with ps; [ pycrypto python-dbus ]))
];
};
}
</programlisting>
</para>
<para>
In order to also keep all default plugins installed, it is possible to use the following method:
<programlisting>weechat.override { configure = { availablePlugins, ... }: {
plugins = builtins.attrValues (availablePlugins // {
python = availablePlugins.python.withPackages (ps: with ps; [ pycrypto python-dbus ]);
});
}; }
</programlisting>
</para>
<para>
WeeChat allows setting defaults on startup using the <literal>--run-command</literal> option. The <literal>configure</literal> method can be used to pass commands to the program:
<programlisting>weechat.override {
configure = { availablePlugins, ... }: {
init = ''
/set foo bar
/server add freenode chat.freenode.org
'';
};
}</programlisting>
Further values can be added to the list of commands when running <literal>weechat --run-command "your-commands"</literal>.
</para>
<para>
Additionally it's possible to specify scripts to be loaded when starting <literal>weechat</literal>. These will be loaded before the commands from <literal>init</literal>:
<programlisting>weechat.override {
configure = { availablePlugins, ... }: {
scripts = with pkgs.weechatScripts; [
weechat-xmpp weechat-matrix-bridge wee-slack
];
init = ''
/set plugins.var.python.jabber.key "val"
'';
};
}</programlisting>
</para>
<para>
In <literal>nixpkgs</literal> there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a <literal>passthru.scripts</literal> attribute which contains a list of all scripts inside the store path. Furthermore all scripts have to live in <literal>$out/share</literal>. An exemplary derivation looks like this:
<programlisting>{ stdenv, fetchurl }:
stdenv.mkDerivation {
name = "exemplary-weechat-script";
src = fetchurl {
url = "https://scripts.tld/your-scripts.tar.gz";
sha256 = "...";
};
passthru.scripts = [ "foo.py" "bar.lua" ];
installPhase = ''
mkdir $out/share
cp foo.py $out/share
cp bar.lua $out/share
'';
}</programlisting>
</para>
</section>

View File

@@ -1,34 +0,0 @@
# X.org {#sec-xorg}
The Nix expressions for the X.org packages reside in `pkgs/servers/x11/xorg/default.nix`. This file is automatically generated from lists of tarballs in an X.org release. As such it should not be modified directly; rather, you should modify the lists, the generator script or the file `pkgs/servers/x11/xorg/overrides.nix`, in which you can override or add to the derivations produced by the generator.
## Katamari Tarballs {#katamari-tarballs}
X.org upstream releases used to include [katamari](https://en.wiktionary.org/wiki/%E3%81%8B%E3%81%9F%E3%81%BE%E3%82%8A) releases, which included a holistic recommended version for each tarball, up until 7.7. To create a list of tarballs in a katamari release:
```ShellSession
export release="X11R7.7"
export url="mirror://xorg/$release/src/everything/"
cat $(PRINT_PATH=1 nix-prefetch-url $url | tail -n 1) \
| perl -e 'while (<>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'url'}$2\n"; }; }' \
| sort > "tarballs-$release.list"
```
## Individual Tarballs {#individual-tarballs}
The upstream release process for [X11R7.8](https://x.org/wiki/Releases/7.8/) does not include a planned katamari. Instead, each component of X.org is released as its own tarball. We maintain `pkgs/servers/x11/xorg/tarballs.list` as a list of tarballs for each individual package. This list includes X.org core libraries and protocol descriptions, extra newer X11 interface libraries, like `xorg.libxcb`, and classic utilities which are largely unused but still available if needed, like `xorg.imake`.
## Generating Nix Expressions {#generating-nix-expressions}
The generator is invoked as follows:
```ShellSession
cd pkgs/servers/x11/xorg
<tarballs.list perl ./generate-expr-from-tarballs.pl
```
For each of the tarballs in the `.list` files, the script downloads it, unpacks it, and searches its `configure.ac` and `*.pc.in` files for dependencies. This information is used to generate `default.nix`. The generator caches downloaded tarballs between runs. Pay close attention to the `NOT FOUND: $NAME` messages at the end of the run, since they may indicate missing dependencies. (Some might be optional dependencies, however.)
## Overriding the Generator {#overriding-the-generator}
If the expression for a package requires derivation attributes that the generator cannot figure out automatically (say, `patches` or a `postInstall` hook), you should modify `pkgs/servers/x11/xorg/overrides.nix`.
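Conceptually, each override amends the generated derivation much as an `overrideAttrs` call would. As a hedged illustration only (the patch file is hypothetical, and `overrides.nix` has its own internal structure), the effect is comparable to:

```nix
xorg.libX11.overrideAttrs (old: {
  patches = (old.patches or []) ++ [ ./fix-something.patch ]; # hypothetical patch
})
```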

View File

@@ -0,0 +1,34 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xml:id="sec-xorg">
<title>X.org</title>
<para>
The Nix expressions for the X.org packages reside in <filename>pkgs/servers/x11/xorg/default.nix</filename>. This file is automatically generated from lists of tarballs in an X.org release. As such it should not be modified directly; rather, you should modify the lists, the generator script or the file <filename>pkgs/servers/x11/xorg/overrides.nix</filename>, in which you can override or add to the derivations produced by the generator.
</para>
<para>
The generator is invoked as follows:
<screen>
<prompt>$ </prompt>cd pkgs/servers/x11/xorg
<prompt>$ </prompt>cat tarballs-7.5.list extra.list old.list \
| perl ./generate-expr-from-tarballs.pl
</screen>
For each of the tarballs in the <filename>.list</filename> files, the script downloads it, unpacks it, and searches its <filename>configure.ac</filename> and <filename>*.pc.in</filename> files for dependencies. This information is used to generate <filename>default.nix</filename>. The generator caches downloaded tarballs between runs. Pay close attention to the <literal>NOT FOUND: <replaceable>name</replaceable></literal> messages at the end of the run, since they may indicate missing dependencies. (Some might be optional dependencies, however.)
</para>
<para>
A file like <filename>tarballs-7.5.list</filename> contains all tarballs in a X.org release. It can be generated like this:
<screen>
<prompt>$ </prompt>export i="mirror://xorg/X11R7.4/src/everything/"
<prompt>$ </prompt>cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
| perl -e 'while (&lt;>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \
| sort > tarballs-7.4.list
</screen>
<filename>extra.list</filename> contains libraries that aren't part of X.org proper, but are closely related to it, such as <literal>libxcb</literal>. <filename>old.list</filename> contains some packages that were removed from X.org, but are still needed by some people or by other packages (such as <varname>imake</varname>).
</para>
<para>
If the expression for a package requires derivation attributes that the generator cannot figure out automatically (say, <varname>patches</varname> or a <varname>postInstall</varname> hook), you should modify <filename>pkgs/servers/x11/xorg/overrides.nix</filename>.
</para>
</section>

View File

@@ -5,6 +5,6 @@
<para>
This chapter describes several special builders.
</para>
<xi:include href="special/fhs-environments.section.xml" />
<xi:include href="special/mkshell.section.xml" />
<xi:include href="special/fhs-environments.xml" />
<xi:include href="special/mkshell.xml" />
</chapter>

View File

@@ -1,49 +0,0 @@
# buildFHSUserEnv {#sec-fhs-environments}
`buildFHSUserEnv` provides a way to build and run FHS-compatible lightweight sandboxes. It creates an isolated root with bound `/nix/store`, so its footprint in terms of disk space needed is quite small. This allows one to run software which is hard or unfeasible to patch for NixOS -- 3rd-party source trees with FHS assumptions, games distributed as tarballs, software with integrity checking and/or external self-updated binaries. It uses the Linux namespaces feature to create temporary lightweight environments, which are destroyed after all child processes exit, without requiring root privileges. Accepted arguments are:
- `name`
Environment name.
- `targetPkgs`
Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
- `multiPkgs`
Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
- `extraBuildCommands`
Additional commands to be executed for finalizing the directory structure.
- `extraBuildCommandsMulti`
Like `extraBuildCommands`, but executed only on multilib architectures.
- `extraOutputsToInstall`
Additional derivation outputs to be linked for both target and multi-architecture packages.
- `extraInstallCommands`
Additional commands to be executed for finalizing the derivation with runner script.
- `runScript`
A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to `bash`.
- `profile`
Optional script for `/etc/profile` within the sandbox.
One can create a simple environment using a `shell.nix` like that:
```nix
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsa-lib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
libXrandr
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsa-lib
]);
runScript = "bash";
}).env
```
Running `nix-shell` would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change `runScript` to the application path, e.g. `./bin/start.sh` -- relative paths are supported.
Additionally, the FHS builder links all relocated gsettings-schemas (the glib setup-hook moves them to `share/gsettings-schemas/${name}/glib-2.0/schemas`) to their standard FHS location. This means you don't need to wrap binaries with `wrapGAppsHook`.

View File

@@ -0,0 +1,122 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-fhs-environments">
<title>buildFHSUserEnv</title>
<para>
<function>buildFHSUserEnv</function> provides a way to build and run FHS-compatible lightweight sandboxes. It creates an isolated root with bound <filename>/nix/store</filename>, so its footprint in terms of disk space needed is quite small. This allows one to run software which is hard or unfeasible to patch for NixOS -- 3rd-party source trees with FHS assumptions, games distributed as tarballs, software with integrity checking and/or external self-updated binaries. It uses the Linux namespaces feature to create temporary lightweight environments, which are destroyed after all child processes exit, without requiring root privileges. Accepted arguments are:
</para>
<variablelist>
<varlistentry>
<term>
<literal>name</literal>
</term>
<listitem>
<para>
Environment name.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>targetPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>multiPkgs</literal>
</term>
<listitem>
<para>
Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraBuildCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the directory structure.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraBuildCommandsMulti</literal>
</term>
<listitem>
<para>
Like <literal>extraBuildCommands</literal>, but executed only on multilib architectures.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraOutputsToInstall</literal>
</term>
<listitem>
<para>
Additional derivation outputs to be linked for both target and multi-architecture packages.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>extraInstallCommands</literal>
</term>
<listitem>
<para>
Additional commands to be executed for finalizing the derivation with runner script.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<literal>runScript</literal>
</term>
<listitem>
<para>
A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to <literal>bash</literal>.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
One can create a simple environment using a <literal>shell.nix</literal> like that:
</para>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
libXrandr
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]);
runScript = "bash";
}).env
]]></programlisting>
<para>
Running <literal>nix-shell</literal> would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change <literal>runScript</literal> to the application path, e.g. <filename>./bin/start.sh</filename> -- relative paths are supported.
</para>
</section>

View File

@@ -1,37 +0,0 @@
# pkgs.mkShell {#sec-pkgs-mkShell}
`pkgs.mkShell` is a specialized `stdenv.mkDerivation` that removes some
repetition when using it with `nix-shell` (or `nix develop`).
## Usage {#sec-pkgs-mkShell-usage}
Here is a common usage example:
```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
packages = [ pkgs.gnumake ];
inputsFrom = [ pkgs.hello pkgs.gnutar ];
shellHook = ''
export DEBUG=1
'';
}
```
## Attributes
* `name` (default: `nix-shell`). Set the name of the derivation.
* `packages` (default: `[]`). Add executable packages to the `nix-shell` environment.
* `inputsFrom` (default: `[]`). Add build dependencies of the listed derivations to the `nix-shell` environment.
* `shellHook` (default: `""`). Bash statements that are executed by `nix-shell`.
... all the attributes of `stdenv.mkDerivation`.
## Building the shell
The derivation output contains a text file with references to all the build
inputs. This is useful in CI, where we want to make sure that every derivation
and its dependencies build properly, or when creating a GC root so that the
build dependencies don't get garbage-collected.
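As a hedged sketch (the `shell.nix` is the one from the usage example above; the link name is arbitrary), such a GC root can be created with:
```ShellSession
$ nix-build shell.nix -o dev-shell-gcroot
$ cat dev-shell-gcroot
```
The `-o` symlink is registered as a GC root, so the referenced build inputs stay alive across garbage collections.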
@@ -0,0 +1,24 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-mkShell">
<title>pkgs.mkShell</title>
<para>
<function>pkgs.mkShell</function> is a special kind of derivation that is only useful when used in combination with <command>nix-shell</command>. It will in fact fail to instantiate when invoked with <command>nix-build</command>.
</para>
<section xml:id="sec-pkgs-mkShell-usage">
<title>Usage</title>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
# this will make all the build inputs from hello and gnutar
# available to the shell environment
inputsFrom = with pkgs; [ hello gnutar ];
buildInputs = [ pkgs.gnumake ];
}
]]></programlisting>
</section>
</section>
@@ -1,128 +0,0 @@
# Testers {#chap-testers}
This chapter describes several testing builders which are available in the `testers` namespace.
## `testVersion` {#tester-testVersion}
Checks that the command output contains the specified version.
Although simplistic, this test assures that the main program
can run. While there's no substitute for a real test case,
it does catch dynamic linking errors and such. It also provides
some protection against accidentally building the wrong version,
for example when using an 'old' hash in a fixed-output derivation.
Examples:
```nix
passthru.tests.version = testVersion { package = hello; };
passthru.tests.version = testVersion {
package = seaweedfs;
command = "weed version";
};
passthru.tests.version = testVersion {
package = key;
command = "KeY --help";
# Wrong '2.5' version in the code. Drop on next version.
version = "2.5";
};
```
## `testEqualDerivation` {#tester-testEqualDerivation}
Checks that two packages produce the exact same build instructions.
This can be used to make sure that a certain difference of configuration,
such as the presence of an overlay, does not cause a cache miss.
When the derivations are equal, the return value is an empty file.
Otherwise, the build log explains the difference via `nix-diff`.
Example:
```nix
testEqualDerivation
"The hello package must stay the same when enabling checks."
hello
(hello.overrideAttrs(o: { doCheck = true; }))
```
## `invalidateFetcherByDrvHash` {#tester-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch every time the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:
```nix
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
name = "nix-source";
url = "https://github.com/NixOS/nix";
rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```
## `nixosTest` {#tester-nixosTest}
Run a NixOS VM network test using this evaluation of Nixpkgs.
NOTE: This function is primarily for external use. NixOS itself uses `make-test-python.nix` directly. Packages defined in Nixpkgs [reuse NixOS tests via `nixosTests`, plural](#ssec-nixos-tests-linking).
It is mostly equivalent to the function `import ./make-test-python.nix` from the
[NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests),
except that the current application of Nixpkgs (`pkgs`) will be used, instead of
letting NixOS invoke Nixpkgs anew.
If a test machine needs to set NixOS options under `nixpkgs`, it must set only the
`nixpkgs.pkgs` option.
### Parameter
A [NixOS VM test network](https://nixos.org/nixos/manual/index.html#sec-nixos-tests), or path to it. Example:
```nix
{
name = "my-test";
nodes = {
machine1 = { lib, pkgs, nodes, ... }: {
environment.systemPackages = [ pkgs.hello ];
services.foo.enable = true;
};
# machine2 = ...;
};
testScript = ''
start_all()
machine1.wait_for_unit("foo.service")
machine1.succeed("hello | foo-send")
'';
}
```
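As a hedged sketch of how such a network might be passed to the tester from an external project (the `testers.nixosTest` attribute path and the trivial `hello` check are assumptions; adjust them to your setup):
```nix
let
  pkgs = import <nixpkgs> { };
in
pkgs.testers.nixosTest {
  name = "hello-smoke-test";
  nodes.machine1 = { pkgs, ... }: {
    environment.systemPackages = [ pkgs.hello ];
  };
  testScript = ''
    start_all()
    machine1.succeed("hello")
  '';
}
```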
### Result
A derivation that runs the VM test.
Notable attributes:
* `nodes`: the evaluated NixOS configurations. Useful for debugging and exploring the configuration.
* `driverInteractive`: a script that launches an interactive Python session in the context of the `testScript`.
@@ -1,223 +0,0 @@
# Trivial builders {#chap-trivial-builders}
Nixpkgs provides a couple of functions that help with building derivations. The most important one, `stdenv.mkDerivation`, has already been documented above. The following functions wrap `stdenv.mkDerivation`, making it easier to use in certain cases.
## `runCommand` {#trivial-builder-runCommand}
This takes three arguments, `name`, `env`, and `buildCommand`. `name` is just the name that Nix will append to the store path in the same way that `stdenv.mkDerivation` uses its `name` attribute. `env` is an attribute set specifying environment variables that will be set for this derivation. These attributes are then passed to the wrapped `stdenv.mkDerivation`. `buildCommand` specifies the commands that will be run to create this derivation. Note that you will need to create `$out` for Nix to register the command as successful.
An example of using `runCommand` is provided below.
```nix
(import <nixpkgs> {}).runCommand "my-example" {} ''
echo My example command is running
mkdir $out
echo I can write data to the Nix store > $out/message
echo I can also run basic commands like:
echo ls
ls
echo whoami
whoami
echo date
date
''
```
## `runCommandCC` {#trivial-builder-runCommandCC}
This works just like `runCommand`. The only difference is that it also provides a C compiler in `buildCommand`'s environment. To minimize your dependencies, you should only use this if you are sure you will need a C compiler as part of running your command.
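For instance, a minimal sketch (the program and output names are arbitrary, and it assumes the compiler is exposed as `$CC`, as is usual in stdenv builds):
```nix
runCommandCC "trivial-c-program" {} ''
  echo 'int main(void) { return 0; }' > main.c
  # $CC is the C compiler wrapper provided in runCommandCC's environment
  $CC main.c -o trivial
  mkdir -p $out/bin
  cp trivial $out/bin/trivial-c-program
''
```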
## `runCommandLocal` {#trivial-builder-runCommandLocal}
Variant of `runCommand` that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build.
::: {.note}
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g., just copying some files to a different location or adding symlinks) because there the `system` is usually the same as `builtins.currentSystem`.
:::
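A minimal sketch (the local `./app.conf` file is hypothetical):
```nix
runCommandLocal "install-app-conf" {} ''
  mkdir -p $out/etc
  # Interpolating the path copies app.conf into the store and substitutes its store path
  cp ${./app.conf} $out/etc/app.conf
''
```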
## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText}
These functions write `text` to the Nix store. This is useful for creating scripts from Nix expressions. `writeTextFile` takes an attribute set and expects two arguments, `name` and `text`. `name` corresponds to the name used in the Nix store path. `text` will be the contents of the file. You can also set `executable` to true to make this file have the executable bit set.
Many more commands wrap `writeTextFile` including `writeText`, `writeTextDir`, `writeScript`, and `writeScriptBin`. These are convenience functions over `writeTextFile`.
Here are a few examples:
```nix
# Writes my-file to /nix/store/<store path>
writeTextFile {
name = "my-file";
text = ''
Contents of File
'';
}
# See also the `writeText` helper function below.
# Writes executable my-file to /nix/store/<store path>/bin/my-file
writeTextFile {
name = "my-file";
text = ''
Contents of File
'';
executable = true;
destination = "/bin/my-file";
}
# Writes contents of file to /nix/store/<store path>
writeText "my-file"
''
Contents of File
'';
# Writes contents of file to /nix/store/<store path>/share/my-file
writeTextDir "share/my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path> and makes executable
writeScript "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
writeScriptBin "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path> and makes executable.
writeShellScript "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
writeShellScriptBin "my-file"
''
Contents of File
'';
```
## `concatTextFile`, `concatText`, `concatScript` {#trivial-builder-concatText}
These functions concatenate `files` into a single file in the Nix store. This is useful for configuration files structured in lines of text. `concatTextFile` takes an attribute set and expects two arguments, `name` and `files`. `name` corresponds to the name used in the Nix store path. `files` is the list of files to be concatenated. You can also set `executable` to true to make this file have the executable bit set.
`concatText` and `concatScript` are simple wrappers over `concatTextFile`.
Here are a few examples:
```nix
# Writes my-file to /nix/store/<store path>
concatTextFile {
name = "my-file";
files = [ drv1 "${drv2}/path/to/file" ];
}
# See also the `concatText` helper function below.
# Writes executable my-file to /nix/store/<store path>/bin/my-file
concatTextFile {
name = "my-file";
files = [ drv1 "${drv2}/path/to/file" ];
executable = true;
destination = "/bin/my-file";
}
# Writes contents of files to /nix/store/<store path>
concatText "my-file" [ file1 file2 ]
# Writes contents of files to /nix/store/<store path>
concatScript "my-file" [ file1 file2 ]
```
## `writeShellApplication` {#trivial-builder-writeShellApplication}
This can be used to easily produce a shell script that has some dependencies (`runtimeInputs`). It automatically sets the `PATH` of the script to contain all of the listed inputs, sets some sanity shellopts (`errexit`, `nounset`, `pipefail`), and checks the resulting script with [`shellcheck`](https://github.com/koalaman/shellcheck).
For example, look at the following code:
```nix
writeShellApplication {
name = "show-nixos-org";
runtimeInputs = [ curl w3m ];
text = ''
curl -s 'https://nixos.org' | w3m -dump -T text/html
'';
}
```
Unlike with plain `writeShellScriptBin`, there is no need to manually write out `${curl}/bin/curl`; setting the `PATH`
is handled by `writeShellApplication`. Moreover, the script is checked with `shellcheck` for stricter
validation.
## `symlinkJoin` {#trivial-builder-symlinkJoin}
This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, `name`, and `paths`. `name` is the name used in the Nix store path for the created derivation. `paths` is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
Here is an example:
```nix
# adds symlinks of hello and stack to current build and prints "links added"
symlinkJoin { name = "myexample"; paths = [ pkgs.hello pkgs.stack ]; postBuild = "echo links added"; }
```
This creates a derivation with a directory structure like the following:
```
/nix/store/sglsr5g079a5235hy29da3mq3hv8sjmm-myexample
|-- bin
| |-- hello -> /nix/store/qy93dp4a3rqyn2mz63fbxjg228hffwyw-hello-2.10/bin/hello
| `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/bin/stack
`-- share
|-- bash-completion
| `-- completions
| `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/bash-completion/completions/stack
|-- fish
| `-- vendor_completions.d
| `-- stack.fish -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/fish/vendor_completions.d/stack.fish
...
```
## `writeReferencesToFile` {#trivial-builder-writeReferencesToFile}
Writes the closure of transitive dependencies to a file.
This produces the equivalent of `nix-store -q --requisites`.
For example,
```nix
writeReferencesToFile (writeScriptBin "hi" ''${hello}/bin/hello'')
```
produces an output path `/nix/store/<hash>-runtime-deps` containing
```nix
/nix/store/<hash>-hello-2.10
/nix/store/<hash>-hi
/nix/store/<hash>-libidn2-2.3.0
/nix/store/<hash>-libunistring-0.9.10
/nix/store/<hash>-glibc-2.32-40
```
You can see that this includes `hi`, the original input path; `hello`, which is a direct reference; and also
the other paths that are indirectly required to run `hello`.
## `writeDirectReferencesToFile` {#trivial-builder-writeDirectReferencesToFile}
Writes the set of direct references of the given path to the output file, that is, its immediate dependencies.
This produces the equivalent of `nix-store -q --references`.
For example,
```nix
writeDirectReferencesToFile (writeScriptBin "hi" ''${hello}/bin/hello'')
```
produces an output path `/nix/store/<hash>-runtime-references` containing
```nix
/nix/store/<hash>-hello-2.10
```
but none of `hello`'s dependencies because those are not referenced directly
by `hi`'s output.
@@ -0,0 +1,90 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-trivial-builders">
<title>Trivial builders</title>
<para>
Nixpkgs provides a couple of functions that help with building derivations. The most important one, <function>stdenv.mkDerivation</function>, has already been documented above. The following functions wrap <function>stdenv.mkDerivation</function>, making it easier to use in certain cases.
</para>
<variablelist>
<varlistentry xml:id="trivial-builder-runCommand">
<term>
<literal>runCommand</literal>
</term>
<listitem>
<para>
This takes three arguments, <literal>name</literal>, <literal>env</literal>, and <literal>buildCommand</literal>. <literal>name</literal> is just the name that Nix will append to the store path in the same way that <literal>stdenv.mkDerivation</literal> uses its <literal>name</literal> attribute. <literal>env</literal> is an attribute set specifying environment variables that will be set for this derivation. These attributes are then passed to the wrapped <literal>stdenv.mkDerivation</literal>. <literal>buildCommand</literal> specifies the commands that will be run to create this derivation. Note that you will need to create <literal>$out</literal> for Nix to register the command as successful.
</para>
<para>
An example of using <literal>runCommand</literal> is provided below.
</para>
<programlisting>
(import &lt;nixpkgs&gt; {}).runCommand "my-example" {} ''
echo My example command is running
mkdir $out
echo I can write data to the Nix store > $out/message
echo I can also run basic commands like:
echo ls
ls
echo whoami
whoami
echo date
date
''
</programlisting>
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-runCommandCC">
<term>
<literal>runCommandCC</literal>
</term>
<listitem>
<para>
This works just like <literal>runCommand</literal>. The only difference is that it also provides a C compiler in <literal>buildCommand</literal>'s environment. To minimize your dependencies, you should only use this if you are sure you will need a C compiler as part of running your command.
</para>
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-runCommandLocal">
<term>
<literal>runCommandLocal</literal>
</term>
<listitem>
<para>
Variant of <literal>runCommand</literal> that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (&lt;1s execution time). It saves on the network round-trip and can speed up a build.
</para>
<note><para>
This sets <link xlink:href="https://nixos.org/nix/manual/#adv-attr-allowSubstitutes"><literal>allowSubstitutes</literal> to <literal>false</literal></link>, so only use <literal>runCommandLocal</literal> if you are certain the user will always have a builder for the <literal>system</literal> of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the <literal>system</literal> is usually the same as <literal>builtins.currentSystem</literal>.
</para></note>
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-writeText">
<term>
<literal>writeTextFile</literal>, <literal>writeText</literal>, <literal>writeTextDir</literal>, <literal>writeScript</literal>, <literal>writeScriptBin</literal>
</term>
<listitem>
<para>
These functions write <literal>text</literal> to the Nix store. This is useful for creating scripts from Nix expressions. <literal>writeTextFile</literal> takes an attribute set and expects two arguments, <literal>name</literal> and <literal>text</literal>. <literal>name</literal> corresponds to the name used in the Nix store path. <literal>text</literal> will be the contents of the file. You can also set <literal>executable</literal> to true to make this file have the executable bit set.
</para>
<para>
Many more commands wrap <literal>writeTextFile</literal> including <literal>writeText</literal>, <literal>writeTextDir</literal>, <literal>writeScript</literal>, and <literal>writeScriptBin</literal>. These are convenience functions over <literal>writeTextFile</literal>.
</para>
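<para>
  For example (the file name and contents are arbitrary):
</para>
<programlisting><![CDATA[
# Writes my-file to /nix/store/<store path>
writeTextFile {
  name = "my-file";
  text = ''
    Contents of File
  '';
}
# Writes executable my-file to /nix/store/<store path>/bin/my-file
writeScriptBin "my-file" ''
  Contents of File
''
]]></programlisting>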
</listitem>
</varlistentry>
<varlistentry xml:id="trivial-builder-symlinkJoin">
<term>
<literal>symlinkJoin</literal>
</term>
<listitem>
<para>
This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, <literal>name</literal>, and <literal>paths</literal>. <literal>name</literal> is the name used in the Nix store path for the created derivation. <literal>paths</literal> is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
</para>
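<para>
  A short sketch (the package choices are arbitrary) that links <literal>hello</literal> and <literal>stack</literal> into one directory tree:
</para>
<programlisting><![CDATA[
symlinkJoin {
  name = "myexample";
  paths = [ pkgs.hello pkgs.stack ];
  postBuild = "echo links added";
}
]]></programlisting>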
</listitem>
</varlistentry>
</variablelist>
</chapter>
@@ -1,660 +0,0 @@
# Coding conventions {#chap-conventions}
## Syntax {#sec-syntax}
- Use 2 spaces of indentation per indentation level in Nix expressions, 4 spaces in shell scripts.
- Do not use tab characters, i.e. configure your editor to use soft tabs. For instance, use `(setq-default indent-tabs-mode nil)` in Emacs. Everybody has different tab settings, so it's asking for trouble.
- Use `lowerCamelCase` for variable names, not `UpperCamelCase`. Note, this rule does not apply to package attribute names, which instead follow the rules in [](#sec-package-naming).
- Function calls with attribute set arguments are written as
```nix
foo {
arg = ...;
}
```
not
```nix
foo
{
arg = ...;
}
```
Also fine is
```nix
foo { arg = ...; }
```
if it's a short call.
- In attribute sets or lists that span multiple lines, the attribute names or list elements should be aligned:
```nix
# A long list.
list = [
elem1
elem2
elem3
];
# A long attribute set.
attrs = {
attr1 = short_expr;
attr2 =
if true then big_expr else big_expr;
};
# Combined
listOfAttrs = [
{
attr1 = 3;
attr2 = "fff";
}
{
attr1 = 5;
attr2 = "ggg";
}
];
```
- Short lists or attribute sets can be written on one line:
```nix
# A short list.
list = [ elem1 elem2 elem3 ];
# A short set.
attrs = { x = 1280; y = 1024; };
```
- Breaking in the middle of a function argument can give hard-to-read code, like
```nix
someFunction { x = 1280;
y = 1024; } otherArg
yetAnotherArg
```
(especially if the argument is very large, spanning multiple lines).
Better:
```nix
someFunction
{ x = 1280; y = 1024; }
otherArg
yetAnotherArg
```
or
```nix
let res = { x = 1280; y = 1024; };
in someFunction res otherArg yetAnotherArg
```
- The bodies of functions, asserts, and withs are not indented to prevent a lot of superfluous indentation levels, i.e.
```nix
{ arg1, arg2 }:
assert system == "i686-linux";
stdenv.mkDerivation { ...
```
not
```nix
{ arg1, arg2 }:
  assert system == "i686-linux";
  stdenv.mkDerivation { ...
```
- Function formal arguments are written as:
```nix
{ arg1, arg2, arg3 }:
```
but if they don't fit on one line they're written as:
```nix
{ arg1, arg2, arg3
, arg4, ...
, # Some comment...
  argN
}:
```
- Functions should list their expected arguments as precisely as possible. That is, write
```nix
{ stdenv, fetchurl, perl }: ...
```
instead of
```nix
args: with args; ...
```
or
```nix
{ stdenv, fetchurl, perl, ... }: ...
```
For functions that are truly generic in the number of arguments (such as wrappers around `mkDerivation`) that have some required arguments, you should write them using an `@`-pattern:
```nix
{ stdenv, doCoverageAnalysis ? false, ... } @ args:
stdenv.mkDerivation (args // {
... if doCoverageAnalysis then "bla" else "" ...
})
```
instead of
```nix
args:
args.stdenv.mkDerivation (args // {
... if args ? doCoverageAnalysis && args.doCoverageAnalysis then "bla" else "" ...
})
```
- Unnecessary string conversions should be avoided. Do
```nix
rev = version;
```
instead of
```nix
rev = "${version}";
```
- Building lists conditionally _should_ be done with `lib.optional(s)` instead of using `if cond then [ ... ] else null` or `if cond then [ ... ] else [ ]`.
```nix
buildInputs = lib.optional stdenv.isDarwin iconv;
```
instead of
```nix
buildInputs = if stdenv.isDarwin then [ iconv ] else null;
```
As an exception, an explicit conditional expression with null can be used when fixing an important bug without triggering a mass rebuild.
If this is done, a follow-up pull request _should_ be created to change the code to `lib.optional(s)`.
- Arguments should be listed in the order they are used, with the exception of `lib`, which always goes first (see the sketch below).
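For illustration, a sketch combining the conventions above (the package name, URL, and hash are placeholders):
```nix
# `lib` comes first; the remaining arguments follow the order in which they are used.
{ lib, stdenv, fetchurl, perl }:
stdenv.mkDerivation {
  pname = "example";          # hypothetical package
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/example-1.0.tar.gz";
    sha256 = lib.fakeSha256;  # placeholder hash
  };
  nativeBuildInputs = [ perl ];
}
```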
## Package naming {#sec-package-naming}
The key words _must_, _must not_, _required_, _shall_, _shall not_, _should_, _should not_, _recommended_, _may_, and _optional_ in this section are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119). Only _emphasized_ words are to be interpreted in this way.
In Nixpkgs, there are generally three different names associated with a package:
- The `name` attribute of the derivation (excluding the version part). This is what most users see, in particular when using `nix-env`.
- The variable name used for the instantiated package in `all-packages.nix`, and when passing it as a dependency to other functions. Typically this is called the _package attribute name_. This is what Nix expression authors see. It can also be used when installing using `nix-env -iA`.
- The filename for (the directory containing) the Nix expression.
Most of the time, these are the same. For instance, the package `e2fsprogs` has a `name` attribute `"e2fsprogs-version"`, is bound to the variable name `e2fsprogs` in `all-packages.nix`, and the Nix expression is in `pkgs/os-specific/linux/e2fsprogs/default.nix`.
There are a few naming guidelines:
- The `pname` attribute _should_ be identical to the upstream package name.
- The `pname` and the `version` attribute _must not_ contain uppercase letters — e.g., `"mplayer"` instead of `"MPlayer"`.
- The `version` attribute _must_ start with a digit, e.g. `"0.3.1rc2"`.
- If a package is not a release but a commit from a repository, then the `version` attribute _must_ be the date of that (fetched) commit. The date _must_ be in `"unstable-YYYY-MM-DD"` format.
- Dashes in the package `pname` _should_ be preserved in new variable names, rather than converted to underscores or camel cased — e.g., `http-parser` instead of `http_parser` or `httpParser`. The hyphenated style is preferred in all three package names.
- If there are multiple versions of a package, this _should_ be reflected in the variable names in `all-packages.nix`, e.g. `json-c_0_9` and `json-c_0_11`. If there is an obvious “default” version, make an attribute like `json-c = json-c_0_9;`, as sketched below. See also [](#sec-versioning).
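For illustration, the corresponding `all-packages.nix` entries might look as follows (the expression paths are hypothetical):
```nix
json-c_0_9  = callPackage ../development/libraries/json-c/0.9.nix  { };
json-c_0_11 = callPackage ../development/libraries/json-c/0.11.nix { };
# Expose an obvious default version, if there is one.
json-c = json-c_0_9;
```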
## File naming and organisation {#sec-organisation}
Names of files and directories should be in lowercase, with dashes between words — not in camel case. For instance, it should be `all-packages.nix`, not `allPackages.nix` or `AllPackages.nix`.
### Hierarchy {#sec-hierarchy}
Each package should be stored in its own directory somewhere in the `pkgs/` tree, i.e. in `pkgs/category/subcategory/.../pkgname`. Below are some rules for picking the right category for a package. Many packages fall under several categories; what matters is the _primary_ purpose of a package. For example, the `libxml2` package builds both a library and some tools; but it's a library foremost, so it goes under `pkgs/development/libraries`.
When in doubt, consider refactoring the `pkgs/` tree, e.g. creating new categories or splitting up an existing category.
**If it's used to support _software development_:**
- **If it's a _library_ used by other packages:**
  - `development/libraries` (e.g. `libxml2`)
- **If it's a _compiler_:**
  - `development/compilers` (e.g. `gcc`)
- **If it's an _interpreter_:**
  - `development/interpreters` (e.g. `guile`)
- **If it's a (set of) development _tool(s)_:**
  - **If it's a _parser generator_ (including lexers):**
    - `development/tools/parsing` (e.g. `bison`, `flex`)
  - **If it's a _build manager_:**
    - `development/tools/build-managers` (e.g. `gnumake`)
  - **Else:**
    - `development/tools/misc` (e.g. `binutils`)
- **Else:**
  - `development/misc`
**If it's a (set of) _tool(s)_:**
(A tool is a relatively small program, especially one intended to be used non-interactively.)
- **If it's for _networking_:**
  - `tools/networking` (e.g. `wget`)
- **If it's for _text processing_:**
  - `tools/text` (e.g. `diffutils`)
- **If it's a _system utility_, i.e., something related or essential to the operation of a system:**
  - `tools/system` (e.g. `cron`)
- **If it's an _archiver_ (which may include a compression function):**
  - `tools/archivers` (e.g. `zip`, `tar`)
- **If it's a _compression_ program:**
  - `tools/compression` (e.g. `gzip`, `bzip2`)
- **If it's a _security_-related program:**
  - `tools/security` (e.g. `nmap`, `gnupg`)
- **Else:**
  - `tools/misc`
**If it's a _shell_:**
- `shells` (e.g. `bash`)
**If it's a _server_:**
- **If it's a web server:**
  - `servers/http` (e.g. `apache-httpd`)
- **If it's an implementation of the X Windowing System:**
  - `servers/x11` (e.g. `xorg` — this includes the client libraries and programs)
- **Else:**
  - `servers/misc`
**If it's a _desktop environment_:**
- `desktops` (e.g. `kde`, `gnome`, `enlightenment`)
**If it's a _window manager_:**
- `applications/window-managers` (e.g. `awesome`, `stumpwm`)
**If it's an _application_:**
A (typically large) program with a distinct user interface, primarily used interactively.
- **If it's a _version management system_:**
  - `applications/version-management` (e.g. `subversion`)
- **If it's a _terminal emulator_:**
  - `applications/terminal-emulators` (e.g. `alacritty` or `rxvt` or `termite`)
- **If it's a _file manager_:**
  - `applications/file-managers` (e.g. `mc` or `ranger` or `pcmanfm`)
- **If it's for _video playback / editing_:**
  - `applications/video` (e.g. `vlc`)
- **If it's for _graphics viewing / editing_:**
  - `applications/graphics` (e.g. `gimp`)
- **If it's for _networking_:**
  - **If it's a _mailreader_:**
    - `applications/networking/mailreaders` (e.g. `thunderbird`)
  - **If it's a _newsreader_:**
    - `applications/networking/newsreaders` (e.g. `pan`)
  - **If it's a _web browser_:**
    - `applications/networking/browsers` (e.g. `firefox`)
  - **Else:**
    - `applications/networking/misc`
- **Else:**
  - `applications/misc`
**If it's _data_ (i.e., it does not have straightforward executable semantics):**
- **If it's a _font_:**
  - `data/fonts`
- **If it's an _icon theme_:**
  - `data/icons`
- **If it's related to _SGML/XML processing_:**
  - **If it's an _XML DTD_:**
    - `data/sgml+xml/schemas/xml-dtd` (e.g. `docbook`)
  - **If it's an _XSLT stylesheet_:**
    (Okay, these are executable...)
    - `data/sgml+xml/stylesheets/xslt` (e.g. `docbook-xsl`)
- **If it's a _theme_ for a _desktop environment_, a _window manager_ or a _display manager_:**
  - `data/themes`
**If it's a _game_:**
- `games`
**Else:**
- `misc`
### Versioning {#sec-versioning}
Because every version of a package in Nixpkgs creates a potential maintenance burden, old versions of a package should not be kept unless there is a good reason to do so. For instance, Nixpkgs contains several versions of GCC because other packages don't build with the latest version of GCC. Other examples are having both the latest stable and latest pre-release version of a package, or keeping several major releases of an application that differ significantly in functionality.
If there is only one version of a package, its Nix expression should be named `e2fsprogs/default.nix`. If there are multiple versions, this should be reflected in the filename, e.g. `e2fsprogs/1.41.8.nix` and `e2fsprogs/1.41.9.nix`. The version in the filename should leave out unnecessary detail. For instance, if we keep the latest Firefox 2.0.x and 3.5.x versions in Nixpkgs, they should be named `firefox/2.0.nix` and `firefox/3.5.nix`, respectively (which, at a given point, might contain versions `2.0.0.20` and `3.5.4`). If a version requires many auxiliary files, you can use a subdirectory for each version, e.g. `firefox/2.0/default.nix` and `firefox/3.5/default.nix`.
All versions of a package _must_ be included in `all-packages.nix` to make sure that they evaluate correctly.
## Fetching Sources {#sec-sources}
There are multiple ways to fetch a package source in nixpkgs. The general guideline is that you should package reproducible sources with a high degree of availability. Right now there is only one fetcher which has mirroring support and that is `fetchurl`. Note that you should also prefer protocols which have a corresponding proxy environment variable.
You can find many source fetch helpers in `pkgs/build-support/fetch*`.
In the file `pkgs/top-level/all-packages.nix` you can find fetch helpers; these have names of the form `fetchFrom*`. Their intention is to provide snapshot fetches while using the same API as the version-controlled fetchers from `pkgs/build-support/`. As an example, going from bad to good:
- Bad: Uses `git://` which won't be proxied.
```nix
src = fetchgit {
url = "git://github.com/NixOS/nix.git";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
```
- Better: This is ok, but an archive fetch will still be faster.
```nix
src = fetchgit {
url = "https://github.com/NixOS/nix.git";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
```
- Best: Fetches a snapshot archive and you get the rev you want.
```nix
src = fetchFromGitHub {
owner = "NixOS";
repo = "nix";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1i2yxndxb6yc9l6c99pypbd92lfq5aac4klq7y2v93c9qvx2cgpc";
}
```
Find the value to put as `sha256` by running `nix-shell -p nix-prefetch-github --run "nix-prefetch-github --rev 1f795f9f44607cc5bec70d1300150bfefcef2aae NixOS nix"`.
## Obtaining source hash {#sec-source-hashes}
The preferred source hash type is sha256. There are several ways to get it.
1. Prefetch URL (with `nix-prefetch-XXX URL`, where `XXX` is one of `url`, `git`, `hg`, `cvs`, `bzr`, `svn`). The hash is printed to stdout.
2. Prefetch by package source (with `nix-prefetch-url '<nixpkgs>' -A PACKAGE.src`, where `PACKAGE` is the package attribute name). The hash is printed to stdout.
This works well when you've upgraded an existing package version and want to find out the new hash, but it is useless if the package can't be accessed by attribute or has multiple sources (`.srcs`, architecture-dependent sources, etc.).
3. Upstream-provided hash: use it when upstream provides `sha256` or `sha512` (when upstream provides `md5`, don't use it; compute `sha256` instead).
A little nuance is that the `nix-prefetch-*` tools produce hashes encoded with `base32`, but upstream usually provides hexadecimal (`base16`) encoding. Fetchers understand both formats. Nixpkgs does not standardize on any one format.
You can convert between formats with `nix-hash`, for example:
```ShellSession
$ nix-hash --type sha256 --to-base32 HASH
```
4. Extracting hash from local source tarball can be done with `sha256sum`. Use `nix-prefetch-url file:///path/to/tarball` if you want base32 hash.
5. Fake hash: set a fake hash in the package expression, perform the build, and extract the correct hash from the error Nix prints.
For package updates it is enough to change one character of the existing hash to make it fake. For new packages, you can use `lib.fakeSha256`, `lib.fakeSha512` or any other fake hash.
This is a last-resort method when reconstructing the source URL is non-trivial and `nix-prefetch-url -A` isn't applicable (for example, [one of `kodi` dependencies](https://github.com/NixOS/nixpkgs/blob/d2ab091dd308b99e4912b805a5eb088dd536adb9/pkgs/applications/video/kodi/default.nix#L73)). The easiest way then is to replace the hash with a fake one and rebuild; the Nix build will fail and the error message will contain the desired hash. See the sketch after the warning below.
::: {.warning}
This method has security problems. Check below for details.
:::
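A minimal sketch of the fake-hash method (owner, repo, and rev are placeholders; `lib` is assumed to be in scope):
```nix
src = fetchFromGitHub {
  owner = "some-owner";  # placeholder
  repo = "some-repo";    # placeholder
  rev = "v1.2.3";        # placeholder
  # Build once with a fake hash; the failing build's error message reports the real hash.
  sha256 = lib.fakeSha256;
};
```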
### Obtaining hashes securely {#sec-source-hashes-security}
Let's say a man-in-the-middle (MITM) attacker sits close to your network. Then instead of fetching the source you could fetch malware, and instead of the source hash you would get the hash of the malware. Here are security considerations for this scenario:
- `http://` URLs are not secure to prefetch hash from;
- hashes from upstream (in method 3) should be obtained via secure protocol;
- `https://` URLs are secure in methods 1, 2, 3;
- `https://` URLs are not secure in method 5. When obtaining hashes with the fake hash method, TLS checks are disabled. So re-fetch the source hash from several different networks to exclude a MITM scenario. Alternatively, use the fake hash method to make Nix error out, but instead of extracting the hash from the error, extract the `https://` URL and prefetch it with method 1.
## Patches {#sec-patches}
Patches available online should be retrieved using `fetchpatch`.
```nix
patches = [
(fetchpatch {
name = "fix-check-for-using-shared-freetype-lib.patch";
url = "http://git.ghostscript.com/?p=ghostpdl.git;a=patch;h=8f5d285";
sha256 = "1f0k043rng7f0rfl9hhb89qzvvksqmkrikmm38p61yfx51l325xr";
})
];
```
Otherwise, you can add a `.patch` file to the `nixpkgs` repository. In the interest of keeping our maintenance burden to a minimum, only patches that are unique to `nixpkgs` should be added in this way.
If a patch is available online but does not cleanly apply, it can be modified in some fixed ways by using additional optional arguments for `fetchpatch`. Check [](#fetchpatch) for details.
```nix
patches = [ ./0001-changes.patch ];
```
If you do need to create this sort of patch file, one way to do so is with git:
1. Move to the root directory of the source code you're patching.
```ShellSession
$ cd the/program/source
```
2. If a git repository is not already present, create one and stage all of the source files.
```ShellSession
$ git init
$ git add .
```
3. Edit some files to make whatever changes need to be included in the patch.
4. Use git to create a diff, and pipe the output to a patch file:
```ShellSession
$ git diff -a > nixpkgs/pkgs/the/package/0001-changes.patch
```
## Package tests {#sec-package-tests}
Tests are important to ensure quality and make reviews and automatic updates easy.
The following types of tests exist:
* [NixOS **module tests**](https://nixos.org/manual/nixos/stable/#sec-nixos-tests), which spawn one or more NixOS VMs. They exercise both NixOS modules and the packaged programs used within them. For example, a NixOS module test can start a web server VM running the `nginx` module, and a client VM running `curl` or a graphical `firefox`, and test that they can talk to each other and display the correct content.
* Nix **package tests** are a lightweight alternative to NixOS module tests. They should be used to create simple integration tests for packages, but cannot test NixOS services, and some programs with graphical user interfaces may also be difficult to test with them.
* The **`checkPhase` of a package**, which should execute the unit tests that are included in the source code of a package.
Here in the nixpkgs manual we describe mostly _package tests_; for _module tests_ head over to the corresponding [section in the NixOS manual](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
### Writing inline package tests {#ssec-inline-package-tests-writing}
For very simple tests, they can be written inline:
```nix
{ …, yq-go }:
buildGoModule rec {
passthru.tests = {
simple = runCommand "${pname}-test" {} ''
echo "test: 1" | ${yq-go}/bin/yq eval -j > $out
[ "$(cat $out | tr -d $'\n ')" = '{"test":1}' ]
'';
};
}
```
### Writing larger package tests {#ssec-package-tests-writing}
This is an example using the `phoronix-test-suite` package with the current best practices.
Add the tests in `passthru.tests` to the package definition like this:
```nix
{ stdenv, lib, fetchurl, callPackage }:
stdenv.mkDerivation {
passthru.tests = {
simple-execution = callPackage ./tests.nix { };
};
meta = { … };
}
```
Create `tests.nix` in the package directory:
```nix
{ runCommand, phoronix-test-suite }:
let
inherit (phoronix-test-suite) pname version;
in
runCommand "${pname}-tests" { meta.timeout = 60; }
''
# automatic initial setup to prevent interactive questions
${phoronix-test-suite}/bin/phoronix-test-suite enterprise-setup >/dev/null
# get version of installed program and compare with package version
if [[ `${phoronix-test-suite}/bin/phoronix-test-suite version` != *"${version}"* ]]; then
echo "Error: program version does not match package version"
exit 1
fi
# run dummy command
${phoronix-test-suite}/bin/phoronix-test-suite dummy_module.dummy-command >/dev/null
# needed for Nix to register the command as successful
touch $out
''
```
### Running package tests {#ssec-package-tests-running}
You can run these tests with:
```ShellSession
$ cd path/to/nixpkgs
$ nix-build -A phoronix-test-suite.tests
```
### Examples of package tests {#ssec-package-tests-examples}
Here are examples of package tests:
- [Jasmin compile test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/compilers/jasmin/test-assemble-hello-world/default.nix)
- [Lobster compile test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/compilers/lobster/test-can-run-hello-world.nix)
- [Spacy annotation test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/spacy/annotation-test/default.nix)
- [Libtorch test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/science/math/libtorch/test/default.nix)
- [Multiple tests for nanopb](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/nanopb/default.nix)
### Linking NixOS module tests to a package {#ssec-nixos-tests-linking}
Like [package tests](#ssec-package-tests-writing) as shown above, [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests) can also be linked to a package, so that the tests can be easily run when changing the related package.
For example, assuming we're packaging `nginx`, we can link its module test via `passthru.tests`:
```nix
{ stdenv, lib, nixosTests }:
stdenv.mkDerivation {
...
passthru.tests = {
nginx = nixosTests.nginx;
};
...
}
```