When an artifact is requested from Hydra, the output is first copied
from the Nix store into memory and then sent as the response, delaying
the download and consuming a significant amount of memory, as reported
in https://github.com/NixOS/hydra/issues/1357.
Instead of calling a command and blocking while reading its entire
output, this adds read_into_socket(). The function takes a command,
starts a subprocess with that command, and returns a file descriptor
attached to the subprocess's stdout. This file descriptor is then used
by Catalyst's response builder to stream the output directly.
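A minimal sketch of what such a helper can look like in Perl (the name
read_into_socket comes from this change; the exact signature and the
Catalyst call are illustrative):

    # Start the command in a subprocess and return a handle attached to
    # its stdout, so the caller can stream instead of buffering.
    sub read_into_socket {
        my (@command) = @_;

        # "-|" forks and connects the child's stdout to $fh.
        open(my $fh, "-|", @command)
            or die "failed to run @command: $!";

        return $fh;
    }

    # Catalyst can stream a response body given as a file handle, e.g.:
    # $c->response->body(read_into_socket("nix-store", "--dump", $path));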
(cherry picked from commit 459aa0a5983a0bd546399c08231468d6e9282f54)
Implements support for Nix's new Perl bindings[1]. The current state
basically does `openStore()`, but always uses `auto` and doesn't support
stores at other URIs.
Even though the stores are cached inside the Perl implementation, I
decided to instantiate them once in the Nix helper module. That way
store openings aren't scattered across the entire codebase. Also, two
stores are used later on - MACHINE_LOCAL_STORE for `auto` and
BINARY_CACHE_STORE for the one from `store_uri` in `hydra.conf` - and
using consistent names should make the intent clearer.
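A sketch of the pattern described above (the openStore() call is the
one this commit mentions; its exact signature in the new bindings, and
the config access, are assumptions):

    use Nix::Store;

    # Open each store exactly once in the helper module, instead of
    # scattering openStore() calls around the codebase.
    # $config is the parsed hydra.conf (assumed context).
    my $MACHINE_LOCAL_STORE = Nix::Store::openStore("auto");
    my $BINARY_CACHE_STORE  = Nix::Store::openStore($config->{store_uri});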
This doesn't contain any behavioral changes, i.e. the build product
availability issue from #1352 isn't fixed. This patch only contains the
migration to the new API.
[1] https://github.com/NixOS/nix/pull/9863
To correctly render HTML reports, we make sure to return the following
MIME types instead of "text/plain" (a short sketch of such a mapping
follows below):
- *.css: "text/css"
- *.js: "application/javascript"
Fixes: #1267
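A sketch of the extension-to-MIME-type mapping (the helper name and
lookup logic are illustrative, not the exact Hydra code):

    my %mime_types = (
        css => "text/css",
        js  => "application/javascript",
    );

    # Fall back to "text/plain" for anything we don't know.
    sub content_type_for {
        my ($filename) = @_;
        my ($ext) = $filename =~ /\.([^.]+)$/;
        return $mime_types{ $ext // "" } // "text/plain";
    }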
This also adds the `runcommandlog` object to the stash so that we can
access its uuid as well as the command that was run, in order to
display more useful and specific information on the webpage.
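For instance, in a Catalyst controller (the field names come from the
description above; the surrounding code is hypothetical):

    # Make the row available to the template ...
    $c->stash->{runcommandlog} = $runcommandlog;

    # ... which can then render it, e.g. in Template Toolkit syntax:
    # [% runcommandlog.uuid %] / [% runcommandlog.command %]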
Using a sha1 of the command combined with the build ID is not a
particularly good or unique identifier:
* A build could fail, be restarted, and then succeed -- assuming no
configuration changes, the sha1 hash of the command as well as the build
ID will be the same. This would lead to an overwritten log file.
* Allowing user input to influence filenames is not the best of ideas.
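A sketch of generating such an identifier with a UUID instead (the
module choice and path layout are illustrative):

    use Data::UUID;

    # Independent of both the command text and the build ID, so a
    # restarted build never overwrites an earlier log.
    my $uuid     = lc Data::UUID->new->create_str();
    my $log_path = "$log_dir/$uuid";  # $log_dir is assumed context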
To quote the function's comment:
    Awful hack to handle timeouts in SQLite: just retry the transaction.
    DBD::SQLite *has* a 30 second retry window, but apparently it
    doesn't work.
Since SQLite is now dropped entirely, this wrapper can be removed
completely.
This makes downloading/viewing build results work with binary cache
stores. For good performance, this should be used in conjunction with
ca580bec35,
i.e. you should set store_uri to something like
s3://my-cache?local-nar-cache=/tmp/nar-cache
to cache NARs between requests.
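For example, in hydra.conf (the URI is the one from above; the rest of
the file is unchanged):

    store_uri = s3://my-cache?local-nar-cache=/tmp/nar-cache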