New jobs have their "new" status take precedence over them also being
"failed" or "queued", which means actions that can act on "failed" or
"queued" jobs weren't shown to the user while those jobs were only
treated as "new".
(cherry picked from commit 9a4a5dd624)
This is implemented in an extremely hacky way due to poor DBIx feature
support. Ideally, what we'd need is a way to tell DBIx to ignore the
errormsg column unless explicitly requested, and to automatically add a
computed 'errormsg IS NULL' column to other queries. Since it does not
support that, this commit instead hacks in some support via method
overrides while taking care not to break anything obvious.
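A minimal sketch of the kind of override meant here, assuming a hypothetical ResultSet subclass (package name, column alias, and the code itself are illustrative, not the actual change):

    package Hydra::Schema::ResultSet::Builds;   # hypothetical name and placement
    use strict;
    use warnings;
    use base 'DBIx::Class::ResultSet';

    # Every search gets a cheap computed flag instead of pulling in the
    # potentially large errormsg text itself.
    sub search {
        my ($self, $cond, $attrs) = @_;
        $attrs //= {};
        $attrs->{'+select'} //= [ \'(errormsg IS NULL) AS errormsg_is_null' ];
        $attrs->{'+as'}     //= [ 'errormsg_is_null' ];
        return $self->SUPER::search($cond, $attrs);
    }

    1;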
The third argument to `open()` in `-|` mode is passed to a shell if it's
a string. In my case the store URI contains
`?secret-key=${signingKey.directory}/secret&compression=zstd`
For the `nix store cat` case this means that
* everything up to the `&` is started as a background process. This fails
  immediately because no path to cat is specified.
* `compression=zstd` becomes a shell variable assignment
* the `$path` argument to `store cat` is treated as yet another command and executed
Passing the argument list instead of a single string solves the problem, as sketched below.
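For illustration, the difference between the two `open()` forms (store URI and path are made up):

    my $uri  = "file:///var/cache?secret-key=/etc/nix/secret&compression=zstd";
    my $path = "/nix/store/example-output";

    # String form: Perl hands the command to /bin/sh, so '&' backgrounds the
    # first part, 'compression=zstd' becomes a variable assignment, and the
    # path is run as yet another command.
    open(my $bad, "-|", "nix store cat --store $uri $path") or die $!;

    # List form: the arguments go straight to exec(), no shell involved.
    open(my $good, "-|", "nix", "store", "cat", "--store", $uri, $path) or die $!;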
(cherry picked from commit 3ee51dbe589458cc54ff753317bbc6db530bddc0)
When an artifact is requested from Hydra, the output is first copied
from the Nix store into memory and then sent as a response, delaying
the download and taking up significant amounts of memory.
As reported in https://github.com/NixOS/hydra/issues/1357
Instead of calling a command and blocking while reading in its entire
output, this adds read_into_socket(). The function takes a command,
starts a subprocess with that command, and returns a file descriptor
attached to its stdout.
This file descriptor is then used by Catalyst's response builder to
stream the output directly.
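Roughly the shape of it (a simplified sketch; the helper's real signature and the controller code may differ):

    # Start the command without a shell and return a read handle on its stdout.
    sub read_into_socket {
        my (@command) = @_;
        open(my $fh, "-|", @command) or die "failed to start @command: $!";
        return $fh;
    }

    # Handing the handle to Catalyst streams the response from the subprocess
    # instead of reading everything into memory first, e.g.:
    # $c->response->body(read_into_socket("nix", "store", "cat", "--store", $store_uri, $path));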
(cherry picked from commit 459aa0a5983a0bd546399c08231468d6e9282f54)
In 73694087a088ed4481b4ab268a03351b1bcaac3c I gave builds that failed
because of a timeout or an exceeded log limit a stop sign, and I stand by
that reasoning: it makes it possible to distinguish between actual
build failures and rather transient things such as timeouts.
Back then I considered it a feature that these are shown in a different
tab, but I don't think that's a good idea anymore. When using a jobset to
e.g. track the regressions from a mass rebuild (like a compiler or gcc
update), "Newly failed builds" should exclusively display regressions (and
flaky builds of course, not much I can do about that).
Also, when a bunch of builds fail in such a jobset because of e.g. a
broken connection to a builder that results in a timeout, I want to be
able to restart them all w/o rebuilding actual regressions.
To make it clear that the tab doesn't only contain "Aborted" builds, I
renamed the label to "Aborted / Timed out".
When content-addressed derivations are built on the Hydra server,
one may run into an issue where some builds suddenly don't load anymore.
This seems to be caused by outPaths that are NULL (which is
allowed for ca-derivations). Filter them out to prevent querying the
database for them, which is not supported by the database abstraction
layer that's currently in use.
On my instance this appears to resolve the issue.
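The shape of the fix, roughly (variable names are illustrative, not the actual code in Nix.pm):

    my @maybeOutPaths = (undef, "/nix/store/example-output");   # undef = CA derivation
    # Filter before the values end up inside an IN (...) query, which the
    # SQL abstraction layer refuses to build for NULL.
    my @outPaths = grep { defined } @maybeOutPaths;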
I feel like I might be doing this at the wrong abstraction layer, but on
the other hand -- it seems to resolve it and it also doesn't really look
like it will hurt anything.
The test added in a previous commit uncovers this issue, and this commit
resolves it. So I'm happy with this patch for now.
The issue I was seeing on my server:
hydra-server[2549]: [error] Couldn't render template "undef error - DBIx::Class::SQLMaker::ClassicExtensions::puke(): Fatal: NULL-within-IN not implemented: The upcoming SQL::Abstract::Classic 2.0 will emit the logically correct SQL instead of raising this exception. at /nix/store/<hash>-hydra-unstable-2024-03-08_nix_2_20/libexec/hydra/lib/Hydra/Helper/Nix.pm line 190
See also short discussion here: https://github.com/NixOS/nixpkgs/pull/297392#issuecomment-2035366263
Implements support for Nix's new Perl bindings[1]. The current state
basically does `openStore()`, but always uses `auto` and doesn't support
stores at other URIs.
Even though the stores are cached inside the Perl implementation, I
decided to instantiate them once in the Nix helper module. That way
store openings aren't cluttered across the entire codebase. Also, there
are two stores used later on - MACHINE_LOCAL_STORE for `auto`,
BINARY_CACHE_STORE for the one from `store_uri` in `hydra.conf` - and
using consistent names should make the intent clearer.
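Roughly the shape of that (a sketch only: the constructor is an assumption about the new bindings' API, and the URI is a stand-in for `store_uri`):

    use Nix::Store;   # constructor shape below is assumed, not verified

    my $storeUri = "https://cache.example.org";   # stand-in for store_uri from hydra.conf

    # Open each store once and reuse the handles, instead of opening stores
    # all over the codebase.
    our $MACHINE_LOCAL_STORE = Nix::Store->new("auto");
    our $BINARY_CACHE_STORE  = Nix::Store->new($storeUri);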
This doesn't contain any behavioral changes, i.e. the build product
availability issue from #1352 isn't fixed. This patch only contains the
migration to the new API.
[1] https://github.com/NixOS/nix/pull/9863
This shouldn't be possible normally, but it is possible to:
$db->resultset('RunCommandLogs')->new({ uuid => "../etc/passwd" });
if you have access to the `$db`.
Using a sha1 of the command combined with the build ID is not a
particularly good or unique identifier:
* A build could fail, be restarted, and then succeed -- assuming no
configuration changes, the sha1 hash of the command as well as the build
ID will be the same. This would lead to an overwritten log file.
* Allowing user input to influence filenames is not the best of ideas.
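A minimal sketch of the safer direction implied here, assuming UUID::Tiny (not necessarily the module actually used): generate a random v4 UUID server-side and reject anything that doesn't look like one before it is used in a path.

    use UUID::Tiny ':std';

    # Random v4 UUID: unique per invocation and not derived from user input
    # such as the command text or the build ID.
    my $uuid = create_uuid_as_string(UUID_V4);

    # Defensive check before the value is ever used as part of a file name.
    die "malformed uuid: $uuid"
        unless $uuid =~ /^[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}$/;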
Using it causes database information to be fixed early, before tests can set up a
new database. We only used it in one case, and that is an absolute reference anyway. The
tests for channel generation are passing, and those use
requireLocalStore, so this should be fine.
12:34:12 hydra-server.1 | Use of uninitialized value $s in substitution (s///) at /home/vin/workspace/vcs/hydra/src/script/../lib/Hydra/Helper/CatalystUtils.pm line 283, <$fh> line 1.
Fix parsing breakage from #1003: assigning the lines to $lines broke chomp and the filters.
This test validates that the parsing works as expected, and also fixes
a minor bug where '-' in features wasn't pruned, as it is in the C++
repo.
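For illustration (the field value is made up): a bare '-' marks an empty field in a machines line and shouldn't survive as a feature name, matching the C++ parser.

    my $featuresField = "-";   # placeholder for "no supported features"
    # Split the comma-separated field and prune the "-" placeholder.
    my @features = grep { $_ ne "-" } split /,/, $featuresField;   # -> empty list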