The database has these constraints:
    check ((type = 0) = (nixExprInput is not null and nixExprPath is not null)),
    check ((type = 1) = (flake is not null)),
which prevented switching to flakes in a declarative jobspec, since the
nixexpr{path,input} fields were not nulled in such an update.
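A sketch of the shape of the fix (the surrounding declarative-update code
is omitted and the hash and variable names are illustrative):

    # When the declarative spec switches a jobset to flakes (type = 1),
    # explicitly null the legacy expression fields so that the first
    # CHECK constraint holds again.
    if (defined $spec->{flake}) {
        $update{type}         = 1;
        $update{flake}        = $spec->{flake};
        $update{nixexprinput} = undef;   # stored as NULL
        $update{nixexprpath}  = undef;   # stored as NULL
    }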
Co-Authored-By: Graham Christensen <graham@grahamc.com>
Fixes this error:
    ERROR: failed to process declarative jobset test:inputs,
    DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st
    execute failed: ERROR: null value in column "emailoverride" violates
    not-null constraint
Taken from `Perl::Critic`:
A common idiom in Perl for dealing with possible errors is to use `eval`
followed by a check of `$@`/`$EVAL_ERROR`:
    eval {
        ...
    };

    if ($EVAL_ERROR) {
        ...
    }
There's a problem with this: the value of `$EVAL_ERROR` (`$@`) can change
between the end of the `eval` and the `if` statement. The issue is object
destructors:
    package Foo;

    ...

    sub DESTROY {
        ...
        eval { ... };
        ...
    }

    package main;

    eval {
        my $foo = Foo->new();
        ...
    };

    if ($EVAL_ERROR) {
        ...
    }
Assuming there are no other references to `$foo` created, when the
`eval` block in `main` is exited, `Foo::DESTROY()` will be invoked,
regardless of whether the `eval` finished normally or not. If the `eval`
in `main` fails, but the `eval` in `Foo::DESTROY()` succeeds, then
`$EVAL_ERROR` will be empty by the time that the `if` is executed.
Additional issues arise if you depend upon the exact contents of
`$EVAL_ERROR` and both `eval`s fail, because the messages from both will
be concatenated.
Even if there isn't an `eval` directly in the `DESTROY()` method code,
it may invoke code that uses `eval` or otherwise affects
`$EVAL_ERROR`.
The solution is to ensure that, upon normal exit, an `eval` returns a
true value and to test that value:
    # Constructors are no problem.
    my $object = eval { Class->new() };

    # To cover the possibility that an operation may correctly return a
    # false value, end the block with "1":
    if ( eval { something(); 1 } ) {
        ...
    }

    eval {
        ...
        1;
    }
    or do {
        # Error handling here
    };
Unfortunately, you can't use the `defined` function to test the result;
`eval` returns an empty string on failure.
Various modules have been written to take some of the pain out of
properly localizing and checking `$@`/`$EVAL_ERROR`. For example:
    use Try::Tiny;

    try {
        ...
    } catch {
        # Error handling here;
        # The exception is in $_/$ARG, not $@/$EVAL_ERROR.
    }; # Note semicolon.
"But we don't use DESTROY() anywhere in our code!" you say. That may be
the case, but do any of the third-party modules you use have them? What
about any you may use in the future or updated versions of the ones you
already use?
The current implementation passes all values to the `create_or_update`
method. The missing values end up as `undef` (or `NULL`) when assigned to
`%update`. Thus, for columns that are NOT NULL, this results in a horrible
error when, for example, flakes are not used:
    DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed:
    ERROR: null value in column "type" violates not-null constraint
    DETAIL: Failing row contains (.jobsets, 118, hydra, hydra jobsets, src, hydra/jobsets.nix, null,
    null, null, 1589536378, 1, 0, 0, , 3, 30, 100, null, null, 1589536379, null, null). [for Statement
    "UPDATE jobsets SET checkinterval = ?, description = ?, enableemail = ?, nixexprinput = ?,
    nixexprpath = ?, type = ? WHERE ( ( name = ? AND project = ? ) )" with ParamValues: 1='30',
    2='hydra jobsets', 3='0', 4='src', 5='hydra/jobsets.nix', 6=undef, 7='.jobsets', 8='hydra'] at
    /nix/store/lsf81ip9ybxihk5praf2n0nh14a6i9j0-hydra-0.1.19700101.DIRTY/libexec/hydra/lib/Hydra/Helper/AddBuilds.pm line 50
This change just omits adding such values to `%update`, which results in
PostgreSQL assigning the default values.
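A minimal sketch of the new behaviour ($spec, $jobset and the loop are
illustrative, not the actual code):

    # Copy only the fields the declarative spec actually provides; keys
    # left out of %update keep their database defaults.
    my %update;
    for my $key (keys %$spec) {
        $update{$key} = $spec->{$key} if defined $spec->{$key};
    }
    $jobset->update(\%update);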
To quote the function's comment:
    Awful hack to handle timeouts in SQLite: just retry the transaction.
    DBD::SQLite *has* a 30 second retry window, but apparently it
    doesn't work.
Since SQLite is now dropped entirely, this wrapper can be removed
completely.
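For reference, the removed wrapper amounted to something like this (a
reconstruction with hypothetical names, not the exact code):

    sub retry_sqlite_txn {
        my ($db, $coderef) = @_;
        while (1) {
            # Retry the whole transaction whenever SQLite reports a lock
            # timeout; DBD::SQLite's own retry window didn't help here.
            my $ok = eval { $db->txn_do($coderef); 1 };
            return if $ok;
            die $@ unless $@ =~ /database is locked/;
        }
    }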
This allows fully declarative project specifications. This is best
illustrated by example:
* I create a new project, setting the declarative spec file to
"spec.json" and the declarative input to a git repo pointing
at git://github.com/shlevy/declarative-hydra-example.git
* hydra creates a special ".jobsets" jobset alongside the project
* Just before evaluating the ".jobsets" jobset, hydra fetches
declarative-hydra-example.git, reads spec.json as a jobset spec,
and updates the jobset's configuration accordingly:
    {
        "enabled": 1,
        "hidden": false,
        "description": "Jobsets",
        "nixexprinput": "src",
        "nixexprpath": "default.nix",
        "checkinterval": 300,
        "schedulingshares": 100,
        "enableemail": false,
        "emailoverride": "",
        "keepnr": 3,
        "inputs": {
            "src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
            "nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
        }
    }
* When the "jobsets" job of the ".jobsets" jobset completes, hydra
reads its output as a JSON representation of a dictionary of
jobset specs and creates a jobset named "master" configured
accordingly (In this example, this is the same configuration as
.jobsets itself, except using release.nix instead of default.nix):
    {
        "enabled": 1,
        "hidden": false,
        "description": "js",
        "nixexprinput": "src",
        "nixexprpath": "release.nix",
        "checkinterval": 300,
        "schedulingshares": 100,
        "enableemail": false,
        "emailoverride": "",
        "keepnr": 3,
        "inputs": {
            "src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
            "nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
        }
    }
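In code, that last step is roughly the following (a sketch; the helper
and variable names are hypothetical):

    use JSON::MaybeXS qw(decode_json);

    # The "jobsets" job's output is a dictionary mapping jobset names to
    # jobset specs, e.g. { "master" => { "enabled" => 1, ... } }.
    my $specs = decode_json($jobsetsOutput);
    for my $name (sort keys %$specs) {
        update_declarative_jobset($project, $name, $specs->{$name});
    }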
This is necessary given the current size of the Nixpkgs/NixOS
jobsets. Once we have a Nix store + Postgres on SSD, we can reduce
this again.
Should really make this configurable...
This removes the "busy", "locker" and "logfile" columns, which are no
longer used by the queue runner. The "Running builds" page now only
shows builds that have an active build step.
This is to properly separate channels from regular jobs and also to make
sure that we can always iterate on them, no matter whether the build has
failed. The reason we were not able to do this until now is that we were
iterating over the build products, and whenever some constituent of a
channel job failed, we didn't get a build output.
So whenever a job has meta.isHydraChannel set, we can now properly
distinguish it from the other jobs.
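Conceptually the check is now as simple as this (a sketch; how the
decoded meta attributes are obtained is elided):

    # A job is a channel iff its Nix meta attributes say so; this no
    # longer depends on build products, so it also works for failed builds.
    my $meta = get_meta_for_build($build);   # hypothetical accessor
    if ($meta->{isHydraChannel}) {
        # treat as a channel job
    }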
I still don't have any clue why "make -C src/sql update-dbix" without
*any* modifications tries to create additional schema definitions. But
I've checked the md5sums of the existing schema definitions and they
don't seem to match, so it seems that they already have been tampered
with.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
These give warnings in Perl >= 5.18:
    given is experimental at /home/hydra/src/hydra/src/lib/Hydra/Helper/CatalystUtils.pm line 241.
    when is experimental at /home/hydra/src/hydra/src/lib/Hydra/Helper/CatalystUtils.pm line 242.
    ...
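The fix is the usual mechanical rewrite of `given`/`when` into plain
conditionals, along these lines (a sketch, not the actual call sites):

    # Before (warns under Perl >= 5.18):
    #
    #     given ($type) {
    #         when ("git") { ... }
    #         default      { ... }
    #     }
    #
    # After:
    if ($type eq "git") {
        ...
    }
    else {
        ...
    }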
All successful, non-garbage-collected builds in the evaluation are
passed in an attribute set. So if you declare a Hydra input named
‘foo’ of type ‘eval’, you get a set with members ‘foo.<jobname>’. For
instance, if you passed a Nixpkgs eval as an input named ‘nixpkgs’,
then you could get the Firefox build for x86_64-linux as
‘nixpkgs.firefox.x86_64-linux’.
Inputs of type ‘eval’ can be specified in three ways (examples follow the list):
* As the number of the evaluation.
* As a jobset identifier (‘<project>:<jobset>’), which will yield the
latest finished evaluation of that jobset. Note that there is no
guarantee that any job in that evaluation has succeeded, so it might
not be very useful.
* As a job identifier (‘<project>:<jobset>:<job>’), which will yield
the latest finished evaluation of that jobset in which <job>
succeeded. In conjunction with aggregate jobs, this allows you to
make sure that the evaluation contains the desired builds.
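For instance (values are illustrative):
* ‘1234’ refers to evaluation number 1234.
* ‘nixpkgs:trunk’ refers to the latest finished evaluation of the
  ‘trunk’ jobset in the ‘nixpkgs’ project.
* ‘nixpkgs:trunk:firefox.x86_64-linux’ refers to the latest finished
  evaluation of that jobset in which ‘firefox.x86_64-linux’ succeeded.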
This reverts commit 2d7e106d293c7e81b4b0b333d256aef0490ea1bc.
Unfortunately some jobsets still depend on this behaviour. They could
probably do something like "assert system == input.system; ..." but
changing them all is undesirable.