This version has a worse UI, but also changes the schema less: one
non-null constraint is removed, but no new columns are added.
Co-Authored-By: Andrea Ciceri <andrea.ciceri@autistici.org>
Co-Authored-By: regnat <rg@regnat.ovh>
Since the default lengths in Crypt::Passphrase::Argon2 changed from 16
to 32 in 0.009, some tests that expected the passphrase to remain
unchanged started failing.
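As an illustration of why the tests broke (not necessarily the fix applied
here), here is a minimal sketch with Crypt::Passphrase, assuming `salt_size`
and `output_size` are the options that control those lengths:

```perl
use Crypt::Passphrase;

# Encoder pinned to the pre-0.009 defaults (16-byte salt and tag).
my $old = Crypt::Passphrase->new(
    encoder => { module => 'Argon2', output_size => 16, salt_size => 16 },
);

# Encoder using the 0.009+ defaults (32 bytes).
my $new = Crypt::Passphrase->new(encoder => 'Argon2');

my $hash = $old->hash_password('s3cret');

# The old hash still verifies, but the new defaults flag it for re-encoding,
# so any code path that rehashes on successful login rewrites the stored
# passphrase -- exactly what tests expecting it to stay unchanged tripped over.
print $new->verify_password('s3cret', $hash) ? "verifies\n" : "rejected\n";
print $new->needs_rehash($hash) ? "would be rewritten\n" : "unchanged\n";
```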
Using a sha1 of the command combined with the build ID is not a
particularly good or unique identifier:
* A build could fail, be restarted, and then succeed -- assuming no
configuration changes, the sha1 hash of the command as well as the build
ID will be the same. This would lead to an overwritten log file (see the
sketch after this list).
* Allowing user input to influence filenames is not the best of ideas.
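A quick sketch of the first problem (the naming scheme here is a hypothetical
reconstruction, not the actual code):

```perl
use Digest::SHA qw(sha1_hex);

# Same command, same build ID => same "identifier", so the log file from a
# restarted build silently overwrites the log of the earlier attempt.
my $build_id = 1234;
my $command  = 'nix-store --realise /nix/store/example.drv';  # user-influenced
my $first    = sha1_hex($command) . '-' . $build_id;
my $second   = sha1_hex($command) . '-' . $build_id;
print "same log file name\n" if $first eq $second;
```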
Yet again, manual testing is proving to be insufficient. I'm pretty
sure I wrote this code but lost it in a rebase, or perhaps in the switch
to result classes.
At any rate, this implements the actual "fetch a retry row and run it"
for the hydra-notify daemon.
Tested by hand.
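Roughly, that step has the following shape; the table, column, and helper
names are assumptions made for this sketch, not Hydra's actual code:

```perl
# Pull the next due retry row and run it.
sub dispatch_one_retry {
    my ($db) = @_;

    my $row = $db->resultset('TaskRetries')->search(
        { retry_at => { '<=' => time() } },
        { order_by => { -asc => 'retry_at' }, rows => 1 },
    )->first;
    return 0 unless defined $row;

    eval {
        run_task($row);          # hypothetical helper: re-dispatch to the plugin
        $row->delete;            # success: forget the retry
        1;
    } or do {
        # Failure: back off and try again later.
        $row->update({ retry_at => time() + 2 * $row->delay, delay => 2 * $row->delay });
    };
    return 1;
}
```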
Get the number of seconds before the next retriable task is ready.
This number is specifically intended to be used as a timeout, where
`undef` means never time out.
This gives us a place to put helper functions that act on entire
tables, not just individual records.
This should be a backwards-compatible change, except in places where
we're manually using result class names.
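A sketch tying these two pieces together -- a ResultSet class holding the
table-level timeout helper described above; the class, table, and column
names are assumptions:

```perl
package Hydra::Schema::ResultSet::TaskRetries;  # name assumed for this sketch

use strict;
use warnings;
use base 'DBIx::Class::ResultSet';

# Seconds until the earliest scheduled retry is due, or undef if nothing is
# scheduled -- i.e. the caller's poll should block indefinitely.
sub get_seconds_to_next_retry {
    my ($self) = @_;

    my $next = $self->search({}, {
        order_by => { -asc => 'retry_at' },
        rows     => 1,
    })->first;

    return undef unless defined $next;
    my $remaining = $next->retry_at - time();
    return $remaining > 0 ? $remaining : 0;
}

1;
```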
Without this commit, two jobsets using the same repository as input,
but different `deepClone` options, end up incorrectly sharing the same
"checkout" for a given (`uri`, `branch`, `revision`) tuple. The
presence or absence of `.git` is determined by the jobset execution
order.
This patch adds the missing `isDeepClone` boolean to the cache key.
The database upgrade script empties the `CachedGitInputs` table, as we
don't know if existing checkouts are deep clones. Unfortunately, this
generally forces rebuilds even for correct `deepClone` checkouts, as
the binary contents of `.git` are not deterministic.
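For illustration, the corrected lookup might look like the following sketch
(column and variable names are assumptions): the flag becomes part of the
key, so shallow and deep checkouts no longer collide.

```perl
# The deep-clone flag is now part of the cache key.
my $cached = $db->resultset('CachedGitInputs')->search({
    uri         => $uri,
    branch      => $branch,
    revision    => $revision,
    isdeepclone => $deepClone ? 1 : 0,
})->first;
```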
Fixes #510
If the project isn't declarative, who cares about it in the response? After
setting the `declfile` to an empty string, everything related to
declarativeness is wiped out anyway.
It appears the Jobs table was removed in
8adb433e3b48638f3faaecc61ae5cb63efefff6e, but the Jobsets schema was never
updated to reflect this. The `jobs` relationship had been added in
efa1f1d4fbdc468c61f7403e8709d0295e052c7f, roughly three months prior.
Previously, one would see a message similar to the following logged when
deleting a jobset:
17:38:23 hydra-server.1 | DBIx::Class::Relationship::CascadeActions::delete(): Skipping cascade delete on relationship 'jobs' - related resultsource 'Hydra::Schema::Jobs' is not registered with this schema at /home/vin/workspace/vcs/hydra/src/script/../lib/Hydra/Controller/Jobset.pm line 106
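The leftover declaration in the Jobsets result class presumably looked
something like this; the join condition is a guess, shown only to illustrate
the kind of relationship that triggers the warning:

```perl
# Stale has_many pointing at a result class that no longer exists.
__PACKAGE__->has_many(
    "jobs",
    "Hydra::Schema::Jobs",
    { "foreign.jobset" => "self.name", "foreign.project" => "self.project" },
);
```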
To further align with the API, we return custom JSON in order to display a
`visible` field rather than `hidden` -- a `PUT` request expects `visible`, while
a `GET` request returns `hidden`.
This also allows us to rename the `jobsetinputs` field to `inputs` for the same
reason: `PUT` expects `inputs`, while `GET` returns `jobsetinputs`.
`PUT /jobsets/{project-id}/{jobset-id}` expects a JSON object `inputs` that
maps each input name to an object containing a name, a type, a value, and a
boolean that enables emailing responsible parties. However,
`GET /jobsets/{project-id}/{jobset-id}` responds with an object that doesn't
contain a `value`, but does contain a `jobsetinputalts` field (which is old
and should be unused).
This commit aligns the two by removing the old and unused `jobsetinputalts`
from the response and replacing it with `value`.
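For illustration only -- the input name and values are invented and the field
set is approximate -- the shape that `PUT` accepts and `GET` now returns looks
roughly like this, written as the Perl structure that gets serialized to JSON:

```perl
use JSON::MaybeXS qw(encode_json);

my $jobset_body = {
    visible => \1,    # serialized as JSON true; GET previously exposed `hidden`
    inputs  => {      # GET previously called this field `jobsetinputs`
        nixpkgs => {
            type             => 'git',
            value            => 'https://github.com/NixOS/nixpkgs nixos-unstable',
            emailresponsible => \0,   # serialized as JSON false
        },
    },
};

print encode_json($jobset_body), "\n";
```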
* made all columns available via the API (except for `forceeval`)
* renamed `flakeref` to `flake` to unify the API with the database schema
* renamed `inputs` to `jobsetinputs` to unify the API with the database schema