Replace zap with zerolog.
zerolog has a cleaner interface and can be easily configured with custom
error chain printing using a new error handling library that will be
implemented in another PR.
* Create an APIError that should only be used for api-returned errors.
It'll wrap an error and can have different Kinds and an optional code and
message (see the sketch after this list).
* The http handlers will use the first APIError available in the
error chain and generate a json response body containing the code and
the user message. The wrapped error is internal and is not sent in the
response.
If no APIError is available in the chain, a generic internal
server error will be returned.
* Add a RemoteError type that will be created from remote service calls
(runservice, configstore). It's similar to APIError but is a
different type, so it won't be propagated to the caller response, and it
doesn't contain any wrapped error.
* Gateway: when we call a remote service, by default, we'll create an
APIError using the RemoteError Kind (omitting the code and the
message, which usually must not be propagated).
This is done for all the remote service calls as a starting point. In the
future, if this default behavior is not the right one for a specific
remote service call, a new APIError with a different kind and/or
augmented with the calling service error codes and user messages could
be created.
* datamanager: Use a dedicated ErrNotExist (converting the objectstorage
ErrNotExist).
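A minimal sketch of what such an APIError and the handler-side mapping could look like (the Kind values, field names and json shape here are illustrative assumptions, not the actual implementation):

```go
package example

import (
	"encoding/json"
	"errors"
	"net/http"
)

// ErrorKind classifies an APIError (illustrative values).
type ErrorKind int

const (
	KindBadRequest ErrorKind = iota
	KindNotExist
	KindInternal
)

// APIError wraps an internal error and carries a Kind plus an optional code
// and user message. The wrapped error is never sent to the client.
type APIError struct {
	Kind    ErrorKind
	Code    string
	Message string
	err     error
}

func (e *APIError) Error() string {
	if e.err != nil {
		return e.err.Error()
	}
	return e.Message
}

func (e *APIError) Unwrap() error { return e.err }

func kindToStatus(k ErrorKind) int {
	switch k {
	case KindBadRequest:
		return http.StatusBadRequest
	case KindNotExist:
		return http.StatusNotFound
	default:
		return http.StatusInternalServerError
	}
}

// httpError writes the first APIError found in the error chain as a json body
// containing the code and the user message; with no APIError in the chain it
// falls back to a generic internal server error.
func httpError(w http.ResponseWriter, err error) {
	var apiErr *APIError
	if errors.As(err, &apiErr) {
		w.WriteHeader(kindToStatus(apiErr.Kind))
		_ = json.NewEncoder(w).Encode(map[string]string{
			"code":    apiErr.Code,
			"message": apiErr.Message,
		})
		return
	}
	http.Error(w, "internal server error", http.StatusInternalServerError)
}
```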
Skip fetching of tasks with status skipped, not only tasks marked as skip.
This avoids many wrong and noisy logs of the type "executor task with id
taskid doesn't exist. This shouldn't happen. Skipping fetching".
updateRunTaskStatus should also accept transitions from not started to a
finished state ("success", "failed", "stopped"), since we could miss some
status updates from the executor for many reasons.
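A rough sketch of the resulting transition check (the status names and the helper are illustrative, not the real runservice code):

```go
package example

// isValidTaskStatusTransition is a simplified sketch: transitions from a not
// started state directly to a finished state are accepted because status
// updates from the executor may have been missed.
func isValidTaskStatusTransition(cur, next string) bool {
	switch cur {
	case "notstarted":
		return next == "running" || next == "success" || next == "failed" || next == "stopped"
	case "running":
		return next == "success" || next == "failed" || next == "stopped"
	default:
		// already in a finished state: never go back
		return false
	}
}
```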
Rename activeExecutorTasks to scheduledExecutorTasks and don't filter out
finished tasks.
In some logic we need all the scheduled tasks, not only the unfinished
ones.
Since the executor only periodically updates its state we could end up
scheduling many more tasks than the executor ActiveTasksLimit. This will happen
in the case of many parallel tasks that can all start at the same time.
To avoid this, also consider the executor tasks saved in etcd, which represent
the real view of the scheduled tasks.
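A hedged sketch of the check (ActiveTasksLimit and where the two counts come from are assumptions about the surrounding code):

```go
package example

// canScheduleOnExecutor considers both the tasks the executor reports (which
// may be stale, since it only updates its state periodically) and the executor
// tasks saved in etcd, the real view of what has already been scheduled.
func canScheduleOnExecutor(reportedActiveTasks, etcdScheduledTasks, activeTasksLimit int) bool {
	scheduled := reportedActiveTasks
	if etcdScheduledTasks > scheduled {
		scheduled = etcdScheduledTasks
	}
	return scheduled < activeTasksLimit
}
```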
Currently, when a run is marked to stop, we stop the currently running
tasks and then their children are marked as skipped.
But tasks not depending on a stopped task (root tasks or children with a
finished parent) that are just waiting for an executor slot will still be
scheduled when a slot frees up, even if the run is marked to stop (and the
scheduler will then stop them after some seconds).
This patch marks all not started tasks as skipped when the run is marked to
stop.
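In pseudocode terms the change is roughly this (the type and status values are simplified placeholders):

```go
package example

// RunTask is a simplified placeholder for the run's task entry.
type RunTask struct {
	Status string
}

// skipNotStartedTasks marks every task that hasn't started yet as skipped when
// the run is marked to stop, so it won't be scheduled just to be stopped a few
// seconds later.
func skipNotStartedTasks(tasks map[string]*RunTask) {
	for _, t := range tasks {
		if t.Status == "notstarted" {
			t.Status = "skipped"
		}
	}
}
```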
During tests provide a zaptest Logger so all the services' output will be
redirected to the golang testing logger.
When multiple services of the same type are provided, add a unique name field
to distinguish them.
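Wiring it up in a test looks roughly like this (how the logger is handed to the services is an assumption):

```go
package example

import (
	"testing"

	"go.uber.org/zap/zaptest"
)

func TestServices(t *testing.T) {
	// zaptest.NewLogger sends every log line to t.Log, so all the services'
	// output ends up in the golang testing logger.
	logger := zaptest.NewLogger(t)
	_ = logger // passed to the service constructors (assumed signature)
}
```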
Explicitly write and flush the headers in the various services' LogHandlers.
Currently the 200 response and the other headers are automatically written
by the golang http implementation only when we send something in the body. But
if there's nothing to send (no logs written yet) the client will never receive
the headers and cannot know if the request was successful.
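A minimal sketch of the explicit write and flush (the handler type is just a stand-in for the real LogHandlers):

```go
package example

import "net/http"

type logsHandler struct{}

func (h *logsHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Write the 200 status and headers right away and flush them, so the
	// client receives them even if no log line has been written yet.
	w.WriteHeader(http.StatusOK)
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}

	// ... stream the log lines to w as they become available ...
}
```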
* Return errNotExist in readTaskLogs when the executor task doesn't exist, so
the client will receive a 404 instead of a 500 (since a generic error will be
mapped to a 500).
* Wrap the errNotExist returned by readTaskLogs with a new ErrNotExist
reporting "log doesn't exist".
etcd PR 11104 (https://github.com/etcd-io/etcd/pull/11104) implemented mutex
TryLock. Since it's only available in etcd master, just copy the relevant code
and add a TODO to remove it when updating the etcd client to a version
implementing TryLock.
Use TryLock wherever it's useful.
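Once the copied code is in place, usage would roughly look like this (the ErrLocked sentinel mirrors the upstream implementation; the locker interface is just a stand-in):

```go
package example

import (
	"context"
	"errors"
	"fmt"
)

// ErrLocked mirrors the sentinel returned by the upstream TryLock when the
// mutex is already held by another session.
var ErrLocked = errors.New("mutex: locked by another session")

// locker is a stand-in for the copied etcd mutex exposing TryLock.
type locker interface {
	TryLock(ctx context.Context) error
	Unlock(ctx context.Context) error
}

// doExclusive tries to take the lock without blocking: if someone else holds
// it, just skip the work instead of waiting.
func doExclusive(ctx context.Context, m locker, work func() error) error {
	if err := m.TryLock(ctx); err != nil {
		if errors.Is(err, ErrLocked) {
			return nil
		}
		return fmt.Errorf("failed to acquire lock: %w", err)
	}
	defer func() { _ = m.Unlock(ctx) }()

	return work()
}
```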
Rename errCh to doneCh (the error is not needed) and always send to it when one
of the HandleEvents functions exits (not only on error).
This ensures that all the goroutines are stopped even if one of them returns
without an error.
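The pattern, sketched (the HandleEvents functions are placeholders):

```go
package example

import "context"

// runEventHandlers starts every HandleEvents-like function and cancels all of
// them as soon as the first one exits, whether it failed or returned cleanly.
func runEventHandlers(ctx context.Context, handlers ...func(ctx context.Context) error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	doneCh := make(chan struct{}, len(handlers))
	for _, h := range handlers {
		go func(h func(ctx context.Context) error) {
			_ = h(ctx)
			// always signal on exit, not only on error
			doneCh <- struct{}{}
		}(h)
	}

	// the first exit cancels the context so the other goroutines stop too
	<-doneCh
}
```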
* objectstorage: remove the `types` package and move `ErrNotExist` to the base package
* objectstorage: Implement `.Is` and add the helper `IsErrNotExist` for `ErrNotExist` (sketched after this list)
* util: Rename `ErrNotFound` to `ErrNotExist`
* util: Add `IsErr*` helpers and use them in place of `errors.Is()`
* datamanager: add `ErrNoDataStatus` to report when there's no data status in the ost
* runservice/common: remove `ErrNotExist` and use the errors in the util package
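A sketch of the objectstorage side (the Path field and error text are illustrative; only the `Is`/`IsErrNotExist` shape matters):

```go
package objectstorage

import "errors"

// ErrNotExist is returned when the requested object doesn't exist.
type ErrNotExist struct {
	Path string
}

func (e *ErrNotExist) Error() string {
	return "object " + e.Path + " doesn't exist"
}

// Is makes errors.Is match any *ErrNotExist, regardless of the path.
func (e *ErrNotExist) Is(target error) bool {
	_, ok := target.(*ErrNotExist)
	return ok
}

// IsErrNotExist reports whether any error in the chain is an *ErrNotExist.
func IsErrNotExist(err error) bool {
	return errors.Is(err, &ErrNotExist{})
}
```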
Reorganize ExecutorTask to better distinguish between the task Spec and
the Status.
Split the task Spec into a sub part called ExecutorTaskSpecData that contains
the task data that doesn't have to be saved in etcd, because it can be very big
and can be regenerated starting from the run and the runconfig.
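Roughly, the resulting shape is something like this (field names are simplified; the point is only where the big, regenerable data lives):

```go
package example

// ExecutorTask is what gets stored in etcd, so it must stay small.
type ExecutorTask struct {
	ID     string
	Spec   ExecutorTaskSpec
	Status ExecutorTaskStatus
}

// ExecutorTaskSpec is the etcd-persisted part of the spec.
type ExecutorTaskSpec struct {
	ExecutorID string

	// Data is not saved in etcd (it's stripped before storing): it can be
	// very big and can always be regenerated from the run and the runconfig.
	Data *ExecutorTaskSpecData
}

// ExecutorTaskSpecData holds the potentially large, regenerable task data.
type ExecutorTaskSpecData struct {
	Environment map[string]string
	Steps       []string
}

// ExecutorTaskStatus is the execution status reported by the executor.
type ExecutorTaskStatus struct {
	Phase     string
	StartTime string
	EndTime   string
}
```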
Currently `advanceRunTasks` isn't deterministic and doesn't calculate the final
state in one call. So it could happen that `getTasksToRun` selects a task to be
executed since its parents are finished (marked as skipped in `advanceRunTasks`)
but the task isn't marked to be skipped (because `advanceRunTasks` processed
this task before its parents).
For now, fix this by doing the same task selection logic as `advanceRunTasks`
and add a TODO to make `advanceRunTasks` deterministic by processing tasks by
their level (starting from level 0).
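The TODO about level-based processing amounts to something like this (a sketch, assuming tasks carry their level):

```go
package example

import "sort"

type runTask struct {
	Level  int
	Status string
}

// advanceByLevel processes tasks ordered by level so a task's parents are
// always evaluated before the task itself, making a single pass deterministic.
func advanceByLevel(tasks []*runTask, advance func(*runTask)) {
	sort.SliceStable(tasks, func(i, j int) bool { return tasks[i].Level < tasks[j].Level })
	for _, t := range tasks {
		advance(t)
	}
}
```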
Export clients and related packages.
The main rule is to not import internal packages from exported packages.
The gateway client and related types are totally decoupled from the gateway
service (no shared types between the client and the server).
The configstore and runservice clients instead currently share many types
that are now exported (decoupling them would require duplicating a lot of
types and adding functions to convert between them; this will be done in the
future when the APIs are declared stable).
* Don't fail tasks inside the delete executor action, just delete the executor
from etcd.
* The scheduler, when detecting a task without a related executor, will mark the
task as failed and correctly set the end time of the task and its steps.
Implement runservice maintenance mode and export/import.
When the runservice is set in maintenance mode it'll start only the maintenance
and export/import handlers.
Setting maintenance mode sets a key in etcd so all the runservice instances
will detect it and enter maintenance mode. This is done asynchronously so it
could take some time (future improvements will add an api to show all the
runservice states).
Export is always available and will export the datamanager contents. Currently
only datamanager contents are exported (no logs and workspace archives).
Import is available only during maintenance: given a datamanager export, it
will import it and reset etcd to the imported state.
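A hedged sketch of the maintenance mode mechanism: the maintenance api just puts a key in etcd and every runservice instance watches it (the key name and this overall shape are assumptions):

```go
package example

import (
	"context"

	etcdclientv3 "go.etcd.io/etcd/clientv3"
)

const maintenanceKey = "maintenance" // illustrative key name

// setMaintenanceMode only writes the key; the instances react asynchronously.
func setMaintenanceMode(ctx context.Context, c *etcdclientv3.Client) error {
	_, err := c.Put(ctx, maintenanceKey, "true")
	return err
}

// waitMaintenanceMode blocks until the maintenance key is set; the caller then
// restarts serving only the maintenance and export/import handlers.
func waitMaintenanceMode(ctx context.Context, c *etcdclientv3.Client) {
	for wresp := range c.Watch(ctx, maintenanceKey) {
		for _, ev := range wresp.Events {
			if ev.Type == etcdclientv3.EventTypePut {
				return
			}
		}
	}
}
```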
Use the go sql context functions (ExecContext, QueryContext, etc.).
The context is saved inside the Tx so library users only need to pass it once
to the db.Do function.
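The resulting pattern is roughly this (db.Do and the Tx wrapper are the project's own; the exact method set shown is an assumption):

```go
package example

import (
	"context"
	"database/sql"
)

// Tx wraps *sql.Tx together with the context passed once to db.Do, so callers
// don't have to thread the context through every single query.
type Tx struct {
	ctx context.Context
	tx  *sql.Tx
}

func (t *Tx) Exec(query string, args ...interface{}) (sql.Result, error) {
	// the saved context is used for every statement
	return t.tx.ExecContext(t.ctx, query, args...)
}

func (t *Tx) Query(query string, args ...interface{}) (*sql.Rows, error) {
	return t.tx.QueryContext(t.ctx, query, args...)
}
```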
In the runservice readdb Run method we could end up with a deadlock if two of
the goroutines that call HandleEvents.* try to write to the errCh at the same
time before the errCh is read. If this happens, one of the two will be blocked
writing to the channel but the read won't happen since it'll be blocked by
wg.Wait().
Fix this by:
* using a buffered channel as large as the number of started goroutines.
* creating a new errCh at every loop (so we'll ignore later errors after the
first one).
Note: we could also use a non blocking send to avoid this situation, but we
should also start the wg.Wait before the goroutines or earlier errors could be
lost, causing another kind of hang.
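Sketched, the fixed loop looks like this (the HandleEvents functions are placeholders):

```go
package example

import (
	"context"
	"sync"
)

// runLoop sketches the fix: errCh is buffered with one slot per goroutine and
// recreated at every iteration, so a goroutine can never block on its send
// while the main loop is still sitting in wg.Wait().
func runLoop(ctx context.Context, handlers []func(ctx context.Context) error) error {
	for {
		errCh := make(chan error, len(handlers))

		var wg sync.WaitGroup
		for _, h := range handlers {
			wg.Add(1)
			go func(h func(ctx context.Context) error) {
				defer wg.Done()
				// never blocks thanks to the buffer
				errCh <- h(ctx)
			}(h)
		}
		wg.Wait()
		close(errCh)

		// keep only the first error; the rest are ignored (a fresh errCh is
		// created on the next iteration anyway)
		var err error
		for e := range errCh {
			if err == nil {
				err = e
			}
		}
		if err != nil {
			return err
		}

		select {
		case <-ctx.Done():
			return nil
		default:
		}
	}
}
```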
* Don't enable cors on all origins (*) by default.
* Handle the related web.allowedOrigins option.
* Only the gateway api should be called by a browser, so set up the cors handler
only on it (see the sketch below).
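Assuming the gorilla handlers package is used for the cors middleware (an assumption), the gateway-only setup is roughly:

```go
package example

import (
	"net/http"

	"github.com/gorilla/handlers"
	"github.com/gorilla/mux"
)

// setupGatewayAPIRouter applies the cors handler only to the gateway api
// router, using the origins from web.allowedOrigins instead of "*".
func setupGatewayAPIRouter(allowedOrigins []string) http.Handler {
	router := mux.NewRouter()
	// ... register the gateway api handlers on router ...

	return handlers.CORS(
		handlers.AllowedOrigins(allowedOrigins),
		handlers.AllowedMethods([]string{"GET", "POST", "PUT", "DELETE"}),
		handlers.AllowedHeaders([]string{"Accept", "Content-Type", "Authorization"}),
	)(router)
}
```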
Currently we delete the executor tasks only when all the run tasks'
logs/archives have been fetched. But it's better to remove a single executor
task as soon as its fetching is finished.
This could also fix possible issues on k8s: we schedule tasks but the k8s
scheduler may not schedule them if there aren't enough resources, causing a
scheduling deadlock, since we won't remove finished pods (because their related
tasks are not removed) and k8s cannot start new pods since it has no resources.
Don't put the datamanager base dirs inside the root of the ost but use a base
path.
Let's do it now before releasing since this is a breaking change that requires
moving the ost data to the new path.
Currently we aren't setting a basepath and it wasn't always correctly handled.
Fix the missing basepath handling and improve tests to also use a non empty
basepath.
The cache group field defines which cache group the run cache data belongs to.
This is needed/useful for some upcoming changes:
* Make caching work correctly for user direct runs. Since the user direct runs
all belong to the same run group (the user id), all the user direct runs would
share the same caches. To distinguish between the different caches we need to
use something in addition to the user id (the local repo uuid generated by the
direct run start command).
* Share the cache between multiple projects.