Implement a new error handling library based on pkg/errors. It saves a
stack trace on wrapping and exports functions to also add stack saving
to external errors.
It also implements custom zerolog error formatting that avoids excessive
verbosity: for every error in the chain only the file:line is printed,
instead of a full stack trace.
* Add a --detailed-errors option to print errors with their full chain
* Wrap all error returns. Use errors.WithStack to wrap without adding a
new message and errors.Wrap[f] to add one (a minimal sketch follows this
list).
* Add the golangci-lint wrapcheck linter to check that errors from
external packages are wrapped. It doesn't check that errors from
internal packages are wrapped; since we want to enforce that case too,
we'll have to find another way to cover it.
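A minimal sketch of the intended wrapping pattern, using the pkg/errors-style
API described above (the import path of the new internal library isn't shown,
so github.com/pkg/errors is used here as a stand-in):

```go
package config

import (
	"os"

	"github.com/pkg/errors"
)

func readConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// error from an external package: add a message and save the stack
		return nil, errors.Wrapf(err, "failed to read config file %q", path)
	}
	return data, nil
}

func loadConfig(path string) ([]byte, error) {
	data, err := readConfig(path)
	if err != nil {
		// the error already carries a message: only attach the stack
		return nil, errors.WithStack(err)
	}
	return data, nil
}
```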
Replace zap with zerolog.
zerolog has a cleaner interface and can be easily configured with custom
error chain printing using a new error handling library that will be
implemented in another PR.
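A minimal zerolog setup sketch; the chain formatting itself lives in the error
handling library, so only the hook point (zerolog.ErrorMarshalFunc) is shown
here and the %+v formatting is a placeholder assumption:

```go
package logging

import (
	"fmt"
	"os"

	"github.com/rs/zerolog"
)

func NewLogger() zerolog.Logger {
	// format logged errors through a custom marshaler instead of the plain
	// err.Error() string; the real implementation would print the error
	// chain as file:line entries
	zerolog.ErrorMarshalFunc = func(err error) interface{} {
		return fmt.Sprintf("%+v", err)
	}

	return zerolog.New(zerolog.ConsoleWriter{Out: os.Stderr}).
		With().Timestamp().Logger()
}
```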
* Create an APIError type that should only be used for errors returned
by the api. It wraps an error and can have different Kinds plus an
optional code and message (a sketch of this shape follows the list).
* The http handlers will use the first APIError available in the
error chain and generate a json response body containing the code and
the user message. The wrapped error is internal and is not sent in the
response.
If no APIError is available in the chain, a generic internal server
error will be returned.
* Add a RemoteError type that will be created from remote service calls
(runservice, configstore). It's similar to APIError but kept as a
different type so it won't propagate to the caller response, and it
doesn't contain any wrapped error.
* Gateway: when we call a remote service, by default, we'll create an
APIError using the RemoteError Kind (omitting the code and the message,
which usually must not be propagated).
This is done for all remote service calls as a starting point; in the
future, if this default behavior isn't right for a specific remote
service call, a new api error with a different kind and/or augmented
with the calling service's error codes and user messages could be
created.
* datamanager: use a dedicated ErrNotExist (converting the objectstorage
ErrNotExist).
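A hedged sketch of the APIError shape and of how an http handler could map it
to a json response; the kind names, fields and status mapping are illustrative,
not the actual agola types:

```go
package apierror

import (
	"encoding/json"
	"errors"
	"net/http"
)

type ErrorKind int

const (
	KindBadRequest ErrorKind = iota
	KindNotExist
	KindInternal
)

// APIError wraps an internal error and carries the data exposed to api callers.
type APIError struct {
	Kind    ErrorKind
	Code    string // optional machine readable code
	Message string // optional user facing message
	err     error  // wrapped internal error, never sent in the response
}

func (e *APIError) Error() string {
	if e.err != nil {
		return e.err.Error()
	}
	return e.Message
}

func (e *APIError) Unwrap() error { return e.err }

func writeError(w http.ResponseWriter, err error) {
	// use the first APIError in the chain; otherwise return a generic 500
	var apiErr *APIError
	if !errors.As(err, &apiErr) {
		http.Error(w, "internal server error", http.StatusInternalServerError)
		return
	}
	status := http.StatusBadRequest
	if apiErr.Kind == KindInternal {
		status = http.StatusInternalServerError
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(map[string]string{
		"code":    apiErr.Code,
		"message": apiErr.Message,
	})
}
```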
Replace https://github.com/satori/go.uuid with the maintained version at
https://github.com/gofrs/uuid
Since the new version of uuid.NewV4 returns an error when it fails to read
from the random source reader, we use uuid.Must to panic on error, since
it's considered unrecoverable.
In the future, if needed, we could handle the error instead of panicking.
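A minimal sketch of the resulting gofrs/uuid usage:

```go
package util

import "github.com/gofrs/uuid"

func NewID() string {
	// uuid.NewV4 can fail if the random source can't be read; treat that as
	// unrecoverable and panic via uuid.Must
	return uuid.Must(uuid.NewV4()).String()
}
```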
taskUpdater will be called serially and won't block. It'll start goroutines
for executing the task and for sending the task state to the scheduler.
executeTask will just start task execution; all the logic for deciding
whether to start a task is moved inside taskUpdater.
In this way we avoid concurrency issues when handling the same executorTask
in parallel.
During tests provide a zaptest Logger so all services output will be redirected
to the Go testing logger.
When multiple services of the same type are provided, add a unique name field
to distinguish them.
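A minimal sketch of providing a zaptest logger in a test; the "runservice" name
and the use of Named to distinguish instances are illustrative:

```go
package service_test

import (
	"testing"

	"go.uber.org/zap/zaptest"
)

func TestRunService(t *testing.T) {
	// the logger writes through t.Log, so service output is attributed to
	// this test and shown only on failure or with go test -v
	logger := zaptest.NewLogger(t)

	// a per-instance name distinguishes multiple services of the same type
	logger = logger.Named("runservice")
	logger.Info("starting runservice")
}
```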
Instead of the current hack of copying the agola toolbox into the host tmp
dir (always done, but only needed when running the executor inside a docker
container), which has several issues (like tmp file removal by
tmpwatch/systemd-tmpfiles), use a solution similar to the k8s driver: for every
pod create a volume containing the agola-toolbox and remove it at pod removal.
We could also use a single "global" volume, but then we'd have to handle cases
like volume removal (i.e. a docker volume prune command). So for now just
create a dedicated per pod volume.
Explicitly write and flush the headers in the various services' LogHandlers.
Currently the 200 response and the other headers are automatically written
by the Go http implementation only when we send something in the body. But if
there's nothing to send (no logs written yet) the client will never receive the
headers and cannot know whether the request was successful.
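A minimal sketch of the fix in a log handler:

```go
package api

import "net/http"

func logsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain")

	// explicitly commit the status line and headers now instead of waiting
	// for the first body write...
	w.WriteHeader(http.StatusOK)
	// ...and flush them so the client knows the request succeeded even if
	// no log data has been written yet
	if f, ok := w.(http.Flusher); ok {
		f.Flush()
	}

	// log streaming would follow here
}
```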
* Add a generic container volume option that currently only supports tmpfs. In
the future it could be expanded to host volumes or other kinds of volumes (if
supported by the underlying executor).
* Implement creation of tmpfs volumes in the docker and k8s drivers (a
docker-side sketch follows the list).
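A hedged sketch of the docker side, using the Docker SDK mount types; the
generic volume option on the agola config side is not shown:

```go
package driver

import (
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/mount"
)

// tmpfsHostConfig mounts a tmpfs of the given size at target inside the container.
func tmpfsHostConfig(target string, sizeBytes int64) *container.HostConfig {
	return &container.HostConfig{
		Mounts: []mount.Mount{
			{
				Type:   mount.TypeTmpfs,
				Target: target,
				TmpfsOptions: &mount.TmpfsOptions{
					SizeBytes: sizeBytes,
				},
			},
		},
	}
}
```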
Reorganize ExecutorTask to better distinguish between the task Spec and
the Status.
Split the task Spec into a sub part called ExecutorTaskSpecData that contains
task data that doesn't have to be saved in etcd, because it can be very big
and can be regenerated from the run and the runconfig.
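A hedged sketch of the resulting layout; the field names are illustrative, not
the actual agola types:

```go
package types

type ExecutorTask struct {
	ID     string
	Spec   ExecutorTaskSpec
	Status ExecutorTaskStatus
}

type ExecutorTaskSpec struct {
	// ExecutorTaskSpecData is not persisted to etcd: it can be very big and
	// can be regenerated from the run and the runconfig
	ExecutorTaskSpecData `json:"-"`

	ExecutorID string
}

type ExecutorTaskSpecData struct {
	Steps       []interface{}
	Environment map[string]string
}

type ExecutorTaskStatus struct {
	Phase string
}
```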
Currently, if no shell is defined in the task or in the step, the executor
uses a hardcoded default shell.
This would change run behavior if we later added an option to globally set the
agola default shell.
To avoid this, set the task shell to the default shell inside the runconfig
when it's empty, so future executions will always use this value.
Defining an option to override the user for a run step is too fine grained
and, for consistency, would require doing the same for the other steps
(clone, *workspace etc...).
Remove it, since it's probably enough to define it at the task level.
In c1ff28ef9f we exported various types. Unfortunately the types used by cmd
variable create/update are the wrong ones and marshalling fails. Fix it by
using the right types. In the future these internal types should be exported.
Since the current logic is to use the first available private ip address as
the advertised address, we have to listen on the wildcard address, because a
different host provided in web.ListenAddress would make the executor
unreachable.
In the future, improve this to let the user manually define the bind and the
advertised address (perhaps using go-sockaddr templates like consul does) to
also support NAT between the schedulers and the executors.
Older versions of docker don't support the exec api Env and WorkingDir
options. Support these versions by doing the same thing we already do with
the k8s driver: use the `toolbox exec` command, which will set the provided
Env, change the cwd to the WorkingDir and then exec the wanted command.
Export clients and related packages.
The main rule is to not import internal packages from exported packages.
The gateway client and related types are totally decoupled from the gateway
service (no shared types between the client and the server).
Instead, the configstore and runservice clients currently share many types
that are now exported (decoupling them would require duplicating a lot of
types and writing functions to convert between them; this will be done in
the future when the APIs are declared stable).
There was a typo, so we weren't setting the task endTime when the setup step
failed.
Also unify all the logic to just use `et` (instead of a mix of `et` and
`rt.et`).
Before kubernetes 1.14, nodes were labeled with the "beta.kubernetes.io/arch"
label instead of "kubernetes.io/arch".
The current k8s version (v1.15) labels nodes with both labels, but the beta
one is deprecated and will be removed in future versions.
At driver start, get the current k8s api version and choose the right label to
use as node selector based on it (a sketch follows).
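A hedged sketch of the label choice using the server version reported by
client-go; the version parsing is simplified:

```go
package driver

import (
	"strconv"
	"strings"

	"k8s.io/client-go/kubernetes"
)

func archNodeSelectorLabel(client kubernetes.Interface) (string, error) {
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	// the minor version may carry a trailing "+" on some distributions
	minor, err := strconv.Atoi(strings.TrimSuffix(info.Minor, "+"))
	if err != nil {
		return "", err
	}
	if info.Major == "1" && minor < 14 {
		// older clusters only label nodes with the beta label
		return "beta.kubernetes.io/arch", nil
	}
	return "kubernetes.io/arch", nil
}
```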
* add a config option allowPrivilegedContainers
* fail task setup if privileged containers are requested but they aren't
allowed (a sketch follows the list)
* report to the runservice whether privileged containers are allowed
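A hedged sketch of the setup-time check; the config field name comes from the
list above, the rest is illustrative:

```go
package executor

import "errors"

type containerSpec struct {
	Privileged bool
}

func checkPrivilegedContainers(allowPrivilegedContainers bool, containers []containerSpec) error {
	for _, c := range containers {
		if c.Privileged && !allowPrivilegedContainers {
			// fail task setup instead of silently dropping the privileged flag
			return errors.New("privileged containers are not allowed by this executor")
		}
	}
	return nil
}
```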
* don't remove the runningTask when executeTask finishes, just mark the
runningTask as not executing
* add a loop to periodically update the executorTask status and remove the
runningTask if it's not executing and the status update was successful (a
sketch follows the list)
* remove the runningTask when it disappears from the runservice
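A hedged sketch of the periodic status update loop described above; types and
names are illustrative, not the actual executor code:

```go
package executor

import (
	"context"
	"sync"
	"time"
)

type runningTask struct {
	executing bool
}

type executor struct {
	mu           sync.Mutex
	runningTasks map[string]*runningTask
}

// tasksUpdaterLoop periodically reports every runningTask status and drops
// the tasks that finished executing only after the report succeeded, so no
// final status update is ever lost.
func (e *executor) tasksUpdaterLoop(ctx context.Context, report func(id string) error) {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			e.mu.Lock()
			for id, rt := range e.runningTasks {
				if err := report(id); err != nil {
					continue
				}
				if !rt.executing {
					delete(e.runningTasks, id)
				}
			}
			e.mu.Unlock()
		}
	}
}
```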