Since we are using the shared cache with lock notify enabled we won't receive
SQLITE_BUSY errors, but we can receive SQLITE_LOCKED errors due to deadlocks or
locked tables on concurrent read and write transactions.
This patch catches these errors and retries the transaction up to maxTxRetries times.
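A minimal sketch of such a retry loop, assuming the mattn/go-sqlite3 driver; the doTx name and the maxTxRetries value here are illustrative, not the exact ones used in the codebase:

```go
package main

import (
	"database/sql"
	"errors"

	sqlite3 "github.com/mattn/go-sqlite3"
)

const maxTxRetries = 20

// doTx runs f inside a transaction, retrying when sqlite reports
// SQLITE_LOCKED (e.g. on deadlocks or locked tables with the shared
// cache and lock notify enabled).
func doTx(db *sql.DB, f func(tx *sql.Tx) error) error {
	var err error
	for i := 0; i < maxTxRetries; i++ {
		err = func() error {
			tx, err := db.Begin()
			if err != nil {
				return err
			}
			if err := f(tx); err != nil {
				tx.Rollback()
				return err
			}
			return tx.Commit()
		}()
		var serr sqlite3.Error
		if errors.As(err, &serr) && serr.Code == sqlite3.ErrLocked {
			continue // deadlock or locked table: retry the whole tx
		}
		return err
	}
	return err
}
```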
In the runservice readdb Run method we could end up in a deadlock if two of the
goroutines that call HandleEvents.* try to write to errCh at the same
time, before errCh is read. If this happens, one of the two will block on
the channel write, but the read will never happen since it's blocked by
wg.Wait().
Fix this by:
* using a buffered channel as large as the number of spawned goroutines.
* creating a new errCh at every loop iteration (so we'll ignore later errors after
the first one).
Note: we could also use a non-blocking send to avoid this situation, but we
would also have to start wg.Wait() before spawning the goroutines, or earlier
errors could be lost, causing another kind of hang.
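A sketch of the fixed loop-body shape; the handler slice and function names are illustrative:

```go
package main

import (
	"context"
	"sync"
)

// runHandlers shows the fixed shape of the readdb Run loop body: the
// error channel is created fresh on every iteration and buffered with
// one slot per goroutine, so no sender can block on a channel that is
// only read after wg.Wait() returns.
func runHandlers(ctx context.Context, handlers []func(context.Context) error) error {
	errCh := make(chan error, len(handlers))

	var wg sync.WaitGroup
	for _, h := range handlers {
		wg.Add(1)
		go func(h func(context.Context) error) {
			defer wg.Done()
			errCh <- h(ctx)
		}(h)
	}
	wg.Wait()
	close(errCh)

	// Report only the first error; the buffer guarantees the other
	// sends already completed without blocking.
	for err := range errCh {
		if err != nil {
			return err
		}
	}
	return nil
}
```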
When doing an initEtcd (new instance or etcd reset) create a new wal (that will
have a new sequence epoch) and do a checkpoint.
In this way:
* readdb will detect the epoch change and do a full resync (see the sketch below)
* we always have a data file (even if empty) that provides the last checkpointed
wal. This information can be used by readdb to resync.
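A hypothetical sketch of the resulting resync decision in readdb; the Sequence type and field names are illustrative, not the datamanager's real API:

```go
package main

// Sequence is an illustrative stand-in for a wal sequence: the epoch
// changes on every initEtcd (new instance or etcd reset).
type Sequence struct {
	Epoch    int64
	Sequence uint64
}

// needsFullResync reports whether the locally known wal sequence comes
// from a different epoch than the current one, in which case readdb
// must drop its local state and resync from the last checkpointed data
// files (which, after this change, always exist, even if empty).
func needsFullResync(local, current Sequence) bool {
	return local.Epoch != current.Epoch
}
```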
* Don't enable CORS for all origins (*) by default.
* Handle the related web.allowedOrigins options.
* Only the gateway api should be called by a browser, so set up the CORS handler
only on it (see the sketch after this list).
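A sketch of what this wiring could look like, assuming the gorilla/mux and gorilla/handlers packages; the setupGatewayRouter name and the config plumbing are illustrative:

```go
package main

import (
	"net/http"

	"github.com/gorilla/handlers"
	"github.com/gorilla/mux"
)

// setupGatewayRouter builds the gateway api router and enables CORS on
// it only when web.allowedOrigins is explicitly configured.
func setupGatewayRouter(allowedOrigins []string) http.Handler {
	router := mux.NewRouter()
	// ... register the gateway api handlers on router ...

	if len(allowedOrigins) == 0 {
		// No origins configured: leave CORS disabled instead of
		// defaulting to "*".
		return router
	}

	corsHandler := handlers.CORS(
		handlers.AllowedOrigins(allowedOrigins),
		handlers.AllowedMethods([]string{"GET", "POST", "PUT", "DELETE", "OPTIONS"}),
		handlers.AllowedHeaders([]string{"Accept", "Content-Type", "Authorization"}),
	)
	return corsHandler(router)
}
```

Only this router gets wrapped; the other services' apis are never called by a browser, so they stay without a CORS handler.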
Currently we delete the executor tasks only when the logs/archives of all the
run tasks have been fetched. It's better to remove each executor task as soon as
its own fetching is finished (see the sketch below).
This could also fix possible issues on k8s: we schedule tasks but the
k8s scheduler may not schedule them if there aren't enough resources, causing a
scheduling deadlock, since we won't remove finished pods (because their related
tasks are not removed) and k8s cannot start new pods since it has no free resources.
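A hypothetical sketch of the per-task deletion; the runTask type and the deleteExecutorTask callback are illustrative, not the scheduler's real API:

```go
package main

import "context"

// runTask carries only the illustrative fields needed for the sketch.
type runTask struct {
	ID              string
	LogsFetched     bool
	ArchivesFetched bool
}

// deleteFetchedExecutorTasks removes each executor task as soon as its
// own logs and archives have been fetched, instead of waiting for all
// of the run's tasks to be fetched.
func deleteFetchedExecutorTasks(ctx context.Context, tasks []runTask, deleteExecutorTask func(context.Context, string) error) error {
	for _, rt := range tasks {
		if rt.LogsFetched && rt.ArchivesFetched {
			if err := deleteExecutorTask(ctx, rt.ID); err != nil {
				return err
			}
		}
	}
	return nil
}
```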
Before kubernetes 1.14 nodes were labeled with the "beta.kubernetes.io/arch"
label instead of "kubernetes.io/arch".
The current k8s version (v1.15) labels nodes with both, but the beta label is
deprecated and will be removed in future versions.
At driver start, get the current k8s api version and choose the right label to
use as node selector based on it.
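A sketch of how the label could be chosen at driver start using client-go's discovery client; the function name and the version parsing are illustrative:

```go
package main

import (
	"strconv"
	"strings"

	"k8s.io/client-go/kubernetes"
)

const (
	archLabel           = "kubernetes.io/arch"
	deprecatedArchLabel = "beta.kubernetes.io/arch"
)

// archNodeSelectorLabel returns the node arch label to use as node
// selector, based on the api server version: "kubernetes.io/arch" is
// only set on nodes starting from kubernetes 1.14.
func archNodeSelectorLabel(clientset kubernetes.Interface) (string, error) {
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	major, err := strconv.Atoi(info.Major)
	if err != nil {
		return "", err
	}
	// Minor can carry a trailing "+" on some providers (e.g. "14+").
	minor, err := strconv.Atoi(strings.TrimSuffix(info.Minor, "+"))
	if err != nil {
		return "", err
	}
	if major > 1 || (major == 1 && minor >= 14) {
		return archLabel, nil
	}
	return deprecatedArchLabel, nil
}
```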
* Override the provided remotesource id with the current one (it may be missing
or set to a different id, but the remotesource ref is the way to get the
current remote source).
* When changing the remotesource name, check that a remote source with the new
name does not already exist.
* Make the new RegistrationEnabled/LoginEnabled fields in types.RemoteSource
bool pointers (since they are new fields that don't exist in previously saved
remote sources) and default them to true when null on unmarshaling, or existing
remotesources would end up with registration and login disabled (see the sketch
after this list).
* Add options to the cmd remotesource create/update to disable
registration/login.
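A minimal sketch of the unmarshal-time defaulting, with the type trimmed down to the relevant fields; the JSON tags and helper names are illustrative:

```go
package main

import "encoding/json"

// RemoteSource is trimmed down to the fields relevant to the sketch.
type RemoteSource struct {
	Name                string `json:"name"`
	RegistrationEnabled *bool  `json:"registrationEnabled"`
	LoginEnabled        *bool  `json:"loginEnabled"`
}

func boolP(b bool) *bool { return &b }

// unmarshalRemoteSource defaults the new bool-pointer fields to true
// when missing: previously saved remote sources don't have them, and a
// nil pointer must not silently disable registration/login.
func unmarshalRemoteSource(data []byte) (*RemoteSource, error) {
	var rs RemoteSource
	if err := json.Unmarshal(data, &rs); err != nil {
		return nil, err
	}
	if rs.RegistrationEnabled == nil {
		rs.RegistrationEnabled = boolP(true)
	}
	if rs.LoginEnabled == nil {
		rs.LoginEnabled = boolP(true)
	}
	return &rs, nil
}
```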
Don't put the datamanager base dirs inside the root of the ost; use a base path
instead. Let's do it now, before releasing, since this is a breaking change that
requires moving the ost data to the new path.
Currently we aren't setting a basepath, and it wasn't always correctly handled.
Fix the missing basepath handling and improve the tests to also use a non-empty
basepath (see the sketch below).
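A minimal sketch of consistent basepath handling; the ostStorage type is illustrative, not the datamanager's real API:

```go
package main

import (
	"path"
	"strings"
)

// ostStorage is an illustrative stand-in for an object storage wrapper
// that prefixes every object path with a configured base path.
type ostStorage struct {
	basePath string
}

// storagePath maps a logical object path to its full path in the ost.
func (s *ostStorage) storagePath(p string) string {
	return path.Join(s.basePath, p)
}

// objectPath maps a full ost path back to the logical object path; it
// must also behave correctly when basePath is empty.
func (s *ostStorage) objectPath(fullPath string) string {
	if s.basePath == "" {
		return fullPath
	}
	return strings.TrimPrefix(fullPath, s.basePath+"/")
}
```

Exercising both an empty and a non-empty basePath in the tests catches the kind of missing-prefix bugs this change fixes.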
Since the user direct runs all belong to the same run group (the user id), all
the user direct runs will share the same caches. To distinguish between the
different caches we need to use something in addition to the user id. In this
case we use the local repo uuid generated by the direct run start command.
The cache group field defines which cache group the run cache data belongs to.
This is needed/useful for some upcoming changes:
* Make caching work correctly for user direct runs (see the sketch below).
Since the user direct runs all belong to the same run group (the user id), all
the user direct runs would share the same caches. To distinguish between the
different caches we need something in addition to the user id (the local repo
uuid generated by the direct run start command).
* Share the cache between multiple projects.
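A hypothetical sketch of how the cache group could be derived; the Run fields and the path-style group key are illustrative assumptions:

```go
package main

import "path"

// Run carries only the illustrative fields needed for the sketch.
type Run struct {
	DirectRun     bool
	UserID        string
	ProjectID     string
	LocalRepoUUID string
}

// cacheGroup returns the group under which the run's cache data is
// stored: runs with the same cache group share caches.
func cacheGroup(r *Run) string {
	if r.DirectRun {
		// Direct runs all share the same run group (the user id), so
		// add the local repo uuid generated by the direct run start
		// command to keep per-repo caches separate.
		return path.Join(r.UserID, r.LocalRepoUUID)
	}
	return r.ProjectID
}
```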