- It specifies whether the network driver can
provide connectivity to containers across hosts.
- Until now, the driver's data scope was
being overloaded with this notion.
- The driver scope information is still valid
and it defines whether the data allocation
of the network resources can be done globally
or only locally.
- With the network scope option, the user can
now force a network to be swarm scoped
regardless of the driver's data scope.
- If the network is configured as swarm scoped
and the network driver is multihost capable,
a network DB instance will be launched for it
(see the sketch below).
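A minimal sketch of that rule, with illustrative names (not the real libnetwork types):

```go
package main

import "fmt"

// network captures the two inputs to the decision described above.
type network struct {
	scope           string // data scope; "swarm" may be forced by the user option
	driverMultihost bool   // multihost capability reported by the driver
}

// needsNetworkDB mirrors the rule: launch a network DB instance only for
// a swarm scoped network whose driver is multihost capable.
func needsNetworkDB(n network) bool {
	return n.scope == "swarm" && n.driverMultihost
}

func main() {
	fmt.Println(needsNetworkDB(network{scope: "swarm", driverMultihost: true})) // true
	fmt.Println(needsNetworkDB(network{scope: "local", driverMultihost: true})) // false
}
```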
Signed-off-by: Alessandro Boch <aboch@docker.com>
- They are configuration-only networks which
can be used to supply the configuration
when creating regular networks.
- They do not get allocated and do not get plumbed.
Drivers do not get to know about them.
- They can be removed once no other network is
using them.
- When the user creates a network that references a
configuration network for its config, no
other network-specific configuration field
is accepted. The user can only specify
network operator fields (attachable, internal, ...);
see the validation sketch after this list.
- They do not need to have a driver field; that
field actually gets reset upon creation.
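A sketch of the create-time validation, with simplified, hypothetical fields:

```go
package main

import (
	"errors"
	"fmt"
)

// createRequest is a simplified, hypothetical view of a network create request.
type createRequest struct {
	ConfigFrom string            // name of the configuration-only network, if any
	DriverOpts map[string]string // network-specific configuration
	Attachable bool              // operator field, still allowed
	Internal   bool              // operator field, still allowed
}

// validate enforces the rule above: when a configuration network is
// referenced, no other network-specific configuration field is accepted.
func validate(r createRequest) error {
	if r.ConfigFrom != "" && len(r.DriverOpts) > 0 {
		return errors.New("no other configuration is accepted when a config network is used")
	}
	return nil
}

func main() {
	fmt.Println(validate(createRequest{ConfigFrom: "cfg", Attachable: true}))
	fmt.Println(validate(createRequest{ConfigFrom: "cfg", DriverOpts: map[string]string{"mtu": "1450"}}))
}
```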
Signed-off-by: Alessandro Boch <aboch@docker.com>
getNetworksFromStore reads networks and endpoint_cnt from the KV stores.
endpoint_cnt in particular is read in a for loop, once per network, which
puts a lot of stress on poorly performing KV stores.
This fix eases the load on the KV store by fetching all the endpoint_cnt
entries in a single read and operating on the result.
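A sketch of the batching idea, against a stand-in for the KV store's List operation:

```go
package main

import "fmt"

// pair mimics a key/value entry returned by the KV store.
type pair struct {
	Key   string
	Value []byte
}

// listFn stands in for a KV store List: one round trip for a whole prefix.
type listFn func(prefix string) ([]pair, error)

// loadEndpointCounts fetches every endpoint_cnt entry with a single List
// call and indexes the result in memory, instead of issuing one read per
// network inside a for loop.
func loadEndpointCounts(list listFn) (map[string][]byte, error) {
	pairs, err := list("endpoint_count/")
	if err != nil {
		return nil, err
	}
	counts := make(map[string][]byte, len(pairs))
	for _, p := range pairs {
		counts[p.Key] = p.Value // per-network lookups now hit this map, not the store
	}
	return counts, nil
}

func main() {
	list := func(prefix string) ([]pair, error) {
		return []pair{{Key: prefix + "net1", Value: []byte("3")}}, nil
	}
	counts, _ := loadEndpointCounts(list)
	fmt.Println(string(counts["endpoint_count/net1"])) // 3
}
```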
Signed-off-by: Madhu Venugopal <madhu@docker.com>
This fix cleans up logrus usage by removing the `f` suffix from
`logrus.[Error|Warn|Debug|Fatal|Panic|Info]f` calls when no formatting
string is present.
Also fix the import name to use the original project name 'logrus'
instead of 'log'.
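For illustration, the resulting convention (shown with logrus's current canonical import path):

```go
package main

import "github.com/sirupsen/logrus"

func main() {
	iface := "eth0"

	// No formatting string present: the plain variant is correct.
	logrus.Error("failed to configure interface")

	// A formatting string is present: the f-suffixed variant applies.
	logrus.Errorf("failed to configure interface %s", iface)
}
```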
Signed-off-by: Daehyeok Mun <daehyeok@gmail.com>
Avoid the whole store endpoint update logic when running in swarm mode
and the endpoint is part of a global scope network. Currently no store
update happens for global scope networks in swarm mode, but this code
path would delete the svcRecords database when the last endpoint on the
network is removed, which is not required.
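A sketch of the guard, with hypothetical names:

```go
package main

import "fmt"

// endpointCtx carries the two facts the guard needs (hypothetical shape).
type endpointCtx struct {
	swarmMode    bool   // controller is running in swarm mode
	networkScope string // scope of the endpoint's network
}

// shouldUpdateStore skips the whole store endpoint update path, including
// the svcRecords cleanup on last-endpoint removal, for global scope
// networks in swarm mode.
func shouldUpdateStore(ctx endpointCtx) bool {
	return !(ctx.swarmMode && ctx.networkScope == "global")
}

func main() {
	fmt.Println(shouldUpdateStore(endpointCtx{swarmMode: true, networkScope: "global"}))  // false
	fmt.Println(shouldUpdateStore(endpointCtx{swarmMode: false, networkScope: "global"})) // true
}
```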
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
- Earlier this was guaranteed by the ipam driver initialization,
which did not create a global address space if the
global datastore was missing. Now that ipam address spaces
can be initialized with no backing datastore, insert an
explicit check in libnetwork, which should have been there
regardless.
Signed-off-by: Alessandro Boch <aboch@docker.com>
Add a notion of service in libnetwork so that a group of endpoints
which form a service can be treated as such, and service-level
features can be added on top. Initially, as part of this PR, support
is added to assign a name to the said service, so that DNS
queries for the service name return the IPs of all the backing
endpoints, achieving DNS RR behavior on the service name.
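A sketch of the name-to-backends mapping that produces this behavior (illustrative shape, not the real service data structures):

```go
package main

import (
	"fmt"
	"net"
)

// serviceBindings sketches the mapping described above (hypothetical
// shape): a service name points at the IPs of all backing endpoints.
var serviceBindings = map[string][]net.IP{
	"web": {
		net.ParseIP("10.0.0.2"),
		net.ParseIP("10.0.0.3"),
	},
}

// resolveService answers a DNS query for a service name with every
// backing endpoint IP, which is what yields DNS RR behavior on clients.
func resolveService(name string) []net.IP {
	return serviceBindings[name]
}

func main() {
	fmt.Println(resolveService("web")) // [10.0.0.2 10.0.0.3]
}
```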
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently datastore has dependencies on various KV backends.
This is undesirable if datastore is to be used as a backend-agnostic
store management package with its cache layer. This
PR aims to achieve that.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
With the current implementation, a config reload event causes all the
datastores to reinitialize, which impacts objects with Persist=false
such as the none and host networks.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
- ... on ungraceful shutdown during network create
- Allow forceful deletion of a network
- On network delete, first mark the network for deletion
- On controller creation, first forcibly remove any network
that is marked for deletion (sketched below).
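A sketch of the boot-time cleanup, with illustrative names:

```go
package main

import "fmt"

// network is a minimal stand-in carrying the persisted deletion marker.
type network struct {
	name     string
	inDelete bool // written to the store before teardown begins
}

// bootCleanup runs at controller creation: any network still marked for
// deletion was left behind by an ungraceful shutdown and is forcibly
// removed before normal operation resumes.
func bootCleanup(store map[string]*network) {
	for name, n := range store {
		if n.inDelete {
			delete(store, name)
		}
	}
}

func main() {
	store := map[string]*network{
		"lingering": {name: "lingering", inDelete: true},
		"healthy":   {name: "healthy"},
	}
	bootCleanup(store)
	fmt.Println(len(store)) // 1: only "healthy" survives
}
```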
Signed-off-by: Alessandro Boch <aboch@docker.com>
It's safe to cache the scope value in the network object so it can be
reused for cleanup operations. The current implementation assumes the
presence of the driver during the cleanup operation. Since a remote
driver may not be present, we should not fail such cleanup operations.
Hence make use of the scope variable from the network object.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Clean up the service db for the network when the last
container on the network leaves the host. This is
because we stop watching the network after the last
container leaves, so if we kept the service db
around it would no longer be kept up to date with containers
joining and leaving on other hosts. The service
db will be populated properly when a container joins
this network at a later point in time.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
A local endpoint is known to the watch database only
during Join. But the same endpoint can be known to the
watch database as a remote endpoint well before the Join,
because CreateEndpoint updates the endpoint in the store.
So on Join, when we come to know that this is indeed a
local endpoint, remove it from the remote endpoint list and add it
to the local endpoint list.
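A sketch of the remote-to-local move on Join, with illustrative names:

```go
package main

import "fmt"

// watchDB models the watch database described above: the same endpoint ID
// can appear as remote (seen via the store after CreateEndpoint) before
// Join reveals it to be local.
type watchDB struct {
	local  map[string]bool
	remote map[string]bool
}

// onJoin moves the endpoint from the remote list to the local list once
// Join establishes that it is in fact a local endpoint.
func (w *watchDB) onJoin(epID string) {
	delete(w.remote, epID)
	w.local[epID] = true
}

func main() {
	w := &watchDB{local: map[string]bool{}, remote: map[string]bool{"ep1": true}}
	w.onJoin("ep1")
	fmt.Println(len(w.local), len(w.remote)) // 1 0
}
```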
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
On bootup, clean up all dangling local
endpoints since they are not needed anymore.
The only way they can occur is when the process
went down ungracefully after an endpoint was
created but before its join was successful.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Currently the endpoint count is maintained as part of the
network object, and the endpoint count gets updated
frequently while the rest of the network is quite stable.
Because of the frequent updates to the endpoint count, the
network object is getting marshalled and unmarshalled
frequently. This is causing a lot of churn and transient
memory usage. Fix this by creating a separate object for the
endpoint count so that only that object gets updated.
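A sketch of the split, with an illustrative shape for the new object:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// endpointCnt is the separate store object described above: only this
// small record is marshalled on every endpoint add/remove, leaving the
// larger, stable network object untouched.
type endpointCnt struct {
	NetworkID string `json:"network_id"`
	Count     uint64 `json:"count"`
}

func main() {
	ec := endpointCnt{NetworkID: "net1", Count: 1}
	b, _ := json.Marshal(ec) // the only bytes rewritten per endpoint change
	fmt.Println(string(b))   // {"network_id":"net1","count":1}
}
```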
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Today we try to increment/decrement the endpoint count
only once, even if the attempt fails with a key modified error. In case
of a key modified error we should retry to allow the operation to
succeed.
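A sketch of the retry loop, with a stand-in for the store's key modified error:

```go
package main

import (
	"errors"
	"fmt"
)

// errKeyModified stands in for the datastore's concurrent-modification error.
var errKeyModified = errors.New("key modified")

// updateWithRetry keeps retrying the update while the store reports that
// the key was modified underneath us, instead of giving up after one try.
func updateWithRetry(update func() error) error {
	for {
		err := update()
		if !errors.Is(err, errKeyModified) {
			return err // success, or a genuine failure
		}
		// key modified: the caller's closure re-reads and tries again
	}
}

func main() {
	attempts := 0
	err := updateWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errKeyModified
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```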
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Always-on watching of networks and endpoints can
affect the scalability of the cluster beyond a few nodes.
Remove proactive watching and watch only the objects
you are interested in.
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
1. Don't save localscope endpoints to localstore for now.
2. Add common function updateToStore/deleteFromStore to store KVObjects.
3. Merge `getNetworksFromGlobalStore` and `getNetworksFromLocalStore`
4. Add `n.isGlobalScoped` before `n.watchEndpoints` in `addNetwork`
5. Fix integration-tests
6. Fix test failure in drivers/remote/driver_test.go
7. Restore the network to the store if deleteNetwork failed
- Maps 1 to 1 with the container's networking stack
- It holds the container-specific network options which
were previously incorrectly owned by Endpoint.
- Sandbox creation is no longer coupled with Endpoint Join;
sandbox and endpoint now have separate lifecycles
(see the sketch after this list).
- LeaveAll is naturally replaced by Sandbox.Delete.
- some pkg and file renaming in order to have a clear
mapping between structure name and entity ("sandbox")
- Revisited hosts and resolv.conf handling
- Removed from the JoinInfo interface the capability of setting hosts and resolv.conf paths
- Changed etchosts.Build() to first write the search domains and then the nameservers
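A sketch of the decoupled lifecycle (illustrative stand-ins, not the real libnetwork API):

```go
package main

import "fmt"

// Hypothetical stand-ins, not the real libnetwork types.
type sandbox struct{ id string }
type endpoint struct{ name string }

// newSandbox creates the sandbox on its own: it maps 1 to 1 with the
// container's networking stack and needs no endpoint to exist.
func newSandbox(containerID string) *sandbox { return &sandbox{id: containerID} }

// join attaches an endpoint to an already existing sandbox, so the two
// objects keep separate lifecycles.
func (ep *endpoint) join(sb *sandbox) { fmt.Println(ep.name, "joined", sb.id) }

// delete tears down the sandbox, detaching any endpoints still attached;
// this is what replaces the old LeaveAll.
func (sb *sandbox) delete() { fmt.Println(sb.id, "deleted") }

func main() {
	sb := newSandbox("container-1") // sandbox first
	ep := &endpoint{name: "ep1"}
	ep.join(sb) // endpoint joins later, independently
	sb.delete() // Sandbox.Delete replaces LeaveAll
}
```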
Signed-off-by: Alessandro Boch <aboch@docker.com>
In this commit, AtomicPutCreate takes previous = nil to atomically create
keys that don't exist. We need a create operation that is atomic to
prevent races between multiple libnetwork instances creating the same
object (see the sketch after the list below).
Previously, we just created new KVs with an index of 0 and wrote them to
the datastore. Consul accepts this behaviour and interprets an index of 0
as non-existing, but other data backends do not.
- Add Exists() to the KV interface. SetIndex() should also modify a KV so
that it exists.
- Call SetIndex() from within the GetObject() method on the DataStore interface.
- This ensures objects have the updated values for exists and index.
- Add SetValue() to the KV interface. This allows implementers to define
their own methods to marshall and unmarshall (as bitseq and allocator do).
- Update existing users of the DataStore (endpoint, network, bitseq,
allocator, ov_network) to new interfaces.
- Fix UTs.
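A sketch of the create-only call shape against docker/libkv, the KV abstraction in use here; the helper name is illustrative:

```go
// Sketch against the docker/libkv store interface; atomicCreate is an
// illustrative helper, not part of this change.
package datastore

import "github.com/docker/libkv/store"

// atomicCreate creates the key only if it does not already exist: a nil
// previous KVPair makes AtomicPut a create-only operation, so two
// libnetwork instances cannot both create the same object.
func atomicCreate(s store.Store, key string, value []byte) error {
	_, _, err := s.AtomicPut(key, value, nil, nil)
	return err // e.g. store.ErrKeyExists when another instance won the race
}
```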
Currently we rely on watch to catch up after the init. But there could be
a small time window in which we might end up with a race condition on
network creates. By reading and populating networks during init, we
avoid any such conditions, especially for default network handling.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
The configuration format for the docker runtime is based on daemon flags,
hence adjust the libnetwork configuration to accommodate it by moving
the TOML-based configuration to the dnet tool.
Also change the controller configuration to be passed via options.
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Currently the store makes use of a static isReservedNetwork check to decide
if a network needs to be stored in the distributed store or not. But it
is better if the check is not static and is instead determined based on the
capability of the driver that backs the network.
Hence introduce a new capability mechanism through which a driver can
express its capabilities during registration, and make use of the first such
capability: Scope. This can be expanded in the future for more such cases.
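A sketch of the capability mechanism, with illustrative names:

```go
package main

import "fmt"

// Capability sketches the new mechanism: the driver declares its data
// scope when it registers, instead of the store consulting a static
// reserved-network list. Names here are illustrative.
type Capability struct {
	DataScope string // "local" or "global"
}

type registry map[string]Capability

// registerDriver records the capability the driver expressed at registration.
func (r registry) registerDriver(name string, c Capability) { r[name] = c }

// storeGlobally replaces the static isReservedNetwork check: a network
// goes to the distributed store only if its driver is globally scoped.
func (r registry) storeGlobally(driver string) bool {
	return r[driver].DataScope == "global"
}

func main() {
	r := registry{}
	r.registerDriver("overlay", Capability{DataScope: "global"})
	r.registerDriver("host", Capability{DataScope: "local"})
	fmt.Println(r.storeGlobally("overlay"), r.storeGlobally("host")) // true false
}
```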
Signed-off-by: Madhu Venugopal <madhu@docker.com>
* Removed network from being marshalled (it is part of the key anyway)
* Reworked the watch function to handle container-id on endpoints
* Included ContainerInfo in the marshalled data, since it needs to be synchronized
* Resolved multiple race issues by introducing data locks
Signed-off-by: Madhu Venugopal <madhu@docker.com>