= Otto

**link:https://webchat.freenode.net/?channels=#otto[Chat on IRC]**

Meet Otto, your friendly continuous delivery companion.

Otto is a robust distributed system for scalable continuous integration and
delivery. To accomplish this, Otto is multi-process oriented and distributed by
default: all system interactions must occur across process boundaries, even
when executing on the same logical host. This document outlines the high-level
architecture but omits specific inter-process communication formats, protocols,
and dependencies.

Otto does not aim to be the center of the entire continuous delivery process,
but rather seeks to interoperate seamlessly with the various components
which make CD work for you.

== Status

Otto is currently not usable.

The various components are in different states of development; please consult
the README document in each subfolder for its current purpose and status.

== Development

Much of Otto is built in link:https://www.rust-lang.org/[Rust], and
service-to-service communication is done with JSON over HTTP.

To contribute to the project, please join us on IRC or open pull
requests and issues.
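
As a sketch of what JSON-over-HTTP service-to-service traffic might look like,
the following Rust snippet builds a hypothetical message envelope by hand. The
`Envelope` type and its fields are illustrative assumptions, not Otto's actual
wire format, and a real service would serialize with a JSON library such as
serde_json rather than manual string formatting:

```rust
// Hypothetical sketch only: Envelope and its fields are NOT Otto's real
// wire format, just an illustration of a JSON message between services.
struct Envelope {
    service: String,
    kind: String,
    payload: String,
}

impl Envelope {
    /// Serialize by hand to keep this sketch dependency-free; a real
    /// service would use a JSON library such as serde_json instead.
    fn to_json(&self) -> String {
        format!(
            "{{\"service\":\"{}\",\"kind\":\"{}\",\"payload\":\"{}\"}}",
            self.service, self.kind, self.payload
        )
    }
}

fn main() {
    let msg = Envelope {
        service: "orchestrator".into(),
        kind: "heartbeat".into(),
        payload: "ok".into(),
    };
    // The resulting string would be POSTed to a peer service over HTTP.
    println!("{}", msg.to_json());
}
```

Keeping the envelope a plain, explicitly-typed structure is one way to honor
the "deterministic ahead-of-time" goal below: every message a service can emit
is known from its type definitions.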

== Problems to Solve

Below is an incomplete list of the problems Otto aims to solve:

* **Existing tools do not model the entire continuous delivery process.** When
control is delegated to an external tool such as Puppet, or to an external
provider such as AWS ECS, there can be a "black hole" in the deployment: a
point where the orchestration tool (Otto) loses sight of what is happening.
* **Expecting "one single instance" to be the hub is unrealistic.** Many
deployment processes have "development" operated components, and "ops"
operated components, with little to no automated hand-off of control between
the two.
* **Mixing of management and execution contexts causes a myriad of issues.**
Many tools allow the management/orchestrator process to run user-defined
workloads. This allows breaches of isolation between user-defined workloads
and administrator configuration and data.
* **Non-deterministic runtime behavior adds instability.** Without being able to
"explain" a set of operations which should occur before runtime, it is
impossible to determine whether or not a given delivery pipeline is correctly
constructed.
* **Blending runtime data and logic with the process definition confuses users.**
Related to the problem above: providing runtime data about the process in a
manner which is only accessible within the delivery process itself overly
complicates the parsing and execution of a defined continuous delivery process.
* **Modeling of the delivery process is blurred with build tooling.** Without a
clear separation of concerns between build tools such as GNU/Make, Maven, or
Gradle and the continuous delivery process definition, logic invariably bleeds
between the two.
* **Opinionated platform requirements prevent easy usage across different
environments.** Forcing a reliance on containers, or on a runtime like the Java
Virtual Machine, results in burdensome system configuration before the real
work of defining a continuous delivery process can begin. Without gracefully
degrading in functionality based on the system requirements which are present,
users are forced to hack around the platform requirements, or to spend
significant time worrying about and maintaining prerequisites.
* **Many tools are difficult to configure by default.** For most application
stacks, there are common conventions which can be easily prescribed for the
80% use-case. Ruby on Rails applications will almost all look identical, and
should require zero additional configuration.
* **Secrets and credentials can be inadvertently leaked.** Many tools provide
some ability to configure secrets for the continuous delivery process, but
expose them to the process itself in insecure ways, which allow for leakage.
* **Extensibility must not come at the expense of system integrity.** Systems
which allow for administrator, or user-injected code at runtime cannot avoid
system reliability and security problems. Extensibility is an important
characteristic to support, but secondary to system integrity.
* **Usage cannot grow across an organization without user-defined extension.**
The operators of the system will not be able to provide for every eventual
requirement from users. Some mechanism for extending or consolidating aspects
of a continuous delivery process must exist.
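
The separation of management and execution contexts described above can be
illustrated with a minimal sketch: the orchestrator spawns user-defined work in
a separate OS process and observes only its exit status, never running the
workload in its own address space. The `run_isolated` helper is a hypothetical
name for this sketch, not part of Otto:

```rust
// Sketch under the assumption that user-defined workloads always run in
// their own OS process; run_isolated is an illustrative name, not Otto API.
use std::process::Command;

/// Run a user-defined step in a child process so a misbehaving workload
/// cannot touch the orchestrator's memory, credentials, or configuration.
fn run_isolated(step: &str) -> std::io::Result<bool> {
    let status = Command::new("sh").arg("-c").arg(step).status()?;
    Ok(status.success())
}

fn main() -> std::io::Result<()> {
    // The orchestrator only sees the exit status across the process boundary.
    let ok = run_isolated("echo building")?;
    println!("step succeeded: {}", ok);
    Ok(())
}
```

The process boundary is what enforces the isolation: even a malicious step can
only report success or failure back to the management context.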

== Modeling Continuous Delivery

Some characteristics of a continuous delivery process model which Otto must
ensure:

* **Deterministic ahead-of-time.** Without executing the full process, it must
be possible to "explain" what will happen.
* **External interactions must be model-able.** Deferring control to an
external system must be accounted for in a user-defined model: for example,
submitting a deployment request and then waiting for some external condition
to be met, indicating that the deployment has completed and the service is now
online. This should support both an evented model, wherein the external service
"calls back", and a polling model, wherein the process waits until some
external condition can be verified.
* **Branching logic.** A user must be able to easily define branching logic;
for example, a web application's delivery may differ depending on whether it
is a production or a staging deployment.
* **Describe, though not fully, environments.** All applications have at least
some concept of environments, whether it is a web application's concept of
staging/production, or a compiled asset's concept of debug/release builds.
* **Safe credentials access.** Credentials should not be exposed in a way
which might allow user-defined code to inadvertently leak them.
* **Caching data between runs** must be describable in some form or fashion.
Take Maven projects as an example: successive runs of `mvn` on a cold cache
will result in significant downloads of data, whereas caching `~/.m2` yields
far more acceptable performance.
* **Refactor/extensibility support in-repo or externally**, depending on
whether the source repository is a monorepo or something more modular. Common
aspects of the process must be able to be templatized/parameterized in some
form, and shared within the repository or via an external repository.
* **Scale down to near zero-configuration.** The simplest possible model
should simply define which platform's conventions to use. With Rails
applications, many are functionally the same in their use of Bundler, Rake,
and RSpec.
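
Several of the characteristics above (determinism ahead-of-time, branching
logic, environments) can be sketched together as a pipeline model that can be
"explained" without being executed. All type and function names below are
illustrative assumptions for this sketch, not Otto's actual model:

```rust
// Illustrative sketch only: these types are NOT Otto's real pipeline model.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Environment {
    Staging,
    Production,
}

/// A step in a user-defined delivery process.
enum Step {
    /// A command the process would run.
    Run(&'static str),
    /// Branching logic: steps that apply only to one environment.
    WhenEnv(Environment, Vec<Step>),
}

/// Walk the model and list what *would* run, without executing anything.
/// This is the "explain" operation: deterministic ahead of runtime.
fn explain(steps: &[Step], env: Environment, out: &mut Vec<String>) {
    for step in steps {
        match step {
            Step::Run(cmd) => out.push(cmd.to_string()),
            Step::WhenEnv(e, inner) => {
                if *e == env {
                    explain(inner, env, out);
                }
            }
        }
    }
}

fn main() {
    let pipeline = vec![
        Step::Run("make test"),
        Step::WhenEnv(Environment::Production, vec![Step::Run("deploy --prod")]),
    ];
    let mut plan = Vec::new();
    explain(&pipeline, Environment::Staging, &mut plan);
    // For a staging run, only "make test" appears in the explained plan.
    println!("{:?}", plan);
}
```

Because the model is pure data with no user-injected code, the same structure
can be validated, explained, and diffed before a single step executes.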