Right now I'm just working within SQLite, but the migrations will need to be
mirrored for PostgreSQL in the future in `migrations/postgres`.
Running the migrations entails using sqlx-cli (`cargo install sqlx-cli`):
% sqlx migrate --source migrations/sqlite run
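The mirrored layout would end up looking something like this (the migration
file names here are just illustrative):

migrations/
  sqlite/
    20200101000000_initial.sql
  postgres/
    20200101000000_initial.sql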
This is now the third place I have this exact same code. It should be
refactored into an otto-exec crate and re-used between `sh`, `otto-agent`, and
the local orchestrator.
This is basically me stumbling over a shortcut that I left in the parser code
when I first started on it. Now that I am integration testing, I tripped right
into this.
Fixes #44
Steps always expect keyword arguments, but for developer convenience they can
be invoked with positional arguments.
This commit properly converts those positional arguments into keywords.
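For example, assuming the `sh` step declares `script` as its first keyword
parameter (the parameter name and keyword spelling here are just illustrative),
the positional form:

sh 'make all'

is converted as if the author had written:

sh script: 'make all'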
Run with my local integration test:
#!/usr/bin/env ruby
require 'json'
require 'net/http'

pipeline = nil

# Parse it
Net::HTTP.start('localhost', 7672) do |http|
  otto = File.read('Ottofile')
  response = http.post('/v1/parse', otto)

  if response.code.to_i != 200
    puts 'Failed to parse file'
    exit 1
  end
  pipeline = JSON.parse(response.read_body)
end

# Hit the local-orchestrator
Net::HTTP.start('localhost', 7673) do |http|
  contexts = []
  pipeline['batches'].each do |batch|
    contexts += batch['contexts']
  end

  payload = JSON.dump({
    :pipeline => pipeline['uuid'],
    :contexts => contexts,
  })
  puts payload

  res = http.post('/v1/run', payload)
  if res.code.to_i != 200
    puts "Failed to orchestrate! #{res.code} #{res.read_body}"
    exit 1
  end
  puts 'Enqueued'
end
Fixes #42
YAML is a bonkers format with a lot of leeway for bugs; JSON is dumb but very
predictable. Since these are machine-read files, they can be JSON. The
human-editable step library manifests will remain YAML.
This will lay the groundwork for more complex pipeline verbs like fanout and
parallel.
This also changes the invocation file format for the agent to basically:
---
pipeline: 'some-uuid'
steps:
  - symbol: sh
    context: 'some-uuid'
    parameters:
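Since the invocation file is machine-read too, that same sketch rendered as
JSON would be roughly (parameters left empty as in the sketch above):

{
  "pipeline": "some-uuid",
  "steps": [
    {
      "symbol": "sh",
      "context": "some-uuid",
      "parameters": {}
    }
  ]
}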
This means that the agents will be a little more stupid when it comes to the
overall pipeline structure.
This is the manifestation of @steven-terrana's feedback. Basically the pipeline
is linear from top to bottom, and the author can put un-staged steps anywhere
along the path that they want. This should make the simplest pipelines pretty
dang simple.
These steps will have to be rendered somehow by a frontend at some point, but
that's a problem for later! 😸
This starts to make room for multiple root-level verbs in the pipeline, but also
for stages to be optional, e.g.
pipeline {
  steps {
    sh 'make all'
    archive 'build/*.tar.gz'
  }
}
(the above doesn't parse yet)
Based on a discussion about the merits of the stage block with @steven-terrana.
None of the other blocks have parentheses and take arguments the way
stage(name) does. To make all blocks behave a bit more in line with one
another, this commit changes the naming of stages to look more like a
generalized block setting.
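For instance, where a named stage would previously have been written as:

stage('Build') {
  steps {
    sh 'make all'
  }
}

it would now read something like (the `name = ...` spelling here is my sketch
of the generalized settings form):

stage {
  name = 'Build'
  steps {
    sh 'make all'
  }
}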
This presages some macro work that's coming in the following commits.
In the context of the cache {} block, we don't want to use `from` to copy the
cache directives from another stage entirely; rather, we want to "use" the
named cache entry.
I think it makes sense to allow caches to be pulled individually. Imagine a
case where a compilation stage creates multiple platform-specific directories
or binaries. In such cases, a Windows-specific test stage would only want to
grab the cached entries for Windows, etc.
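As a rough sketch of how that could read (the entry name and the `use` spelling
are hypothetical):

stage {
  name = 'Test Windows'
  cache {
    use 'windows-binaries'
  }
}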
This is still just a sketch of what's in my brain. The internal representation
is likely going to have to evolve a lot more once the parser gets further
along.
This requires some shenanigans with the headers of the response, since
Feathers/Express likes to augment the Content-Type with the charset field. The
charset field throws dredd off for some silly reason, so this might need
trimming in some Express middleware later.
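A minimal sketch of what that middleware could look like (assuming we just chop
any charset suffix off outgoing Content-Type headers):

app.use((req, res, next) => {
  const setHeader = res.setHeader.bind(res);
  res.setHeader = (name, value) => {
    if (name.toLowerCase() === 'content-type' && typeof value === 'string') {
      // dredd trips over '; charset=utf-8', so only keep the media type
      value = value.split(';')[0];
    }
    return setHeader(name, value);
  };
  next();
});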