Right now I'm just working within SQLite, but the migrations will need to be
mirrored for PostgreSQL in the future in `migrations/postgres`.
Running this entails using the sqlx-cli (`cargo install sqlx-cli`):

```shell
% sqlx migrate --source migrations/sqlite run
```
There was a race condition where sometimes the agent would set up the control
port so fast that a git clone into `.` would fail because the directory wasn't
empty. The better solution is to put the control socket outside the workspace,
in a temp directory.
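A minimal sketch of that approach (in Ruby, in the style of my local test scripts; the path names are hypothetical, and the real agent does this in Rust):

```ruby
#!/usr/bin/env ruby
require 'tmpdir'
require 'socket'

# Create the control socket in a throwaway directory *outside* the
# workspace, so the workspace stays empty for the git clone into `.`.
Dir.mktmpdir('otto-control') do |dir|
  sock_path = File.join(dir, 'control.sock')
  server = UNIXServer.new(sock_path)

  # The workspace itself now contains nothing, so cloning succeeds
  # regardless of how fast the control socket came up.
  puts sock_path
  server.close
end
```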
This is now the third place I have this exact same code. It should be
refactored into an otto-exec crate and re-used between `sh`, `otto-agent`, and
the local orchestrator.
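For illustration only (the actual crate would be Rust, and the duplicated code may differ in detail), the shape of the shared helper is roughly: spawn the step's command, capture its output, and surface the exit status. A Ruby sketch under those assumptions:

```ruby
require 'open3'

# Hypothetical shared "exec" helper: run a command, return its captured
# output streams and whether it exited successfully.
def run_step(*cmd)
  stdout, stderr, status = Open3.capture3(*cmd)
  { stdout: stdout, stderr: stderr, success: status.success? }
end

result = run_step('echo', 'hello')
puts result[:stdout]
```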
Basically this requires:
* `make run` to launch the service mesh
* `./scripts/local-run`
And then you wait, because log output from the agent's step invocation is not
yet streaming 😆
This is basically me stumbling over a shortcut I left in the parser code when I
first started on it. Now that I'm integration testing, I tripped right into it.
Fixes #44
I first set this up without the `--mirror`-like functionality, but unfortunately
I couldn't get my fetch updates to work properly. A future optimization would be
to remove the mirror refspecs and instead update the refspecs whenever a new
branch is fetched.
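For reference, the trade-off is between one mirror-style refspec that force-fetches everything and per-branch refspecs added as branches are discovered. A Ruby sketch of building each form (the branch name is just an example):

```ruby
# Mirror-style refspec: fetch every ref, force-updating local refs.
# This is what `git clone --mirror` configures.
MIRROR_REFSPEC = '+refs/*:refs/*'.freeze

# Per-branch refspec, which the future optimization would add lazily
# whenever a new branch is first fetched.
def branch_refspec(branch)
  "+refs/heads/#{branch}:refs/remotes/origin/#{branch}"
end

puts MIRROR_REFSPEC
puts branch_refspec('main')
```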
Fixes #40
Steps always expect keyword arguments, but for developer convenience they can
be given positional arguments.
This commit properly converts those positional arguments.
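The conversion amounts to zipping positional values against the step's declared parameter order. A Ruby sketch of that idea (the parameter names are hypothetical; the real logic lives in the Rust parser):

```ruby
# Given a step's declared parameter order, map positional arguments
# onto keyword arguments. Already-keyword input passes through as-is.
def positionals_to_keywords(param_names, args)
  return args if args.is_a?(Hash)
  param_names.take(args.length).zip(args).to_h
end

puts positionals_to_keywords(['script'], ['pwd']).inspect
# → {"script"=>"pwd"}
```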
Run with my local integration test:

```ruby
#!/usr/bin/env ruby
require 'json'
require 'net/http'

pipeline = nil

# Parse it
Net::HTTP.start('localhost', 7672) do |http|
  otto = File.read('Ottofile')
  response = http.post('/v1/parse', otto)

  if response.code.to_i != 200
    puts 'Failed to parse file'
    exit 1
  end
  pipeline = JSON.parse(response.read_body)
end

# Hit the local-orchestrator
Net::HTTP.start('localhost', 7673) do |http|
  contexts = []
  pipeline['batches'].each do |batch|
    contexts += batch['contexts']
  end

  payload = JSON.dump({
    :pipeline => pipeline['uuid'],
    :contexts => contexts,
  })
  puts payload

  res = http.post('/v1/run', payload)
  if res.code.to_i != 200
    puts "Failed to orchestrate! #{res.code} #{res.read_body}"
    exit 1
  end
  puts 'Enqueued'
end
```
Fixes #42
This does mean that the `git` step has to re-clone every time and all work must
be repeated, but I think there are other ways to solve those problems than
sharing workspace directories between pipeline runs.
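The fresh-workspace-per-run idea can be sketched in Ruby (the naming is hypothetical; in the agent this is Rust): every run gets its own empty directory, which is removed when the run finishes, so nothing leaks between pipeline runs.

```ruby
require 'tmpdir'

# Hypothetical: each pipeline run gets a fresh, empty workspace, at the
# cost of the `git` step re-cloning every time.
def with_fresh_workspace(run_uuid)
  Dir.mktmpdir("otto-run-#{run_uuid}") do |workspace|
    Dir.chdir(workspace) { yield workspace }
  end
  # The directory (and everything the run wrote) is gone here.
end

with_fresh_workspace('demo') do |ws|
  puts ws  # prints the temporary workspace path
end
```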
Validated with the following invocation file:

```json
{
  "pipeline": "2265b5d0-1f70-46de-bf50-f1050e9fac9a",
  "steps": [
    {
      "symbol": "sh",
      "uuid": "5599cffb-f23a-4e0f-a0b9-f74654641b2b",
      "context": "3ce1f6fb-79ca-4564-a47e-98265f53ef7f",
      "parameters": {
        "script": "pwd"
      }
    }
  ]
}
```
And invoked in `tmp/` via:

```shell
STEPS_DIR=$PWD ../target/debug/otto-agent ../test-steps.json
```
Fixes #33