The previous configuration totaled less than a minute, so I'm raising the values. Simulating gives an in-between overall duration:
```
info: Attempt 1/25 done immediately
info: Attempt 2/25 after 3000 ms
info: Attempt 3/25 after 3300.0000000000005 ms
info: Attempt 4/25 after 3630.000000000001 ms
info: Attempt 5/25 after 3993.0000000000014 ms
info: Attempt 6/25 after 4392.300000000002 ms
info: Attempt 7/25 after 4831.5300000000025 ms
info: Attempt 8/25 after 5314.683000000003 ms
info: Attempt 9/25 after 5846.151300000003 ms
info: Attempt 10/25 after 6430.766430000004 ms
info: Attempt 11/25 after 7073.8430730000055 ms
info: Attempt 12/25 after 7781.227380300006 ms
info: Attempt 13/25 after 8559.350118330007 ms
info: Attempt 14/25 after 9415.285130163009 ms
info: Attempt 15/25 after 10356.81364317931 ms
info: Attempt 16/25 after 11392.495007497242 ms
info: Attempt 17/25 after 12531.744508246968 ms
info: Attempt 18/25 after 13784.918959071665 ms
info: Attempt 19/25 after 15163.410854978833 ms
info: Attempt 20/25 after 16679.751940476715 ms
info: Attempt 21/25 after 18347.72713452439 ms
info: Attempt 22/25 after 20182.49984797683 ms
info: Attempt 23/25 after 22200.749832774512 ms
info: Attempt 24/25 after 24420.824816051965 ms
info: Attempt 25/25 after 26862.907297657162 ms
info: *** Total time: 265491.9802742286 ms ***
info: **=>Total time: 4.4248663379038105 minutes ***
```
```
info: Attempt 1/10 done immediately
info: Attempt 2/10 after 2000 ms
info: Attempt 3/10 after 2500 ms
info: Attempt 4/10 after 3125 ms
info: Attempt 5/10 after 3906.25 ms
info: Attempt 6/10 after 4882.8125 ms
info: Attempt 7/10 after 6103.515625 ms
info: Attempt 8/10 after 7629.39453125 ms
info: Attempt 9/10 after 9536.7431640625 ms
info: Attempt 10/10 after 11920.928955078125 ms
info: *** Total time: 51604.644775390625 ms ***
info: **=>Total time: 0.8600774129231771 minutes ***
```
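The totals above follow a simple geometric series: attempt 1 runs immediately, and attempt *n* waits `initialDelay * factor^(n - 2)` ms (3000 ms with factor 1.1 in the first run, 2000 ms with factor 1.25 in the second). A minimal sketch, with assumed parameter names rather than the actual retry library's API:

```javascript
// Sketch of the backoff schedule behind the logs above (parameter names are
// assumptions): attempt 1 runs immediately, each later attempt waits the
// previous delay multiplied by `factor`.
function totalBackoffMs(attempts, initialDelay, factor) {
  let total = 0;
  let delay = initialDelay;
  for (let attempt = 2; attempt <= attempts; attempt++) {
    total += delay;
    delay *= factor;
  }
  return total;
}

totalBackoffMs(10, 2000, 1.25); // ≈ 51604.64 ms, matching the second run
totalBackoffMs(25, 3000, 1.1);  // ≈ 265491.98 ms, matching the first run
```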
Apparently, installing this errorHandler() middleware too early results in
Feathers errors not being properly caught and reported.
I've verified this by running acceptance tests and seeing what gets reported in
Sentry (lots! 😄)
Fixes JENKINS-53749
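The ordering issue can be shown with a dependency-free model of Express-style middleware dispatch (illustrative only, not the actual Feathers API): when a handler calls `next(err)`, only error-handling middleware registered *later* in the stack can catch it.

```javascript
// Tiny model of Express middleware dispatch (illustrative, not Feathers):
// plain middleware takes (next); error handlers take (err, next).
function demo(installErrorHandlerLast) {
  const log = [];
  const route = (next) => next(new Error('boom'));
  const errorHandler = (err, next) => log.push('caught: ' + err.message);
  const stack = installErrorHandlerLast
    ? [route, errorHandler]   // correct: error handler registered last
    : [errorHandler, route];  // too early: handler never sees the error
  let i = 0;
  function next(err) {
    while (i < stack.length) {
      const layer = stack[i++];
      // Dispatch by arity, like Express: error handlers only run on errors.
      if (err && layer.length === 2) return layer(err, next);
      if (!err && layer.length === 1) return layer(next);
    }
    if (err) log.push('unhandled: ' + err.message);
  }
  next();
  return log;
}

demo(false); // → ['unhandled: boom']
demo(true);  // → ['caught: boom']
```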
I hadn't checked this after the data restructuring in the container, so `make run`
would fail to start up properly because it couldn't find the directory for
EVERGREEN_DATA.
Go from a 5 MB plugin to a 1 MB one. The goal is to still have this code
tested, while hitting download issues less often in CI.
If that still fails often, the next step will probably be to set up an
embedded test HTTP server.
Based on my brief discussion with @batmat last week, I believe that if
we're not publishing the base container (whose publishing I've removed in this
commit), then we needn't spend the compute cycles on it.
I added wget back in the day for the shim that downloads plugins.
Now that the client handles downloads, I don't think it's needed anymore, and
we have `curl`, which can download things too, anyway.
In Jest 23.x there have been improvements, especially around "lost" async
calls, along with better stack traces.
As we currently suffer from occasional flaky tests, which I believe stem
from this, the new `--detectOpenHandles` switch might prove very useful in
helping us find and fix those kinds of leaks.
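For context on the kind of leak `--detectOpenHandles` reports: any timer, socket, or similar handle opened during a test and never closed keeps the Node event loop alive after the test completes. A hypothetical example of such a leak source (not actual project code):

```javascript
// Hypothetical polling helper (not actual project code): if a test starts it
// and never calls the returned stop function, the interval stays registered
// and `jest --detectOpenHandles` will flag it after the run.
function startPolling(fn, intervalMs) {
  const handle = setInterval(fn, intervalMs);
  return function stop() {
    clearInterval(handle); // tests should call this in afterEach/afterAll
  };
}
```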
To upgrade here I did the following:
* remove `package-lock.json`
* run `./../tools/npm install --save jest@23.6.0` in both the client and
the services directories.