The benchmark script can now be set to retry upon failure, like the E2E
tests. The default is zero retries, and CI has been set to 2 retries,
again matching the E2E tests.
The `retry` module had to be adjusted to throw an error in the case of
failure. Previously it just set the exit code, but that only worked
because it was the last thing called before the process ended. That is
no longer the case.
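A minimal sketch of the change, assuming a helper along these lines (the
name `retry` comes from the module, but the exact signature here is an
assumption):

```javascript
// Sketch only: retry an async task, throwing on exhaustion instead of
// setting process.exitCode, so code after the call still runs.
async function retry (retries, task) {
  let attempts = 0;
  while (attempts <= retries) {
    try {
      await task();
      return;
    } catch (error) {
      attempts += 1;
      if (attempts > retries) {
        // Previously the module set the exit code here; throwing lets the
        // caller decide how the failure should be handled.
        throw error;
      }
    }
  }
}
```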
Our benchmark script now uses `yargs`. Functionally it should be nearly
the same as before, but with more documentation and validation. The one
functional difference is that the `--pages` flag now takes
space-separated arguments rather than comma-separated ones.
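A rough sketch of the kind of setup involved (the option list and
defaults here are illustrative, not the script's exact options):

```javascript
const { argv } = require('yargs')
  .usage('Usage: $0 [options]')
  .option('pages', {
    array: true, // space-separated values, e.g. `--pages home notification`
    default: ['home'],
    description: 'The pages to benchmark',
    type: 'string',
  })
  .option('samples', {
    default: 10,
    description: 'The number of samples to take per page',
    type: 'number',
  });
```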
Previously the benchmark script would throw an error if asked to take
just 1 sample. Now it works, though the stats returned are of
dubious use.
The problem was that the standard deviation and margin of error can't
be calculated from a set of 1. The script now returns zero for both of
those values in the single-sample case, which is what it would return
for two identical samples.
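A minimal sketch of that guard, assuming a sample-standard-deviation
helper along these lines (names are illustrative):

```javascript
function standardDeviation (samples) {
  const n = samples.length;
  if (n < 2) {
    // A set of one has no spread; report zero, just as two identical
    // samples would.
    return 0;
  }
  const mean = samples.reduce((sum, value) => sum + value, 0) / n;
  const variance =
    samples.reduce((sum, value) => sum + (value - mean) ** 2, 0) / (n - 1);
  return Math.sqrt(variance);
}
```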
The e2e test driver used to perform the initial navigation
automatically within the `buildWebDriver` function, so that step
wouldn't need to be repeated at the beginning of each test. However,
this prevented the test from doing any setup before the first
navigation.
The navigation has now been moved into each individual test. It should
be functionally equivalent, except now it's possible to control exactly
when the first navigation occurs.
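The new shape of a test looks roughly like this (the driver method
names here are assumptions, not the exact API):

```javascript
it('does something that needs setup before the first page load', async function () {
  const { driver } = await buildWebDriver();
  // ...setup that must happen before the extension page is first loaded...
  await driver.navigate();
  // ...the rest of the test...
});
```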
A 1-second delay was also removed, as it didn't seem to be necessary
when testing this change. It was initially added as an attempted fix
for an intermittent failure, but it did not fix that failure.
* Fix require-unicode-regexp issues
See [`require-unicode-regexp`](https://eslint.org/docs/rules/require-unicode-regexp) for more information.
This change enables `require-unicode-regexp` and fixes the issues raised by the rule.
* Remove case-insensitive flag from regexps
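An illustrative before/after covering both changes (the pattern shown
is hypothetical, not one of the actual fixes):

```javascript
// Before: flagged by `require-unicode-regexp`, and relies on the `i` flag.
const isHex = (value) => /^0x[0-9a-f]+$/i.test(value);

// After: the `u` flag enables full Unicode handling and stricter escape
// validation, and the case-insensitive flag is replaced by an explicit
// character class.
const isHexFixed = (value) => /^0x[0-9a-fA-F]+$/u.test(value);
```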
The fullscreen UI now shows roughly the same design as the popup UI.
A few additional changes depicted in the new fullscreen designs will
be implemented in subsequent PRs (e.g. the inline buttons on assets).
This was done now to make asset pages easier to implement. Implementing
asset pages solely for the popup UI would have been complicated by the
fact that we use viewport size to switch between the two layouts, so we
would have had to re-route upon resizing the window.
* Add benchmark to CI
The page load benchmark for Chrome is now run during CI, and the
results are collected and summarized in the `metamaskbot` comment.
Closes #6881
* Double default number of samples
The default number of samples was changed from 10 to 20. The results
from 10 samples would show statistically significant changes in page
load times between builds, so they weren't a sufficiently useful
metric.
A margin of error metric has been added, which is calculated from a 95%
confidence interval. This confidence interval is calculated using
Student's t-distribution, which is generally preferred for smaller
sample sizes (< ~30) of populations following a normal distribution.
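A sketch of that calculation (not the script's exact code), reusing the
`standardDeviation` helper sketched earlier and a small table of
two-tailed 95% critical values of Student's t-distribution:

```javascript
// Two-tailed 95% critical values for a few degrees of freedom.
const T_TABLE_95 = { 4: 2.776, 9: 2.262, 19: 2.093, 29: 2.045 };

function marginOfError (samples) {
  const n = samples.length;
  if (n < 2) {
    return 0;
  }
  // Fall back to the normal approximation for degrees of freedom not listed.
  const t = T_TABLE_95[n - 1] || 1.96;
  return t * (standardDeviation(samples) / Math.sqrt(n));
}
```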
The script `benchmark.js` will collect page load metrics from the
extension, and print them to a file or the console. A method for
collecting metrics was added to the web driver to help with this.
This script will calculate the min, max, average, and standard
deviation for four metrics: 'firstPaint', 'domContentLoaded', 'load',
and 'domInteractive'. The variation between samples is sometimes high;
results differed noticeably between runs when only 3 samples were
taken. However,
all tests I've done locally with 5 samples have produced results within
one standard deviation of each other. The default number of samples has
been set to 10, which should be more than enough to produce consistent
results.
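A rough sketch of collecting one sample through the driver (the exact
helper added to the web driver differs; method names here are
assumptions):

```javascript
async function collectMetrics (driver) {
  // Read navigation and paint timings from the page under test.
  return await driver.executeScript(() => {
    const { timing } = window.performance;
    const [firstPaint] = window.performance.getEntriesByType('paint');
    return {
      firstPaint: firstPaint ? firstPaint.startTime : null,
      domContentLoaded: timing.domContentLoadedEventEnd - timing.navigationStart,
      load: timing.loadEventEnd - timing.navigationStart,
      domInteractive: timing.domInteractive - timing.navigationStart,
    };
  });
}
```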
The benchmark can be run with the npm script `benchmark:chrome` or
`benchmark:firefox`, e.g. `yarn benchmark:chrome`.