2020-07-14 UI Testing Team Meeting Notes

Date

Attendees

Goals

  • Review objectives, high-level UI testing approach, and tool selection criteria.

Discussion items

Time | Item | Who | Notes

10 min | UI Testing Team Objectives for 2020
  • UI testing approach for FOLIO for the next 2 years
    • Haven't met since October 2018; the decisions made then need revisiting because the ecosystem has evolved. Agree on, document, and communicate our decisions to dev teams to set their expectations about our guidelines.
  • New UI code changes acceptance criteria
    • What will the acceptance criteria for new code be? The current "80% coverage" guideline was challenged by the Tech Council. (A sketch of how such a gate can be enforced appears after this list.)
  • Acceptance criteria for including a UI module into FOLIO release
    • What will quality gates be for community code to be included in a community-supported official quarterly release? 
    • Aiming for a binary decision on whether to include or not
  • UI testing tools: 
    • Selection (BigTest, Jest, RTL, etc)
      • set a guideline that all modules are obligated to respect
      • teams can't just choose their own framework and expect it to be part of the project. A spike is acceptable, of course, but as a spike, not as a final decision. All tools in use should be validated by this team. This is not a firm "No" to other options, but a deliberate approach to adding new tools to the project.
    • FOLIO-specific documentation
    • Adoption and training  
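
For context on what an enforceable coverage gate could look like, here is a minimal Jest configuration sketch. The 80% figure echoes the guideline under discussion, not a settled policy, and the source glob is a placeholder:

```js
// jest.config.js — a minimal sketch; the 80% figure mirrors the guideline
// under discussion, not an agreed FOLIO policy, and the glob is a placeholder.
module.exports = {
  collectCoverageFrom: ['src/**/*.js'],
  coverageThreshold: {
    // Jest fails the run if any of these global thresholds is not met
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};
```

Running `jest --coverage` with such a config would turn the guideline into a hard per-PR gate.
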
30 min | UI testing approach at a high level
  • Discuss current state of UI tests
    • we want e2e tests that provide additional quality gates, not requirements for builds
    • the current approach is haphazard; we don't really have a test pyramid because we don't formally maintain the Nightmare tests
    • we create many tests only to turn PRs green, not for any other value
    • a scattershot mix of NightmareJS, BigTest, RTL, and Cypress is in use
      • this creates problems if devs move across modules
    • we don't have manual testers, and they would be expensive even if we could get them
  • Proposed changes
    • adopt a honeycomb approach (many integration tests, fewer unit and e2e tests)
    • define some quality gate of unit/integration test coverage, 70%? 80%?
    • run tests per-commit (per PR) (this is already in place)
    • e2e tests: involve POs in compiling the scenarios to cover
      • run these relatively frequently: after merge to master? once daily?
      • do not couple e2e test output to build environments
      • how to better report (or communicate) the output of these tests, e.g. reportportal.io
  • With local dev envs, we will be able to run integration and e2e tests prior to PR merges, and then verify on reference envs as well
    • we can use a Jenkins job internal to the Rancher env (i.e., separate from the community Jenkins jobs) for this
  • FYI, there are specs for many e2e tests that are run manually as part of BugFest quarterly releases
  • "code coverage" is a bit of a false metric: it doesn't prove that things perform correctly, only that the code was run during part of test execution. 
  • e2e tests have very high value in terms of overall functionality, but should result in a widely-accessible report, not a blocked build
  • per-team Rancher envs will give POs the ability to preview work before a PR is merged.
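
To make the "coverage is a false metric" point concrete, a hedged Jest + react-testing-library sketch follows; the component name is hypothetical. The test executes the entire render path, so coverage rises, yet it asserts nothing about behavior:

```js
import React from 'react';
import { render } from '@testing-library/react';
import CheckinForm from './CheckinForm'; // hypothetical component

// This passes and bumps line/branch coverage for everything the render
// touches, but proves nothing about correct behavior: there are no assertions.
it('renders without crashing', () => {
  render(<CheckinForm />);
});
```

A suite full of tests like this can clear an 80% gate while verifying almost nothing, which is why widely reported e2e results are a valuable complement.
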

10 min | Introduction of tool selection criteria

Take emotion out of the process by agreeing on selection criteria that will be applied to every proposed solution (tool group). Folks have strong feelings and strong opinions, but this needs to be done impartially.

Each proposed tool should go through a spike and be presented to the UI Testing team for review, with ratings for each selection criterion.

  • Speed: must run FAST
  • Reliability: must not make issues further down in the suite opaque
  • Relevance: 
  • Mocking facility (Sharing mocks for core modules)
    • at present, every module has to build its own facility for this → lots of redundancy for mocks of core modules. Would be very, very nice to have, but may be hard to achieve. (One possible approach is sketched after this list.)
  • Integration vs unit vs e2e tests (can the same tool be used for all of them?)
    • e.g., can the same tool run against both a real backend and mocks?
    • if yes, this is a huge win: reduces the amount of tooling folks need to know
  • Cost to migrate/rebuild existing tests
    • what is the cost of transitioning from one tool to another if we go this route? 
    • Frontside is thinking about tools for (semi-)automated BigTest v1 to v2 conversion
  • Multi browser support
    • not necessary now, but likely required in the future
    • implicit in this statement is that some amount of real-browser testing is necessary for some tests (NB: neither Nightmare nor Jest runs in a real browser)
    • maybe unit tests can/should run headless (Electron, jsdom, whatever), but functional and e2e tests need a real browser. What belongs where? What kinds of testing do we want to do?
  • Anything else?
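
On the mocking-facility criterion, one possible direction is Jest manual mocks in a root-level `__mocks__` directory, which Jest picks up automatically for node-module packages; a shared package of such mocks could remove the per-module redundancy noted above. Package and export names in this sketch are illustrative, not the real stripes-core surface:

```js
// __mocks__/@folio/stripes-core.js — a hypothetical shared manual mock.
// Jest substitutes this file whenever a test imports the package.
module.exports = {
  // a minimal stub for a hook-like API; names here are illustrative only
  useOkapiKy: jest.fn(() => ({
    get: jest.fn(),  // stubbed HTTP verbs; tests can inspect or override them
    post: jest.fn(),
  })),
};
```

Today every UI module writes something like this on its own; publishing one mocks package for the core modules is the "sharing" idea under discussion.
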
10 min | Discuss spikes and assignments for next meeting | Anton Emelianov
  • what are the spikes we can write to start thinking about these tools / our testing env? (A possible shape for recording spike results is sketched below.)
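
As a strawman for recording spike results: ratings against the criteria above could be captured in a common shape so proposals are directly comparable. Field names and the 1–5 scale are hypothetical:

```js
// A hypothetical record the spike owner fills in for the team's review.
const spikeReview = {
  tool: 'Jest + react-testing-library', // the tool group under evaluation
  ratings: {                            // 1 (poor) to 5 (excellent)
    speed: 0,
    reliability: 0,
    relevance: 0,
    mockingFacility: 0,
    unitIntegrationE2e: 0,  // same tool usable across test levels?
    migrationCost: 0,       // cost to rebuild existing tests
    multiBrowser: 0,        // real-browser coverage, likely needed later
  },
  notes: '',
};

module.exports = spikeReview;
```
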

Action items

  •