SPIKE [FAT-1635] mod-inn-reach: Using Inn-Reach sandbox environment and (or) mock server in integration tests


Preface

Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. Integration testing is conducted to evaluate the compliance of a system or component with specified functional requirements. (https://en.wikipedia.org/wiki/Integration_testing)


INTEGRATION TESTING is defined as a type of testing where software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers. The purpose of this level of testing is to expose defects in the interaction between these software modules when they are integrated (https://www.guru99.com/integration-testing.html)

Possible solutions:

  1. Have a dedicated Inn-Reach environment for integration testing

    (warning) The option is not feasible at the moment. III has refused to provide a separate environment or additional local server configuration 

    1. the environment should be independent of the sandbox environment, the one currently in use on Rancher (volaris and volaris-2nd)
    2. alternatively, configure an additional pair of local servers on the sandbox
  2. Share existing Inn-Reach environment between Rancher and integration tests
  3. Use mock server to completely substitute the real Inn-Reach environment
  4. Use mock server to partially replace the Inn-Reach server –

    (minus) this looks like an unrealistic option because mocked and real scenarios would have to be completely independent, which seems impossible – kept here just for historical reference
    1. the primary use case is to mock simple interactions with D2IR (like settings), leaving more complicated scenarios (like contributions) to the real D2IR system
    2. possible only if the existing Inn-Reach environment can be reused for integration testing

Shareable Inn-Reach environment

Pros

  1. it's already present; there is no need to set anything up.
  2. real system with real interaction and behavior.
    1. up-to-date API definition – integration issues are recognized immediately

Cons

  1. more attention should be paid to setting up configuration data for a test as well as test flow data. the data shouldn't affect existing configuration or transactions on the server, which brings an additional level of complexity into the tests.
    1. carefully clean up all types of data added by tests. this might be tricky in some cases (for instance, when removal via API calls is not available) – see the clean-up sketch after this list
  2. karate tests have to be executed at a specific time (to be defined) so as not to overlap with regular testing activities on Rancher envs
    1. this might require additional configuration/customization of the karate job on Jenkins
  3. could be unavailable due to maintenance activities or issues on the Inn-Reach side, although this should be a rare case
    1. the outage might last from several hours up to several days
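
A minimal sketch of the clean-up concern in a karate test, assuming a hypothetical delete-contributed-bib.feature helper and an afterScenario hook; the contribution path, variable names and auth details are illustrative assumptions, not confirmed D2IR endpoints (centralServerUrl, accessToken, testBibId, centralCode and contributionPayload would come from karate-config.js or the Jenkins job):

  Feature: clean-up of data created on the shared Inn-Reach sandbox

  Background:
    * url centralServerUrl
    * configure headers = { Authorization: '#("Bearer " + accessToken)' }
    # register a hook so that clean-up runs even if the scenario fails
    * configure afterScenario =
      """
      function() {
        if (karate.get('createdBibId')) {
          karate.call('classpath:innreach/delete-contributed-bib.feature', { bibId: karate.get('createdBibId') });
        }
      }
      """

  Scenario: contribute a bib and rely on the hook to remove it afterwards
    Given path 'contribution', 'bib', testBibId, centralCode
    And request contributionPayload
    When method post
    Then status 200
    * def createdBibId = testBibId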

Investigation points / open questions

  1. identify test execution time frame
    1. talk to the PO and the team to negotiate the time for test execution so that it doesn't affect PO acceptance testing and development (question)
  2. understand modifications to Jenkins jobs running the tests
    1. collaborate with devops to figure out what should be done to execute the tests during a custom-defined period of time – [task]
  3. creating a central server with the sandbox's secret keys in karate tests
    1. investigate ways of storing and accessing the keys from the secret key store (AWS / Vault / other). should be discussed with devops – [task] (a minimal sketch of consuming such keys follows this list)
  4. collect statistics about mod-inn-reach endpoints that are posting data into the Inn-Reach system – [task]:
    1. what the endpoint is –
      1. name plus a related user story (if present)
    2. what information is posted into Inn-Reach system
    3. how this information can be removed from the Inn-Reach system –
      1. this knowledge is very important for clean-up procedures (when a test is finished). if there is no easy way to remove the data, it is very likely that the endpoint cannot be covered with karate tests.
      2. the list of endpoints that cannot be tested should be documented or somehow highlighted in the statistics
  5. try to implement different business scenarios (sort of PoC)
    • configuration
    • circulation action
    • circulation event processing
    • contribution
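
As a starting point for the secret-keys question above, a sketch of how the sandbox keys could be consumed by a karate test, assuming the Jenkins job injects them (e.g. from AWS Secrets Manager or Vault) as JVM system properties or environment variables; the property names, auth path and form fields are assumptions to be checked against the D2IR documentation:

  Feature: authentication against the sandbox central server

  Background:
    # keys are never committed to the test repository; they are expected to be
    # injected by the Jenkins job as system properties or environment variables
    * def System = Java.type('java.lang.System')
    * def clientKey = karate.properties['innreach.client.key'] || System.getenv('INNREACH_CLIENT_KEY')
    * def clientSecret = karate.properties['innreach.client.secret'] || System.getenv('INNREACH_CLIENT_SECRET')

  Scenario: obtain an access token from the central server
    Given url centralServerUrl
    And path 'auth', 'token'
    And form field grant_type = 'client_credentials'
    And form field client_id = clientKey
    And form field client_secret = clientSecret
    When method post
    Then status 200
    * def accessToken = response.access_token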

Mock server

Pros

  1. managed on the Folio side – total control over deployment and availability
  2. expected to be faster than a real system
    1. test execution takes less time due to the preset nature of mocks, although this shouldn't be considered a primary advantage.
  3. there is no need to remove data created during test execution
    1. depending on the test, it might be enough to simply restart the server or to do nothing at all
  4. no specific requirements related to test execution time frame, since it's going to be an independent environment

Cons

  1. test development time is higher due to the necessity of defining mocked responses in addition to writing test code
  2. mock complexity grows for combined flows (circulation/contribution) that include sequential interaction with other Folio modules and Inn-Reach – see the sketch after this list
    1. returned mock responses could depend on business conditions raised by the flow
    2. mock data may need to change dynamically depending on the incoming request(s) and the current step in the flow
  3. potential inconsistency in behavior and returned data between the real system and the mock server
    1. the API of the real system could change, but such changes cannot be propagated to the mock server automatically
    2. if the API changes, the mock(s) have to be adjusted manually
  4. additional resources required from maintenance and deployment perspectives
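
To make the dynamic/conditional content concern more concrete, here is a rough sketch of how it could be handled with a karate-netty style mock (one of the candidates below), where Background variables act as server state shared between requests; the contribution path and payloads are assumptions, not taken from the D2IR specification:

  Feature: D2IR mock with simple state

  Background:
    # in a karate mock the Background runs once and its variables act as server state
    * def contributedBibs = {}

  Scenario: pathMatches('/innreach/v2/contribution/bib/{bibId}/{centralCode}') && methodIs('post')
    # remember the contributed record so later steps of the flow can see it
    * contributedBibs[pathParams.bibId] = request
    * def response = { status: 'ok', reason: 'success', errors: [] }
    * def responseStatus = 200

  Scenario: pathMatches('/innreach/v2/contribution/bib/{bibId}/{centralCode}') && methodIs('get')
    # conditional content: the answer depends on what earlier requests created
    * def found = contributedBibs[pathParams.bibId]
    * def responseStatus = found ? 200 : 404
    * def response = found ? found : { status: 'error', reason: 'record not found' }

  Scenario:
    # catch-all for anything not explicitly mocked
    * def responseStatus = 501
    * def response = { status: 'error', reason: 'not mocked yet' }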

Investigation points / open questions

  1. study existing solutions for mocking (mock servers) and choose the appropriate one – FAT-1652
    1. some candidates are:
      1. wiremock (standalone) - https://wiremock.org/docs/
      2. prism - https://meta.stoplight.io/docs/prism/ZG9jOjYx-overview
      3. karate mocks - https://github.com/karatelabs/karate/tree/master/karate-netty
      4. TBD
    2. capabilities of interest
      1. deployment options (docker, command line etc)
      2. automatic mocking from provided API spec
      3. support for dynamic/conditional content
      4. TBD
  2. find out if the Inn-Reach API can be provided as a standard API specification (Swagger/OpenAPI, RAML, etc.) (question)
    1. right now we've only been given a PDF file describing the API
  3. understand modifications to Jenkins jobs running the tests
    1. collaborate with devops to figure out what should be done to set up and deploy the mock server – FAT-1653
  4. try to implement different business scenarios (sort of PoC) – a minimal configuration scenario is sketched after this list
    • configuration
    • circulation action
    • circulation event processing
    • contribution
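
For the PoC item above, a minimal sketch of the configuration scenario, runnable against either the mock server or the sandbox depending on what d2irBaseUrl points to; the /inn-reach/central-servers path, payload fields and expected status codes are assumptions to be verified against the mod-inn-reach API (folioBaseUrl, tenant, okapiToken, clientKey and clientSecret are assumed to come from karate-config.js):

  Feature: PoC – central server configuration scenario

  Background:
    * url folioBaseUrl
    * configure headers = { 'x-okapi-tenant': '#(tenant)', 'x-okapi-token': '#(okapiToken)' }

  Scenario: create and remove a central server configuration pointing at the mock (or sandbox)
    * def config =
      """
      {
        name: 'd2ir-test-server',
        centralServerCode: 'd2ir',
        centralServerAddress: '#(d2irBaseUrl)',
        centralServerKey: '#(clientKey)',
        centralServerSecret: '#(clientSecret)'
      }
      """
    Given path 'inn-reach', 'central-servers'
    And request config
    When method post
    Then status 201
    * def centralServerId = response.id

    # clean up so a shared environment is not polluted
    Given path 'inn-reach', 'central-servers', centralServerId
    When method delete
    Then status 204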