...
Agenda and Minutes
2023-11-20
Attendees: Maccabee Levine, Tod Olson, Craig McNally, Jenn Colt
Agenda:
- Incorporate OST process changes into criteria.
- Craig McNally summarizing a discussion at Wed 11/8 TC meeting: "Relax restrictions about OST list as it pertains to first-party frameworks / technologies, because the deadline for accepting new modules into the release happens before we even have feature freeze on those versions."
- *** Tag each PR mention to note whether we have to adjust it for this.
- Continue working through Matt Weaver's feedback.
2023-11-06
Attendees: Maccabee Levine, Jenn Colt, Craig McNally
Agenda:
- Incorporate into the PR the TC's feedback from the last week on the suggestions listed in recent TCRs under "TCR Process Improvements". I posted various threads in #tech-council, and I believe Jenn reached out about some other things. There was also discussion of things already changed in the PR.
mod-fqm-manager:
- The criterion: "Module descriptor MUST include interface requirements for all consumed APIs" could be improved to address implicit module-to-module dependencies such as found in this module.
- Added this to the PR.
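- For context, a FOLIO module descriptor declares each consumed interface under requires, which is what this criterion checks; an implicit dependency (e.g., reading another module's tables directly, the 'shared database' issue noted in the 2023-10-30 notes below) never shows up there at all. A minimal sketch of the relevant part of a descriptor (interface id and versions illustrative, not taken from the actual module):

```json
{
  "id": "mod-fqm-manager-1.0.0",
  "requires": [
    { "id": "users", "version": "16.0" }
  ]
}
```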
mod-lists:
- Consider adjusting the SonarQube rule about the number of levels of inheritance allowed (currently 5)
- Added this to the PR.
- Consider adding criteria about the naming of interfaces, referencing http://ssbp-devdoc-prod.s3-website.us-east-1.amazonaws.com/guidelines/naming-conventions/#interfaces. The guidance linked does read more like a suggestion than a hard guideline. Should we also consider rewording so it's more of a requirement than a suggestion?
- Added this to the PR.
edge-courses:
- The module evaluation criteria should be modified to address edge modules explicitly.
- Probably looking for something simpler for the edge modules.
- Matt Weaver: Edge modules are a little different (they tend to be very simple, have no storage, deal with permissions differently, have different requirements for module descriptors, etc.), so it might be worth handling them a little differently from other backend modules
- Flag whoever submitted edge-courses (Radhakrishnan Gopalakrishnan (Vijay)) and edge-fqm (Matt Weaver). What should we be checking for on those modules?
- Backend shared libraries? If something is just not applicable, can we say that, so evaluators are consistent about which criteria they ignore for shared libraries? Zero experience, none submitted. Post in TC.
- FE shared libraries? Post in TC.
ui-lists:
- We should have recommended tools for evaluating license compliance and not ask evaluators to assess license compliance
- Jenn: As far as licensing goes, this is one of those places where I do think we should ask CC to be active. Licensing issues are a business risk/threat, not a technical one, imo, and most relevant for attracting new contributors and hosting providers. Jenn Colt will review the tools and go from there. Maybe a recommendation to give; may run it by developers and CC.
- Maybe use a "software bill of materials"? GitHub generates one automatically for BE modules; NPM has something built-in as well. See the sketch after this list.
- Still need an opinion from CC, whether or not the tools we use give one.
- A related issue is when dependencies & licenses change after a module exists.
- Jenn Colt will formulate a question for CC.
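- A sketch of pulling the SBOM that GitHub generates from a repository's dependency graph, via its REST API (the endpoint path and response fields are from memory of GitHub's docs, so treat them as assumptions; the repo is just an example):

```js
// Sketch: fetch the SPDX SBOM GitHub generates for a repository's
// dependency graph (Node 18+, which ships a global fetch).
async function fetchSbomLicenses(owner, repo) {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/dependency-graph/sbom`,
    { headers: { Accept: 'application/vnd.github+json' } }
  );
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const { sbom } = await res.json();
  // Each SPDX package entry carries license information we could report on.
  return sbom.packages.map((p) => `${p.name}: ${p.licenseConcluded ?? 'NOASSERTION'}`);
}

fetchSbomLicenses('folio-org', 'mod-lists').then((lines) => console.log(lines.join('\n')));
```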
- This "Use the latest release of Stripes at the time of evaluation" criterion is problematic; we want to evaluate modules against the collection of versions they are _going_ to be a part of if they are accepted rather than the versions that were part of a previous release. On other hand, just as we ask for a specific commit from submitters in order to avoid the moving target of the main branch, it would be unfair to expect submitters to reference our moving targets. The officially approved technologies page may provide some guidance here, but then we also have to make sure it is accurate and up to date.
- Start working through Matt Weaver's feedback.
- Clarification around deadlines
- We misunderstood the 3-week window for TCRs as starting at the submission date, rather than with the assignment of an evaluator. As a result, we rushed to get the TCRs submitted before the wrong deadline and accidentally put an extra burden on the TC. I'm not sure if we really could have submitted much earlier, but we might have prioritized work differently to try to submit earlier if we hadn't misunderstood the deadlines.
- https://github.com/folio-org/tech-council/blob/master/NEW_MODULE_TECH_EVAL.MD - "A maximum duration (3 weeks) from the submission date for the initial review."
- This should be reworded to remove any ambiguity
- Added a straw man to the PR.
- SNAPSHOTs are tricky, as they inherently create a "moving target" that we try to avoid by specifying a specific commit to review, so they should obviously be discouraged, but sometimes they are necessary. Since this came up as a very real problem in the edge-fqm TCR, it may be worth documenting a process or stance or something.
- Added to PR.
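- For context on why SNAPSHOTs are a "moving target": a Maven SNAPSHOT version re-resolves to the newest published build each time it is fetched, while a pinned release is immutable, so evaluators and submitters see the same code. A sketch (coordinates hypothetical):

```xml
<!-- Moving target: re-resolves to the latest snapshot build on each fetch -->
<dependency>
  <groupId>org.folio</groupId>
  <artifactId>some-shared-lib</artifactId>
  <version>2.1.0-SNAPSHOT</version>
</dependency>

<!-- Pinned release: the commit under review always builds against the same code -->
<dependency>
  <groupId>org.folio</groupId>
  <artifactId>some-shared-lib</artifactId>
  <version>2.0.0</version>
</dependency>
```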
2023-10-30
Attendees: Maccabee Levine, Jenn Colt, Tod Olson
Subgroup scope / goals
- Consider the items listed in recent TCRs under "TCR Process Improvements".
- Consider other feedback provided by recent TCR submitters.
- Consider process issues raised at TC meetings (hopefully in the minutes) during our recent TCR discussions. This would include "meta-process" issues such as communication around the TCR process, timing issues, interaction with the RFC process, etc.
mod-fqm-manager:
- The criterion: "Module descriptor MUST include interface requirements for all consumed APIs" could be improved to address implicit module-to-module dependencies such as found in this module.
- Ask Jeremy Huff / Matt Weaver for clarification. Why did he fail this criterion in mod-fqm-manager? What change might be made to this criterion? Can this be separated from the 'shared database' issue, which is a separate criterion? Is there something else we're trying to capture as a dependency that is not so obvious? What is a reasonable way to document that? It might not be the module descriptor.
mod-lists:
- Consider adding Lombok to the Officially Supported Technologies list since it is used extensively throughout the project, despite NOT having an Apache 2.0 license. This could help evaluators in the future.
- Already addressed!
- Consider adjusting the SonarQube rule about the number of levels of inheritance allowed (currently 5)
- Craig McNally and Tod Olson discussed this previously. Maybe just get an alert if we exceed that number, with an explanation of why it is OK to ignore. Get Craig McNally's opinion first, then get feedback from others like Taras Spashchenko and Olamide Kolawole.
- Consider adding MinIO to the Officially Supported Technologies list, and approving the decision here: https://wiki.folio.org/pages/viewpage.action?pageId=96419874
- Already addressed!
- It would be helpful to have the module name, and possibly other metadata, listed at the top of this form
- Added to PR.
- Consider adding criteria about the naming of interfaces, referencing http://ssbp-devdoc-prod.s3-website.us-east-1.amazonaws.com/guidelines/naming-conventions/#interfaces. The guidance linked does read more like a suggestion than a hard guideline. Should we also consider rewording so it's more of a requirement than a suggestion?
- Tod Olson / Craig McNally: Looks like you did this TCR evaluation. What specific criteria would you add? Zak_Burke and Maccabee Levine have talked about the naming of modules re: future-proofing and understanding what they mean, e.g. mod-entities-links and such, but the specific guidelines at that wiki link are more about the syntax of interface names, not the substance of them.
edge-courses:
- The module evaluation criteria should be modified to address edge modules explicitly.
- Probably looking for something simpler for the edge modules. Maccabee Levine will look through TC notes for specific opinions on this.
ui-lists:
- TypeScript is a superset of JavaScript. TC should make an explicit statement about whether it is permitted in FOLIO modules.
- Already addressed!
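- For context on the superset point: roughly, any valid JavaScript is already valid TypeScript, and TS only adds optional annotations that compile away. A trivial sketch:

```ts
// Valid JavaScript, and therefore also valid TypeScript:
const modules = ['ui-lists', 'ui-service-interactions'];

// TypeScript additionally allows optional type annotations:
const count: number = modules.length;
console.log(count);
```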
- We should have recommended tools for evaluating license compliance and not ask evaluators to assess license compliance
- also mentioned in ui-service-interactions:
- I am increasingly strongly uncomfortable evaluating license compatibility. I suggest we change the line "Third party dependencies use an Apache 2.0 compatible license" to "Includes a report of the licenses used by third-party dependencies", and we can delegate evaluation of that list to a person/body with appropriate credentials for this kind of thing. IOW, IANAL and I really really don't want to be responsible for making definitive statements about license compatibility. Example tools in NPM-land:
- npx apache2-license-checker
- license-checker
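- A minimal sketch of what delegating to a tool could look like, using license-checker's programmatic API (option and field names are per that package's README as I recall it, so treat them as assumptions):

```js
// Sketch: print a third-party license report for a UI module
// (assumes license-checker is installed as a dev dependency).
const checker = require('license-checker');

checker.init({ start: '.', production: true }, (err, packages) => {
  if (err) throw err;
  // `packages` maps "name@version" to metadata, including a `licenses` field.
  for (const [pkg, info] of Object.entries(packages)) {
    console.log(`${pkg}: ${info.licenses}`);
  }
});
```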
- Jenn: As far as licensing goes, this is one of those places where I do think we should ask CC to be active. Licensing issues are a business risk/threat, not a technical one, imo, and most relevant for attracting new contributors and hosting providers. Jenn Colt will review the tools and go from there. Maybe a recommendation to give; may run it by developers and CC.
- This "Use the latest release of Stripes at the time of evaluation" criterion is problematic; we want to evaluate modules against the collection of versions they are _going_ to be a part of if they are accepted rather than the versions that were part of a previous release. On other hand, just as we ask for a specific commit from submitters in order to avoid the moving target of the main branch, it would be unfair to expect submitters to reference our moving targets. The officially approved technologies page may provide some guidance here, but then we also have to make sure it is accurate and up to date.
- Added to PR.
ui-service-interactions:
- Do we/should we have a Source Of Record for application documentation? Is it OK to link to the wiki or should apps point to https://docs.folio.org/?
- The process for updating docs.folio.org would slow down changes, and future app store modules might be documented elsewhere. It seems odd to dictate where documentation has to live. The downside of linking to random pages is that you might have broken links if the destination changes – but you can always resolve that at the destination, with edits or redirects.
- Added to PR.
Consideration To-do list
Listed in recent TCRs
edge-courses:
- The module evaluation criteria should be modified to address edge modules explicitly.
ui-lists:
- We should have recommended tools for evaluating license compliance and not ask evaluators to assess license compliance
- This "Use the latest release of Stripes at the time of evaluation" criterion is problematic; we want to evaluate modules against the collection of versions they are _going_ to be a part of if they are accepted rather than the versions that were part of a previous release. On other hand, just as we ask for a specific commit from submitters in order to avoid the moving target of the main branch, it would be unfair to expect submitters to reference our moving targets. The officially approved technologies page may provide some guidance here, but then we also have to make sure it is accurate and up to date.
From Matt Weaver
...
- There was some confusion around PC approval and whether or not the PC's approval of the Lists app includes FQM. It'd be nice if this was clarified a bit, if possible. I'm not sure what that would look like...
- If we had application formalization, we'd have a way to link the two, but we are not there yet. It's also unclear whether new functionality would be part of a new application, and how PC would want to review it (or not).
- PC approval of Lists app would never have considered backend issues like FQM.
- There is a gap between what PC looks at and what TC looks at. The exact transition is "mushy". Hopefully the application work will help with that.
- Consensus to let this one go until/unless application formalization makes the question simpler to understand.
- Related: The submitter criteria (member of the PC or a PC-appointed delegate) seems overly strict and unnecessary. If the goal is to ensure PC approval happens first, then we already have that by just saying that the PC has to approve it first.
- History: at the time, PC did not have a process yet, so we asked for PC to be an input into our process. It has never worked out this way, in a dozen+ TCRs. We should change the language to be clearer on the process / interaction with PC. PC has just been leaving a comment on the TCR saying they approved the relevant functionality.
- Addressed in PR.
- Clarification around exceptions and any process for that
- The idea of exceptions has come up a lot of times in the FQM/Lists TCRs, so a formal stance on exceptions should probably be more clearly defined (the only thing we saw prior to submitting the TCRs is that the TC is free to make a decision that doesn't align with the evaluator's results)
- Possible that we should leave exceptions generally possible, to be flexible.
- Big exceptions (fundamental architectural stuff) require a lot of discussion and a TCR is not an ideal place for them. But they can happen. I feel like it's important to make room for these exceptions, but it felt like there was **strong** resistance to any big exceptions at all, which is a bit concerning (I'm of the opinion that we shouldn't need to change the process or criteria to make an exception, but that's me). Further discussion and documentation around this would probably be very useful for everyone and help clarify things quite a bit
- The place for big exceptions is an RFC. And we have no lever for requiring an RFC in the future. People are busy, things change.
- Small ones (sonarqube violations, etc) seem a lot easier. Personally, I feel that the evaluators should be free to make these exceptions, as long as they document their reasoning in their evaluation. The TC is free to accept or reject that reasoning. The existing documentation seems to support this, but it'd be useful if it was more explicit.
- Note: this came up in the mod-lists TCR, where it seemed like the TC rejected the idea that evaluators have any real say, which was a little worrying. I feel like an evaluator should be empowered to say that a module passed its evaluation despite failed criteria, as long as they are transparent about it.
- Related to small exceptions: The module descriptor criterion may be overly strict in some cases. If the MD is invalid, then it can't be released anyway. It's also a trivial issue to fix. So even if the MD is completely missing, there's not really much downside to accepting a module despite not having a valid MD. Maybe be less strict in this criterion or remove it entirely?
- Options: 1) double down on being strict. 2) keep the flexibility and change nothing; TC still has final say, which is simple & flexible, but not transparent. 3) formalize the exceptions process. There is work regardless of the option chosen, some of it stickier than others. We can also stick with option 2 but increase the communication aspect, esp. about the related RFC process.
- ** Group will brainstorm over the next week and come back with an approach to #2 or #3 for discussion next week.
- The discussions between me and the evaluators have been super useful! There were a lot of small things that the evaluators identified that we were able to fix, and we really appreciate that we were allowed to make a few small changes after the evaluation began. I feel like this should be actively encouraged, so some documentation around this process would be great!
- How do we iterate, and how much process do we want for that? Initially, after the eval you got approval or rejection, and you resubmitted if there was a failure of any kind. Things haven't worked out that way; there is resubmission during the eval process, with a new commit hash submitted. There also might have been a negative connotation with 'failure' that upset people; an informal conversation with the submitter has a better impact.
- Addressed in the PR.
...
- Related to exceptions: clarification around how strict the evaluation criteria are and who is empowered to make an exception. E.g., if a criterion is failed, is the evaluator obligated to say that the evaluation failed or can they still recommend TCR acceptance? In effect, it seems like evaluators are empowered to recommend acceptance despite failed criteria, but I'm not sure that this is actually documented anywhere.
- Related: I noticed a recurring theme in our 4 TCR evaluations, where technologies outside the supported tech list were used and the evaluators were okay with that. It happened enough that it might be worthwhile to update the criteria to address this.
- Process around updates that happen mid-evaluation
- The discussions between me and the evaluators have been super useful! There were a lot of small things that the evaluators identified that we were able to fix, and we really appreciate that we were allowed to make a few small changes after the evaluation began. I feel like this should be actively encouraged, so some documentation around this process would be great!
- The rough process we used was basically this, which seemed to work well (note: this wasn't ever really formalized or followed exactly, but it's pretty much what ended up happening in every case):
- The evaluators point something out, either in a comment or some other channel like Slack
- We fix it and let the evaluator know
- If the issue is major enough to meaningfully impact the evaluation:
- If the evaluator is okay with changing the commit being evaluated, we update the TCR ticket with the updated hash in the ticket description. Also, document this change with a comment on the ticket, explaining what is different between the old commit and the new one. The end result is that the evaluation formally includes the changes, as if they were there from the beginning
- Example: fixing dependencies in ui-lists to be compatible with Poppy
- Example: downgrading away from a SNAPSHOT dependency in edge-fqm due to a breaking change
- If the differences between the two commits are too much to change mid-eval, don't change the description, but document the issue and fix in a comment. The evaluator should still take this into account, but the context is fundamentally different, so it shouldn't have as much weight as it would if it were in the commit under review. It's worth mentioning in the evaluation, but whether it turns a failed criterion into a passed one is up to the evaluator.
- Example: adding a test to edge-fqm that increased the test coverage by ~25% after the evaluation was basically done. It would have been unfair to the evaluator to move the goalpost at that time.
- If the issue isn't major enough to meaningfully impact the evaluation, the evaluator is free to handle it however they want. In these cases, it's also reasonable to create a ticket for the issue and treat it like any other bug (i.e., prioritize it and fix it later).
- Clarification around communication - I added some comments to some of the TCR evaluation PRs, addressing what I believe to be inaccuracies or adding relevant information. I have no idea if this is an appropriate place for that or not. I think I saw something somewhere saying to use Jira for public communication with the evaluator like that, but the PR feels like a more appropriate place.
- Related (this might just be because of the tight timeline in our TCRs): it would have been nice to have a little more time to go over the evaluations (maybe even while they are in progress) to have a chance to address any results directly prior to the TC discussion/vote. In some cases, failed criteria may be the result of simple misunderstandings; in those cases, addressing the issues during the TC discussion would largely just be a waste of time (or worse, it could be confusing, since the evaluator could end up disagreeing with their own written evaluation during the meeting). I don't think the lack of a defined opportunity to respond to the eval was ever actually a big problem in the FQM/Lists TCRs, but having one seems like it could benefit the process.
- Building feedback into the process for TCR process improvements is a great idea, but I think it's in the wrong place right now (from the submitter standpoint). We did not include any TCR process feedback in our self-evaluations because we hadn't gone through most of the process yet; feedback from TCR submitters can really only happen at the end of the process, not the beginning. Hence this wall of text now
- It's not clear if/when an already-accepted module needs to go through the TCR process again. E.g., if we decide to change the architecture of FQM to use APIs exclusively, would that require a TCR? It seems like substantial changes like that probably should require review, but I don't know if that's documented anywhere. This also seems relevant for stuff like RTR, where the question was raised in the TC of whether it was even something that needed to be voted on. I actually wanted to ask about this one at WOLFcon in the new module session, but we ran out of time. It seems like a bit of a hole in the module evaluation framework.
...