Univention Bugzilla – Full Text Bug Listing
Summary: | Test the ucs template migration status to py3 | |
---|---|---|---
Product: | UCS Test | Reporter: | Max Pohle <pohle>
Component: | UCR | Assignee: | Max Pohle <pohle>
Status: | CLOSED FIXED | QA Contact: | Johannes Keiser <keiser>
Severity: | normal | |
Priority: | P5 | CC: | best, botner, keiser
Version: | unspecified | |
Target Milestone: | --- | |
Hardware: | Other | |
OS: | Linux | |
See Also: | https://forge.univention.org/bugzilla/show_bug.cgi?id=49060 https://forge.univention.org/bugzilla/show_bug.cgi?id=51107 | |
What kind of report is it?: | Development Internal | What type of bug is this?: | ---
Who will be affected by this bug?: | --- | How will those affected feel about the bug?: | ---
User Pain: | | Enterprise Customer affected?: |
School Customer affected?: | | ISV affected?: |
Waiting Support: | | Flags outvoted (downgraded) after PO Review: |
Ticket number: | | Bug group (optional): |
Max CVSS v3 score: | | |
Description
Max Pohle
2020-03-04 19:08:32 CET
- part of tag 'basic' and 'apptest'
- can be used in branch tests
OK: locating template files
OK: test + output of test -> verified

Just a question: the test currently fails. My understanding is that the test fails as long as we have not migrated all templates to py3. How long until the test is expected to succeed? Do we really want a test that fails regularly? Maybe we should use another Jenkins job so we do not taint the normal errata tests?

It was decided to hard-code a static 'success value' representing the exact number of successfully migrated templates. Hard-coding may not be the prettiest solution, but its advantage lies in traceability. The test is supposed to be modified after each template migration to reflect the new number of migrated templates, and it simultaneously ensures that other changes in the product do not break this compatibility again. That is because some templates use `include` and can thus become incompatible again if Python-2-only code is added to the module. The summary under each test shows how many templates produced the same output with python2 as with python3, and a so-called `limit`, which is the current value of `SUCCESS_MIN` from the script. This is the hard-coded value to be adjusted. The return value of the test comes from `exit $(test $success -ge $SUCCESS_MIN)`, and my impression is that the test does not always fail.

Some notes: The minimum number depends on the installed packages. As we have different test scenarios with different sets of installed software, the minimum number is useless because it differs for every scenario. Since yesterday, every UCR conffile in UCS is migrated and python 3 compatible. Some tests nevertheless fail because they import python 3 modules which are not installed in Jenkins yet.
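The threshold logic described above can be sketched roughly like this (a minimal, self-contained sketch: the value of `SUCCESS_MIN` and the per-template results are illustrative stand-ins, not the real test code — in the real test each result comes from comparing a template's rendered output under python2 and python3):

```shell
#!/bin/sh
# Sketch of the hard-coded threshold check.
# SUCCESS_MIN and "results" are illustrative stand-ins.
SUCCESS_MIN=3

results="OK OK OK FAIL"

success=0
for r in $results; do
    # count templates whose py2 and py3 output matched
    [ "$r" = OK ] && success=$((success + 1))
done

echo "migrated: $success, limit: $SUCCESS_MIN"
test "$success" -ge "$SUCCESS_MIN"   # exit status decides pass/fail
```

After each template migration, `SUCCESS_MIN` has to be bumped by hand so the threshold keeps tracking the real number of migrated templates.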
I suggest adjusting the test case: remove the minimum number, print every traceback which occurs, set every template which fails with an ImportError to SKIP, and fail the test if anything else fails. Please don't set the bug status to WORKSFORME if changes were made ;-). This exists and has been rewritten as `03_ucr/37check-ucr-templates-py3-migration-status.py`.
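The suggested behaviour could look roughly like this (a sketch under the assumption that each template check yields a status string; the statuses below are made up for illustration):

```shell
#!/bin/sh
# Sketch of the suggested evaluation: no minimum number any more.
# A template failing with an ImportError (py3 module not installed yet)
# is reported as SKIP; any other failure fails the whole test.
# The per-template statuses below are illustrative stand-ins.
statuses="OK SKIP:ImportError OK"

fail=0
for s in $statuses; do
    case "$s" in
        OK)      echo "OK" ;;
        SKIP:*)  echo "SKIP (${s#SKIP:})" ;;  # missing py3 module, not a regression
        *)       echo "FAIL: $s"; fail=1 ;;   # real incompatibility; print traceback here
    esac
done

# in the real test, the script would end with: exit "$fail"
echo "overall: $( [ "$fail" -eq 0 ] && echo PASS || echo FAIL )"
```

This way the result no longer depends on which packages happen to be installed in a given scenario, and only genuine py3 incompatibilities turn the test red.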