Metadata-Version: 2.1
Name: testarchiver2
Version: 1.0.0
Summary: Tools for serialising test results to SQL database
Home-page: https://github.com/aaltat/TestArchiver
Author: Tatu Aalto
Author-email: aalto.tatu@gmail.com
License: Apache License 2.0
Description: # TestArchiver
        
        This is a fork of [salabs TestArchiver](https://github.com/salabs/TestArchiver). It adds enhancements
        for submitting changes from a JSON file. See `testarchiver2 --help` for more details.
        
        TestArchiver is a tool for archiving your test results to a SQL database.
        
        [Epimetheus](https://github.com/salabs/Epimetheus) is the tool for browsing the results you archived.
        
        ## Testing framework support
        
        | Framework       | Status                    | Fixture test status | Parser option |
        | --------------- | ------------------------- | ------------------- | ------------- |
        | Robot Framework | [Supported](robot_tests/) | Done                | robot         |
        | Mocha           | [Supported](mocha_tests/) | Done                | mocha-junit   |
        | JUnit           | Experimental              | Missing             | junit         |
        | xUnit           | Experimental              | Missing             | xunit         |
        | MSTest          | Experimental              | Missing             | mstest        |
        | pytest          | [Supported](pytest/)      | Done                | pytest-junit  |
        
        Experimental status here means that a parser exists that can take in e.g. generic JUnit-formatted output, but there is no specific test set, extensive testing, or active development for the parser.
        
        Contributions of output parsers or listeners for different testing frameworks are appreciated. Simply contributing a fixture test set (that can be used to generate output files for developing a specific parser) is extremely helpful for any new framework.
        
        ## Installation
        `sudo -H python3 -m pip install testarchiver2`
        
        ## Supported databases
        
        ### SQLite
        
        [SQLite](https://www.sqlite.org) is the default database for the archiver and is mainly useful for testing and demo purposes. The `sqlite3` driver is part of the Python standard library, so there are no additional dependencies for trying out the archiver.
        
        ### PostgreSQL
        
        [PostgreSQL](https://www.postgresql.org) is the currently supported database for real projects. For example, the [Epimetheus](https://github.com/salabs/Epimetheus) service uses a PostgreSQL database. For accessing PostgreSQL databases the script uses the psycopg2 module: `pip install psycopg2-binary`
        
        ## Basic usage
        
        The output files from different testing frameworks can be parsed into a database using the `testarchiver2` command (the `test_archiver/output_parser.py` script in the source tree).
        
        ```
        testarchiver2 --database test_archive.db output.xml
        ```
        
        Assuming that `output.xml` is an output file generated by Robot Framework (the default parser option), this will create a SQLite database file named `test_archive.db` that contains the results.
        
        For a list of other options, see `testarchiver2 --help`:
        ```
        positional arguments:
          output_files          list of test output files to parse into the test
                                archive
        
        optional arguments:
          -h, --help            show this help message and exit
          --version, -v         show program's version number and exit
          --config CONFIG_FILE  path to JSON config file containing database
                                credentials
          --dbengine DB_ENGINE  Database engine, postgresql or sqlite (default)
          --database DATABASE   database name
          --host HOST           database host name
          --user USER           database user
          --pw PASSWORD, --password PASSWORD
                                database password
          --port PORT           database port (default: 5432)
          --dont-require-ssl    Disable the default behavior to require ssl from the
                                target database.
          --allow-minor-schema-updates
                                Allow TestArchiver to perform MINOR (backwards
                                compatible) schema updates to the test archive
          --allow-major-schema-updates
                                Allow TestArchiver to perform MAJOR (backwards
                                incompatible) schema updates to the test archive
          --no-keywords         Do not archive keyword data
          --no-keyword-stats    Do not archive keyword statistics
          --ignore-logs-below {TRACE,DEBUG,INFO,WARN}
                                Sets a cut off level for archived log messages. By
                                default archives all available log messages.
          --ignore-logs         Do not archive any log messages
          --format {robot,robotframework,xunit,junit,mocha-junit,pytest-junit,mstest}
                                output format (default: robotframework)
          --repository REPOSITORY
                                The repository of the test cases. Used to
                                differentiate between tests with the same name in
                                different projects.
          --team TEAM           Team name for the test series
          --series SERIES       Name of the test series (and optionally build number
                                'SERIES_NAME#BUILD_NUM' or build id
                                'SERIES_NAME#BUILD_ID')
          --metadata NAME:VALUE
                                Adds given metadata to the test run. Expected format:
                                'NAME:VALUE'
          --change-engine-url CHANGE_ENGINE_URL
                                Starts a listener that feeds results to ChangeEngine
          --execution-context EXECUTION_CONTEXT
                                Separates data from different build pipelines for ChangeEngine
                                prioritization. For example, if the same changes or tests are used to
                                verify an app on both Android and iOS, it is good to separate the
                                results from the different build pipelines/platforms. The ChangeEngine
                                prioritization might not give correct results if results from different
                                platforms are mixed together.
          --changes CHANGES     JSON file which contains information on the changed files for each repo.
                                The file should be formatted like this:
                                {
                                    "context": "The execution context; same as --execution-context. A value given on the command line overrides this setting.",
                                    "changes": [{
                                        "name": "string representing the changed item, for example a file path",
                                        "repository": "Repository (optional), for separating between changed items with identical names.",
                                        "item_type": "Type of the item (optional), for filtering subsets when prioritising",
                                        "subtype": "Subtype of the item (optional), for separating items when prioritising"
                                    }]
                                }
        ```
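
        The `--config` option reads database credentials from a plain JSON file. The exact key names are not specified above, so the sketch below assumes they mirror the command-line flag names (`db_engine`, `database`, `host`, `user`, `password`, `port`); check `testarchiver2 --help` and the source for the authoritative names:

        ```python
        import json

        # Hypothetical config file for --config; the key names are assumed to
        # mirror the command-line flags and may differ in the actual tool.
        config = {
            "db_engine": "postgresql",
            "database": "test_archive",
            "host": "db.example.com",   # example host
            "user": "archiver",
            "password": "secret",       # store real credentials securely
            "port": 5432,
        }

        with open("config.json", "w") as f:
            json.dump(config, f, indent=4)
        ```

        The file would then be passed as `testarchiver2 --config config.json output.xml`.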
        
        ## Data model
        
        [Schema and data model](https://github.com/aaltat/TestArchiver/tree/master/test_archiver/schemas) (NOTICE: this points to latest version)
        
        ## Useful metadata
        
        Some metadata is useful to add with the results. Some testing frameworks support adding metadata to your test results, and for those frameworks (e.g. Robot Framework) it is recommended to add that metadata to the tests themselves so the same information is also available in the results. Additional metadata can be added when parsing the results using the `--metadata` option. Metadata given during parsing is linked to the top-level test suite.
        
        `--metadata NAME:VALUE`
        
        ## Test series and teams
        
        In the data model, each test result file is represented as a single test run. These test runs are linked and organized into builds in different result series. Depending on the situation, a series can be e.g. a CI build job or a branch. By default, if no series is specified, the results are linked to a default series with auto-incrementing build numbers. Different test runs (from different testing frameworks or parallel executions) that belong together can be organized into the same build. Different test series are additionally organized by team. The series name and build number/id are separated by `#`.
        
        Some examples using the `--series` and `--team` options of `testarchiver2`:
        
        -   `--series ${JENKINS_JOB_NAME}#${BUILD_NUMBER}`
        -   `--series "UI tests"#<commit hash>`
        -   `--series ${CURRENT_BRANCH}#${BUILD_ID} --team Team-A`
        -   `--series manually_run`
        
        Each build will have a build number in the series. If a build number is specified, that number is used. If the build number/id is omitted, the build number is taken from the previous build in that series and incremented. If the build number/id is not a number, it is treated as a build identifier string. If that id is new to the series, the build number is incremented just as if no build number was specified. If the same build id is found in the same test series, the results are added under that same previously archived build.
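
        The build numbering rules above can be sketched as follows. This is illustrative pseudologic, not the tool's actual code; `previous_builds` is a hypothetical mapping from archived build ids to build numbers in a series:

        ```python
        def resolve_build_number(build_token, previous_builds):
            """Illustrative sketch of the build numbering rules described above.

            previous_builds maps build id strings to build numbers already
            archived for the series. Returns the build number to use.
            """
            next_number = max(previous_builds.values(), default=0) + 1
            if build_token is None:
                return next_number                   # no build given: increment
            if build_token.isdigit():
                return int(build_token)              # explicit build number
            if build_token in previous_builds:
                return previous_builds[build_token]  # known id: reuse that build
            return next_number                       # new id: increment

        builds = {"abc123": 1, "def456": 2}
        resolve_build_number(None, builds)      # 3
        resolve_build_number("7", builds)       # 7
        resolve_build_number("abc123", builds)  # 1
        resolve_build_number("zzz", builds)     # 3
        ```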
        
        If the tests are executed in a CI environment the build number/id is an excellent way to link the archived results to the actual builds.
        
        The series can also be indicated using metadata. Any metadata whose name is prefixed with `series` is interpreted as series information. This is especially useful when using listeners. For example, when using Robot Framework: `--metadata team:A-Team --metadata series:JENKINS_JOB_NAME#BUILD_NUMBER`
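
        How series-prefixed metadata could be interpreted can be sketched like this (illustrative only; the function name and exact matching rule are assumptions, not the tool's implementation):

        ```python
        def series_from_metadata(metadata):
            """Pick out series entries from run metadata (illustrative sketch)."""
            series = []
            for name, value in metadata.items():
                if name.startswith("series"):
                    # Series name and build number/id are separated by '#'
                    series_name, _, build = value.partition("#")
                    series.append((series_name, build or None))
            return series

        series_from_metadata({"team": "A-Team", "series": "Nightly#42"})
        # [('Nightly', '42')]
        ```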
        
        ## Timestamp adjustment
        
        Some test frameworks use local time in their timestamps. For archiving into databases this can be problematic if tests
        are viewed and/or run in different timezones. To address this, two ways to adjust the time back to GMT/UTC are provided.
        
        The first allows the user to apply a fixed adjustment, in seconds, of their choosing. This is useful for cases
        where tests were already run and the place/timezone where they were run is known, for example if you are
        archiving in a different location from where the tests are run. The time value provided as an option is added to the
        timestamp. Care must be taken in places that observe summer time (usually +1 hr).
        
        For example, if tests were run in Finland (GMT+2, plus 1 hour in summer), calculate the total offset in hours,
        convert to seconds, and invert to adjust in the correct direction, i.e. -(2+1)*60*60, so `--time-adjust-secs -10800`
        during summer time and `-7200` otherwise.
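
        The same offset arithmetic can be computed programmatically. A sketch using Python's `zoneinfo` (note that `zoneinfo` requires Python 3.9+, newer than this package's own minimum):

        ```python
        from datetime import datetime
        from zoneinfo import ZoneInfo  # Python 3.9+

        # Offset in effect in Finland at a summer-time test run.
        run_time = datetime(2020, 7, 1, 12, 0, tzinfo=ZoneInfo("Europe/Helsinki"))
        offset_secs = int(run_time.utcoffset().total_seconds())  # 10800 (GMT+3)
        adjust_secs = -offset_secs  # value to pass as --time-adjust-secs
        print(adjust_secs)  # -10800
        ```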
        
        The second provides automated adjustment based on the system timezone and/or daylight savings if it applies. This
        is useful if the tests and archiving are performed in the same place and time.
        It assumes that if multiple computers are used, their timezone and daylight savings settings are identical.
        Care must also be taken that tests are not run just before a daylight savings adjustment and archived just after,
        as times will be out by one hour. This could easily happen if long-running tests cross a daylight savings boundary.
        This can be set using `--time-adjust-with-system-timezone`.
        
        The `ArchiverRobotListener` allows for the second option if its `adjust_with_system_timezone` argument is set to `True`.
        
        To ensure any of the optional adjustments are traceable, two metadata values are added to the suites' test run.
        If `--time-adjust-secs` is set to a value, `time_adjust_secs` with that value is written to the `suite_metadata` table.
        If the `--time-adjust-with-system-timezone` option is included, then the sum of the time-adjust-secs value and the
        system timezone offset is written to the `suite_metadata` table as `time_adjust_secs_total`.
        
        e.g. with the command line
        
        `output_parser.py --time-adjust-secs -3600 --time-adjust-with-system-timezone ...`
        
        the following values would be added to the `suite_metadata` table for a system in GMT+2:
        
         - `time_adjust_secs` with value -3600
         - `time_adjust_secs_total` with value -10800
        
        This example mimics adding daylight savings (1 hr = 3600 secs) onto
        a system offset of 7200 secs (GMT+2), i.e. if the computer being used had its 'daylight savings' setting
        off and you want to manually add it during archiving.
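
        The arithmetic for this example can be written out as a quick sanity check:

        ```python
        time_adjust_secs = -3600           # from --time-adjust-secs (mimics daylight savings)
        system_offset_secs = 2 * 3600      # system timezone is GMT+2

        # --time-adjust-with-system-timezone adds the inverted system offset
        # on top of the fixed adjustment:
        time_adjust_secs_total = time_adjust_secs + (-system_offset_secs)
        print(time_adjust_secs, time_adjust_secs_total)  # -3600 -10800
        ```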
        
        
        # Release notes
        - 1.0.0 (2020-12-18)
          * --execution-context command line parameter.
          * --changes command line parameter to support submitting changes in JSON file format.
        
Keywords: robotframework test report history
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: Operating System :: OS Independent
Classifier: Topic :: Software Development :: Testing
Requires-Python: >=3.5
Description-Content-Type: text/markdown
