Metadata-Version: 2.1
Name: akerbp.mlops
Version: 0.20210127165736
Summary: MLOps framework
Home-page: https://bitbucket.org/akerbp/mlops/
Author: Alfonso M. Canterla
Author-email: alfonso.canterla@soprasteria.com
License: UNKNOWN
Description: ## MLOps Framework
        This is a framework for MLOps. Currently it deploys models as CDF Functions.
        
        ## Getting Started:
        Follow these steps:
        - Install package: `pip install akerbp.mlops`
        - Define `ENV` and `COGNITE_*` environmental variables as described in
          https://akerbp.atlassian.net/wiki/spaces/SIMDev/pages/1181384729/MLOps
        - Become familiar with the model template
        - Copy pipeline file `bitbucket-pipelines.yml` and config file
          `mlops_settings.py` to your repo
        - Fill in user settings. 
        - Model artifacts should not be committed to the repo. 
        - Follow the file and folder structure (described later)
        - It's possible to have several models per repo: they need to be registered in
          the settings, and then they need to have their own model and test files.
        - Follow the import guidelines (described later)
        - Make sure the prediction service gets access to model artifacts (described
          later)
        - Add the new files to your repository, commit and push
        - Follow or request the Bitbucket setup (described later)
        
        At this point, every git push to the master branch will trigger a deployment
        in the test environment. More information about the deployment pipelines is
        provided later.
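
        Assuming a Bash shell, the environment setup might look like the sketch
        below (the `COGNITE_API_KEY_*` names are taken from the Bitbucket Setup
        section later in this document; the values are placeholders):
        ```bash
        # Target environment for the framework, e.g. test or prod
        export ENV=test
        # Cognite API keys required by the framework (placeholder values)
        export COGNITE_API_KEY_DATA="your-data-key"
        export COGNITE_API_KEY_FUNCTIONS="your-functions-key"
        export COGNITE_API_KEY_FILES="your-files-key"
        ```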
        
        ## User Guide 
        
        ### MLOps Files and Folders
        These are the files and folders from the MLOps framework:
        - `mlops_settings.py` contains the user settings
        - Folder `model_code` is a model template included to show the model interface.
          It is not needed by the framework, but it is recommended to become familiar
          with it.
        - `model_artifact` stores the artifacts for the model shown in `model_code`.
          This is to help to test the model and learn the framework.
        - `mlops` contains deployment code
        - `bitbucket-pipelines.yml` describes the deployment pipeline in Bitbucket
        
        ### Import Guidelines
        The repo's root folder is the base folder when importing. For example, assume
        you have these files in the model code folder: `model_code/model.py`,
        `model_code/helper.py` and `model_code/data.csv`. If `model.py` needs to import
        `helper.py`, use: `import model_code.helper`. If `model.py` needs to read
        `data.csv`, the right path is `os.path.join('model_code', 'data.csv')`. 
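
        As a minimal illustration of the path rule (file names taken from the
        example above):
        ```python
        import os

        # Paths are relative to the repo's root folder, which is the working
        # directory when the code runs.
        data_path = os.path.join('model_code', 'data.csv')
        print(data_path)  # model_code/data.csv on POSIX systems
        ```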
        
        The MLOps library itself can also be imported, e.g. its logger:
        ``` python
        from akerbp.mlops.utils import logger
        logging = logger.get_logger()
        logging.debug("This is a debug log")
        ```
        
        ### Files and Folders Structure
        All the model code and files should be under a single folder. Required files:
        - `model.py`: implements our standard model interface
        - `test_model.py`: tests to verify that the model code is correct and to verify
          correct deployment
        - `requirements.model`: libraries needed (with specific version numbers); it
          can't be called `requirements.txt`.
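
        The authoritative interface is the `model_code` template. Purely as a hedged
        sketch, the function names and signatures below are assumptions based on the
        `train` and `predict` calls mentioned later in this guide:
        ```python
        # model.py - hypothetical sketch of the model interface (not the real template)
        import os
        import pickle

        def train(model_dir, secrets):
            """Train a model, save its artifacts to model_dir, return metadata."""
            model = {"weights": [0.1, 0.2]}  # stand-in for a real model object
            with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
                pickle.dump(model, f)
            return {"accuracy": 0.9}  # user-defined artifact metadata

        def predict(data, secrets):
            """Return one prediction per input sample (stand-in logic)."""
            return [0 for _ in data]
        ```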
        
        Projects with multiple models: models can be in different folders, but if there
        are common files this structure should be followed (when deploying a model the
        top folder of the path is chosen as parent model code folder):
        - `models/model1/`
        - `models/model2/`
        - `models/common_code/`
        
        ### Services
        We consider two types of services: prediction and training. Prediction services
        are deployed together with the model artifacts so that these are available at
        prediction time (downloading them on demand would add waiting time, and files
        written during run time consume RAM). The service output has a status field
        ('ok' or 'error'). If the status is 'ok', the output also has a 'prediction'
        or 'training' field. The former is determined by the `predict` method of the
        model, while the latter combines artifact metadata and model metadata produced
        by the `train` function. The prediction service also has a 'model_id' field to
        keep track of which model was used to predict.
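
        As an illustration of the output format (field names from this section; the
        concrete values are made up):
        ```python
        # Hypothetical successful prediction service response
        prediction_response = {
            "status": "ok",
            "prediction": [0.1, 0.7],         # output of the model's predict method
            "model_id": "model_name/test/3",  # made-up version identifier
        }

        # Hypothetical error response
        error_response = {"status": "error"}
        ```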
        
        ### Model Artifacts for the Prediction Service
        Deployment of a prediction service requires a model artifact folder. Model
        artifacts are segregated by environment (e.g. only production models can be
        deployed to production). Model artifacts are versioned and stored in CDF Files
        together with user-defined metadata. Uploading a new model increases the version
        count by 1 for that model and environment. It's important not to delete model
        files manually, since that would mess with the model manager. When deploying a
        model service, the latest model version is chosen (however, we can discuss the
        possibility of deploying specific versions or filtering by metadata).
        
        The general rule is that model artifacts have to be uploaded manually before
        deployment. If there are multiple models, you need to do this one at a time.
        Code example: 
        ```python 
        from akerbp.mlops.cdf.helpers import set_up_cdf_client
        from akerbp.mlops.cdf.helpers import upload_new_model_version 
        
        set_up_cdf_client()
        metadata = train(model_dir, secrets) # or define it directly
        
        folder_info = upload_new_model_version(
          model_name, 
          env,
          folder_path, 
          metadata
        )
        ```
        Note that `model_name` corresponds to one of the elements in `model names`
        defined in `mlops_settings.py`, `env` is the target environment (where the model
        should be available), `folder_path` is the local model artifact folder and
        `metadata` is a dictionary with artifact metadata, e.g. performance, git commit,
        etc. Each model update adds a new version for that environment. Note that
        updating a model doesn't modify the models used in existing prediction services.
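
        For instance, a hypothetical `metadata` dictionary (the keys are examples,
        not required by the framework):
        ```python
        # Illustrative artifact metadata; keys and values are made up
        metadata = {
            "description": "example model",
            "performance": "RMSE=0.23 on the validation set",
            "git_commit": "0123abc",  # placeholder commit hash
        }
        ```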
        
        Recommended process to update a model:
        1. New model features implemented in a feature branch
        2. New artifact generated and uploaded to test environment
        3. Feature branch merged with master
        4. Test deployment is triggered automatically: prediction service is deployed to
           test with the latest artifacts
        5. Prediction service in test is verified, and if everything looks good:
        6. New artifact uploaded to prod environment
        7. Production deployment is triggered manually: prediction service is deployed
           to prod with the latest artifacts
        
        However, in projects with a training service, you can rely on it to upload a
        first version of the model. The first prediction service deployment will fail,
        but you can deploy again after the training service has produced a model.
        
        Another exception is that, when you deploy from the development environment
        (covered later in this document), the model artifacts in the settings file can
        point to existing local folders. These will then be used for the deployment.
        Version is then fixed to `model_name/dev/1`. Note that these artifacts are not
        uploaded to CDF Files.
        
        ### Local Testing and Deployment
        It's possible to test the services locally, which can help you debug errors
        quickly. This is recommended before a deployment. From your repo's root folder:
        - `python -m pytest model_code` (replace `model_code` by your model code folder
          name)
        - `bash deploy_prediction_service.sh`
        - `bash deploy_training_service.sh` (if there's a training service)
        
        The first one will run your model tests. The last two run model tests but also
        the service tests implemented in the framework and simulate deployment.
        
        If you really want to deploy from your development environment, you can run
        this: `LOCAL_DEPLOYMENT=True bash deploy_prediction_service.sh`
        
        ### Automated Deployments from Bitbucket
        Deployments to the test environment are triggered by commits (you need to push
        them). Deployments to the production environment are enabled manually from the
        Bitbucket pipeline dashboard. Branches that match 'deploy/*' behave as master. 
        
        It is assumed that most projects won't include a training service. A branch that
        matches 'trainpred/*' deploys both prediction and training services. If a
        project includes both services, the pipeline file can instead be edited so
        that master deploys both.
        
        It is possible to schedule the training service in CDF; it can then make sense
        to schedule the deployment pipeline of the model service as well (as often as
        new models are trained).
        
        ### Bitbucket Setup
        The following environments need to be defined in repository settings >
        deployments: 
        - test deployments: test-prediction and test-training, each with ENV=test
        - production deployments: production-prediction and production-training, each
          with ENV=prod
        
        The following need to be defined in repository settings > repository variables:
        COGNITE_API_KEY_DATA, COGNITE_API_KEY_FUNCTIONS, COGNITE_API_KEY_FILES
        
        The pipeline needs to be enabled.
        
        ## Developer Guide
        
        ### Build and Upload Package
        Edit the `setup.py` file and note the following:
         - Register dependencies
         - Bash scripts will be installed in a `bin` folder in the PATH

        To upload the package, create an account in PyPI, then create a token and a
        `$HOME/.pypirc` file. It's possible to build and upload the library from the
        development environment:
        ```bash
        bash build.sh
        ```
        However, there's no need to do that, since the pipeline is set up to run that
        script before the service steps.
        
        The library can be installed locally in developer mode (installed package links
        to the source code, so that it can be modified without the need to reinstall).
        From the package folder:
        ```bash
        pip install -e .
        ```
        
        ### TODO
        Decisions/Findings:
        - Can we install custom libraries in CDF? -> yes, if we upload them to PyPI
        - Generate function-specific tests -> data scientist
        - Can we define environment variables in CDF functions? -> not currently, but
          we can use a settings file
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.8
Description-Content-Type: text/markdown
