Metadata-Version: 2.1
Name: baytune
Version: 0.3.9
Summary: Bayesian Tuning and Bandits
Home-page: https://github.com/HDI-Project/BTB
Author: MIT Data To AI Lab
Author-email: dailabmit@gmail.com
License: MIT license
Description: <p align="left">
        <img width="15%" src="https://dai.lids.mit.edu/wp-content/uploads/2018/06/Logo_DAI_highres.png" alt="BTB" />
        <i>An open source project from Data to AI Lab at MIT.</i>
        </p>
        
        ![](https://raw.githubusercontent.com/HDI-Project/BTB/master/docs/images/BTB-Icon-small.png)
        
        A simple, extensible backend for developing auto-tuning systems.
        
        [![Development Status](https://img.shields.io/badge/Development%20Status-2%20--%20Pre--Alpha-yellow)](https://pypi.org/search/?c=Development+Status+%3A%3A+2+-+Pre-Alpha)
        [![PyPi Shield](https://img.shields.io/pypi/v/baytune.svg)](https://pypi.python.org/pypi/baytune)
        [![Travis CI Shield](https://travis-ci.org/HDI-Project/BTB.svg?branch=master)](https://travis-ci.org/HDI-Project/BTB)
        [![Coverage Status](https://codecov.io/gh/HDI-Project/BTB/branch/master/graph/badge.svg)](https://codecov.io/gh/HDI-Project/BTB)
        [![Downloads](https://pepy.tech/badge/baytune)](https://pepy.tech/project/baytune)
        
        * License: [MIT](https://github.com/HDI-Project/BTB/blob/master/LICENSE)
        * Development Status: [Pre-Alpha](https://pypi.org/search/?c=Development+Status+%3A%3A+2+-+Pre-Alpha)
        * Documentation: https://HDI-Project.github.io/BTB
        * Homepage: https://github.com/HDI-Project/BTB
        
        # Overview
        
        BTB ("Bayesian Tuning and Bandits") is a simple, extensible backend for developing auto-tuning
        systems such as AutoML systems. It provides an easy-to-use interface for *tuning* and *selection*.
        
        It is currently being used in several AutoML systems:
        - [ATM](https://github.com/HDI-Project/ATM), a distributed, multi-tenant AutoML system for
        classifier tuning
        - [mit-d3m-ta2](https://github.com/HDI-Project/mit-d3m-ta2/), MIT's system for the DARPA
        [Data-driven discovery of models](https://www.darpa.mil/program/data-driven-discovery-of-models) (D3M) program
        - [AutoBazaar](https://github.com/HDI-Project/AutoBazaar), a flexible, general-purpose
        AutoML system
        
        # Install
        
        ## Requirements
        
        **BTB** has been developed and tested on [Python 3.5, 3.6 and 3.7](https://www.python.org/downloads/).
        
        Although it is not strictly required, using a
        [virtualenv](https://virtualenv.pypa.io/en/latest/) is highly recommended to avoid
        interfering with other software installed on the system where **BTB** is run.
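        
        For example, a virtual environment can be created and activated with the standard library
        `venv` module (an alternative to the `virtualenv` tool linked above):
        
        ```bash
        python3 -m venv btb-venv
        source btb-venv/bin/activate
        ```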
        
        ## Install with pip
        
        The easiest and recommended way to install **BTB** is using [pip](https://pip.pypa.io/en/stable/):
        
        ```bash
        pip install baytune
        ```
        
        This will pull and install the latest stable release from [PyPI](https://pypi.org/).
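        
        Once installed, you can check that the package is importable. Note that even though it is
        distributed as `baytune`, the package is imported as `btb`; the snippet below also assumes
        that it exposes a `__version__` attribute:
        
        ```bash
        python -c "import btb; print(btb.__version__)"
        ```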
        
        If you want to install from source or contribute to the project please read the
        [Contributing Guide](https://hdi-project.github.io/BTB/contributing.html#get-started).
        
        # Quickstart
        
        In this short tutorial we will guide you through the necessary steps to get started using BTB
        to select and tune the best model to solve a Machine Learning problem.
        
        In particular, in this example we will use ``BTBSession`` to solve the [Wine](
        https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data) classification problem
        by selecting between the `DecisionTreeClassifier` and the `SGDClassifier` models from
        [scikit-learn](https://scikit-learn.org/), while also searching for their best hyperparameter
        configuration.
        
        ## Prepare a scoring function
        
        The first step to using the `BTBSession` class is to develop a scoring function.
        
        This is a Python function that, given a model name and a hyperparameter configuration,
        evaluates the performance of the model on your data and returns a score.
        
        ```python3
        from sklearn.datasets import load_wine
        from sklearn.linear_model import SGDClassifier
        from sklearn.metrics import f1_score, make_scorer
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier
        
        
        dataset = load_wine()
        models = {
            'DTC': DecisionTreeClassifier,
            'SGDC': SGDClassifier,
        }
        
        def scoring_function(model_name, hyperparameter_values):
            # Look up the model class by name and instantiate it with the
            # proposed hyperparameter configuration.
            model_class = models[model_name]
            model_instance = model_class(**hyperparameter_values)
            # Evaluate it with cross-validation, using the macro-averaged
            # F1 score, and return the mean score across folds.
            scores = cross_val_score(
                estimator=model_instance,
                X=dataset.data,
                y=dataset.target,
                scoring=make_scorer(f1_score, average='macro')
            )
            return scores.mean()
        ```
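        
        Before handing this function over to BTB, you can sanity-check it by calling it directly
        with a manually chosen configuration:
        
        ```python3
        score = scoring_function('DTC', {'max_depth': 5, 'min_samples_split': 0.1})
        print(score)
        ```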
        
        ## Define the tunable hyperparameters
        
        The second step is to define the hyperparameters that we want to tune for each model as `Tunables`.
        
        ```python3
        from btb.tuning import Tunable
        from btb.tuning import hyperparams as hp
        
        tunables = {
            'DTC': Tunable({
                'max_depth': hp.IntHyperParam(min=3, max=200),
                'min_samples_split': hp.FloatHyperParam(min=0.01, max=1)
            }),
            'SGDC': Tunable({
                'max_iter': hp.IntHyperParam(min=1, max=5000, default=1000),
                'tol': hp.FloatHyperParam(min=1e-3, max=1, default=1e-3),
            })
        }
        ```
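        
        If you prefer to drive the search yourself instead of using a session, a `Tunable` can also
        be passed directly to one of the tuners. The following is a minimal sketch, assuming that
        `GPTuner` can be imported from `btb.tuning` and follows the `propose`/`record` interface:
        
        ```python3
        from btb.tuning import GPTuner
        
        # Tune only the DecisionTreeClassifier hyperparameters, proposing and
        # scoring one candidate configuration per iteration.
        tuner = GPTuner(tunables['DTC'])
        for _ in range(10):
            proposal = tuner.propose()       # dict of hyperparameter values
            score = scoring_function('DTC', proposal)
            tuner.record(proposal, score)    # feed the result back to the tuner
        ```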
        
        ## Start the searching process
        
        Once you have defined a scoring function and the tunable hyperparameters specification of your
        models, you can start searching for the best model and hyperparameter configuration by using
        the `btb.BTBSession`.
        
        All you need to do is create an instance passing the tunable hyperparameters specification
        and the scoring function.
        
        ```python3
        from btb import BTBSession
        
        session = BTBSession(
            tunables=tunables,
            scorer=scoring_function
        )
        ```
        
        And then call the `run` method indicating how many tuning iterations you want the Session to
        perform:
        
        ```python3
        best_proposal = session.run(20)
        ```
        
        The result will be a dictionary indicating the name of the best model found, the
        hyperparameter configuration that was used, and the score that it obtained:
        
        ```
        {
            'id': '826aedc2eff31635444e8104f0f3da43',
            'name': 'DTC',
            'config': {
                'max_depth': 21,
                'min_samples_split': 0.044010284821858835
            },
            'score': 0.907229308339589
        }
        ```
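        
        Once the session is done, the winning proposal can be used to rebuild and fit the final
        model on the full dataset, reusing the `models` dictionary and the `dataset` loaded earlier:
        
        ```python3
        best_model_class = models[best_proposal['name']]
        best_model = best_model_class(**best_proposal['config'])
        best_model.fit(dataset.data, dataset.target)
        ```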
        
        # How does BTB perform?
        
        We have a comprehensive [benchmarking framework](https://github.com/HDI-Project/BTB/tree/master/benchmark)
        that we use to evaluate the performance of our `Tuners`. For every release, we run this
        benchmark against hundreds of challenges, comparing tuners against each other in terms of
        the number of wins. The leaderboard from the latest release is presented below:
        
        ## Number of Wins on the Latest Version
        
        | tuner                   | with ties | without ties |
        |-------------------------|-----------|--------------|
        | `Ax.optimize`           |    177    |            7 |
        | `BTB.GPEiTuner`         |    265    |           55 |
        | `BTB.GPTuner`           |  **296**  |       **86** |
        | `BTB.UniformTuner`      |    204    |           13 |
        | `HyperOpt.tpe.suggest`  |    241    |           44 |
        
        - Detailed results from which this summary emerged are available [here](https://docs.google.com/spreadsheets/d/1npsvf97W8HrayVmehc-ph_Vsrq_Lcn_d/).
        - If you want to compare your own tuner, follow the steps in our benchmarking framework [here](https://github.com/HDI-Project/BTB/tree/master/benchmark).
        - If you have a proposal for a tuner that we should include in our benchmarking, get in touch
        with us at [dailabmit@gmail.com](mailto:dailabmit@gmail.com).
        
        # What's next?
        
        For more details about **BTB** and all its possibilities and features, please check the
        [project documentation site](https://HDI-Project.github.io/BTB/)!
        
        Also do not forget to have a look at the [notebook tutorials](notebooks).
        
        # Citing BTB
        
        If you use BTB, please consider citing our related paper:
        
          Micah J. Smith, Carles Sala, James Max Kanter, and Kalyan Veeramachaneni. ["The Machine Learning Bazaar: Harnessing the ML Ecosystem for Effective System Development."](https://arxiv.org/abs/1905.08942) arXiv Preprint 1905.08942. 2019.
        
          ```bibtex
          @article{smith2019mlbazaar,
            author = {Smith, Micah J. and Sala, Carles and Kanter, James Max and Veeramachaneni, Kalyan},
            title = {The Machine Learning Bazaar: Harnessing the ML Ecosystem for Effective System Development},
            journal = {arXiv e-prints},
            year = {2019},
            eid = {arXiv:1905.08942},
            pages = {arXiv:1905.08942},
            archivePrefix = {arXiv},
            eprint = {1905.08942},
          }
          ```
        
        
        # History
        
        ## 0.3.9 - 2020-05-18
        
        With this release we integrate a new tuning library, `Ax`, with our benchmarking process. A new
        leaderboard including this library has been generated.
        
        ### Resolved Issues
        
        * Issue #194: Integrate `Ax` for benchmarking.
        
        ## 0.3.8 - 2020-05-08
        
        This version adds new functionality that allows running the benchmarking framework on a
        Kubernetes cluster. This way, the benchmarking process can be executed in a distributed
        manner, which reduces the time needed to generate a new leaderboard.
        
        ### Internal improvements
        
        * `btb_benchmark.kubernetes.run_dask_function`: Run a Dask function inside a pod using the
        given config.
        * `btb_benchmark.kubernetes.run_on_kubernetes`: Start a Dask Cluster using dask-kubernetes
        and run a function.
        * Documentation updated.
        * Jupyter notebooks with examples on how to run the benchmarking process and how to run it
        on Kubernetes.
        
        ## 0.3.7 - 2020-04-15
        
        This release brings a new `benchmark` framework with a public leaderboard.
        As part of our benchmarking efforts, we will run the framework at every release and make the
        results public. In each run we compare our tuners to other tuner and optimizer libraries.
        We are constantly adding new libraries for comparison. If you have suggestions for a tuner
        library we should include in our comparison, please contact us via email at
        [dailabmit@gmail.com](mailto:dailabmit@gmail.com).
        
        ### Resolved Issues
        
        * Issue #159: Implement more `MLChallenges` and generate a public leaderboard.
        * Issue #180: Update BTB Benchmarking module.
        * Issue #182: Integrate HyperOPT with benchmarking.
        * Issue #184: Integrate dask into benchmarking.
        
        ## 0.3.6 - 2020-03-04
        
        This release improves `BTBSession` error handling and allows `Tunables` with a cardinality
        equal to 1 to be scored with `BTBSession`. Also, we provide new documentation for
        this version of `BTB`.
        
        ### Internal Improvements
        
        Improved documentation, unittests and integration tests.
        
        ### Resolved Issues
        
        * Issue #164: Improve documentation for `v0.3.5+`.
        * Issue #166: Wrong error raised by BTBSession on too many errors.
        * Issue #170: Tuner has no scores attribute until record is run once.
        * Issue #175: BTBSession crashes when record is not performed.
        * Issue #176: BTBSession fails to select a proper Tunable when normalized_scores becomes None.
        
        ## 0.3.5 - 2020-01-21
        
        With this release we improve `BTBSession` by making private those attributes that are not
        intended to be public or modified by the user, and by improving its documentation.
        
        ### Internal Improvements
        
        Improved docstrings, unittests and public interface of `BTBSession`.
        
        ### Resolved Issues
        
        * Issue #162: Fix session with the given comments on PR 156.
        
        ## 0.3.4 - 2019-12-24
        
        With this release we introduce a `BTBSession` class. This class represents the process of selecting
        and tuning several tunables until the best possible configuration for a specific `scorer` is found.
        We also improved and fixed some minor bugs around the code (described in the issues below).
        
        ### New Features
        
        * `BTBSession` that makes `BTB` more user friendly.
        
        ### Internal Improvements
        
        Improved unittests, removed old dependencies, added more `MLChallenges` and fixed an issue with
        the bound methods.
        
        ### Resolved Issues
        
        * Issue #145: Implement `BTBSession`.
        * Issue #155: Setting default to `None` for `CategoricalHyperParam` is not possible.
        * Issue #157: Metamodel `_MODEL_KWARGS_DEFAULT` becomes mutable.
        * Issue #158: Remove `mock` dependency from the package.
        * Issue #160: Add more Machine Learning Challenges and more estimators.
        
        
        ## 0.3.3 - 2019-12-11
        
        Fix a bug where creating an instance of `Tuner` resulted in an error.
        
        ### Internal Improvements
        
        Improve unittests to use `spec_set` in order to detect errors while mocking an object.
        
        ### Resolved Issues
        
        * Issue #153: Bug with the tuner logger message that prevents creating the Tuner.
        
        ## 0.3.2 - 2019-12-10
        
        With this release we add the new `benchmark` challenge `MLChallenge`, which allows users to
        perform benchmarking over datasets with machine learning estimators, along with some new
        features to make the workflow easier.
        
        ### New Features
        
        * New `MLChallenge` challenge that allows performing cross-validation over datasets and
        machine learning estimators.
        * New `from_dict` function for the `Tunable` class in order to instantiate it from a
        dictionary that contains information about its hyperparameters.
        * New `default` value for each hyperparameter type.
        
        ### Resolved Issues
        
        * Issue #68: Remove `btb.tuning.constants` module.
        * Issue #120: Tuner repr not helpful.
        * Issue #121: HyperParameter repr not helpful.
        * Issue #141: Implement proper logging for the tuning section.
        * Issue #150: Implement Tunable `from_dict`.
        * Issue #151: Add default value for hyperparameters.
        * Issue #152: Support `None` as a choice in `CategoricalHyperParameters`.
        
        ## 0.3.1 - 2019-11-25
        
        With this release we introduce a `benchmark` module for `BTB`, which allows users to perform
        a benchmark over a series of `challenges`.
        
        ### New Features
        
        * New `benchmark` module.
        * New submodule named `challenges` to work together with the `benchmark` module.
        
        ### Resolved Issues
        
        * Issue #139: Implement a Benchmark for BTB
        
        ## 0.3.0 - 2019-11-11
        
        With this release we introduce an improved `BTB` that has a major reorganization of the project,
        with emphasis on an easier way of interacting with `BTB` and an easy way of developing, testing
        and contributing new acquisition functions, metamodels, tuners and hyperparameters.
        
        ### New project structure
        
        The new major reorganization comes with the `btb.tuning` module. This module provides everything
        needed for the `tuning` process and comes with three new additions: `Acquisition`, `Metamodel` and
        `Tunable`. There is also an update to the `Hyperparameters` and `Tuners`. These changes are meant
        to help developers and contributors to easily develop, test and contribute new `Tuners`.
        
        ### New API
        
        There is a slightly new way of using `BTB`, as the new `Tunable` class is introduced. It is
        meant to be the only required object to instantiate a `Tuner`. This `Tunable` class represents
        a collection of `HyperParams` that need to be tuned as a whole, at once. Now, in order to
        create a `Tuner`, a `Tunable` instance must be created first with the `hyperparameters` of the
        `objective function`.
        
        ### New Features
        
        * New `Hyperparameters` that allow easier interaction for the end user.
        * New `Tunable` class that manages a collection of `Hyperparameters`.
        * New `Tuner` class that is a Python mixin that requires `Acquisition` and `Metamodel` as
        parents. It also now works with a single `Tunable` object.
        * New `Acquisition` class, meant to implement an acquisition function to be inherited by a `Tuner`.
        * New `Metamodel` class, meant to implement everything that a certain `model` needs, and to
        be inherited by the `Tuner`.
        * Reorganization of the `selection` module to follow a similar `API` to `tuning`.
        
        ### Resolved Issues
        
        * Issue #131: Reorganize the project structure.
        * Issue #133: Implement Tunable class to control a list of hyperparameters.
        * Issue #134: Implementation of Tuners for the new structure.
        * Issue #140: Reorganize selectors.
        
        ## 0.2.5
        
        ### Bug Fixes
        
        * Issue #115: HyperParameter subclass instantiation not working properly
        
        ## 0.2.4
        
        ### Internal Improvements
        
        * Issue #62: Test for `None` in `HyperParameter.cast` instead of `HyperParameter.__init__`
        
        ### Bug fixes
        
        * Issue #98: Categorical hyperparameters do not support `None` as input
        * Issue #89: Fix the computation of `avg_rewards` in `BestKReward`
        
        ## 0.2.3
        
        ### Bug Fixes
        
        * Issue #84: Error in GP tuning when only one parameter is present bug
        * Issue #96: Fix pickling of HyperParameters
        * Issue #98: Fix implementation of the GPEi tuner
        
        ## 0.2.2
        
        ### Internal Improvements
        
        * Updated documentation
        
        ### Bug Fixes
        
        * Issue #94: Fix error on Python 2 caused by unicode `param_type`.
        
        ## 0.2.1
        
        ### Bug fixes
        
        * Issue #74: `ParamTypes.STRING` tunables do not work
        
        ## 0.2.0
        
        ### New Features
        
        * New Recommendation module
        * New HyperParameter types
        * Improved documentation and examples
        * Fully tested Python 2.7, 3.4, 3.5 and 3.6 compatibility
        * HyperParameter copy and deepcopy support
        * Replace print statements with logging
        
        ### Internal Improvements
        
        * Integrated with Travis-CI
        * Exhaustive unit testing
        * New implementation of HyperParameter
        * Tuner builds a grid of real values instead of indices
        * Resolve Issue #29: Make args explicit in `__init__` methods
        * Resolve Issue #34: make all imports explicit
        
        ### Bug Fixes
        
        * Fix error from mixing string/numerical hyperparameters
        * Inverse transform for categorical hyperparameter returns single item
        
        ## 0.1.2
        
        * Issue #47: Add missing requirements in v0.1.1 setup.py
        * Issue #46: Error on v0.1.1: 'GP' object has no attribute 'X'
        
        ## 0.1.1
        
        * First release.
        
Keywords: machine learning hyperparameters tuning classification
Platform: UNKNOWN
Classifier: Development Status :: 2 - Pre-Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Natural Language :: English
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Requires-Python: >=3.5
Description-Content-Type: text/markdown
Provides-Extra: test
Provides-Extra: dev
