Metadata-Version: 2.1
Name: torchcrepe
Version: 0.0.1
Summary: PyTorch implementation of the CREPE pitch tracker
Home-page: https://github.com/maxrmorrison/torchcrepe
Author: Max Morrison
Author-email: maxrmorrison@gmail.com
License: MIT
Description: # torchcrepe
        PyTorch implementation of the CREPE pitch tracker. The original TensorFlow
        implementation can be found [here](https://github.com/marl/crepe/). The
        provided model weights were obtained by converting the "tiny" and "full" models
        using [MMdnn](https://github.com/microsoft/MMdnn), an open-source model
        management framework.
        
        
        ### Installation
        
        Clone this repo and run `pip install .` in the `torchcrepe` directory.
        
        
        ### Usage
        
        ##### Computing pitch and harmonicity from audio
        
        
        ```
        import torchaudio
        import torchcrepe
        
        # Load audio
        audio, sr = torchaudio.load( ... )
        
        # Place the audio on the device you want CREPE to run on
        audio = audio.to( ... )
        
        # Here we'll use a 5 millisecond hop length
        hop_length = int(sr / 200.)
        
        # Provide a sensible frequency range for your domain (upper limit is 2006 Hz)
        # This would be a reasonable range for speech
        fmin = 50
        fmax = 550
        
        # Select a model capacity--one of "tiny" or "full"
        model = 'tiny'
        
        # Compute pitch and harmonicity
        pitch = torchcrepe.predict(audio, sr, hop_length, fmin, fmax, model)
        ```
        
        A harmonicity metric similar to the CREPE confidence score can also be
        extracted by passing `return_harmonicity=True` to `torchcrepe.predict`.
        
        By default, `torchcrepe` uses Viterbi decoding on the softmax of the network
        output. This differs from the original implementation, which uses a
        weighted average near the argmax of binary cross-entropy probabilities.
        The argmax operation can cause double/half frequency errors. These can be
        removed by penalizing large pitch jumps via Viterbi decoding. The `decode`
        submodule provides some options for decoding.
        
        ```
        # Decode using viterbi decoding (default)
        torchcrepe.predict(..., decoder=torchcrepe.decode.viterbi)
        
        # Decode using weighted argmax (as in the original implementation)
        torchcrepe.predict(..., decoder=torchcrepe.decode.weighted_argmax)
        
        # Decode using argmax
        torchcrepe.predict(..., decoder=torchcrepe.decode.argmax)
        ```
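        To make the transition penalty concrete, here is a minimal, self-contained
        sketch of Viterbi decoding over per-frame bin probabilities. This is
        illustrative only, not `torchcrepe`'s actual decoder, and the `penalty`
        parameter is a simplification of a real transition model:

        ```
        import math

        def viterbi(probs, penalty):
            """Decode a best bin path from per-frame probabilities.

            probs: list of frames, each a list of bin probabilities.
            penalty: log-score cost per bin of jumping between adjacent frames.
            """
            n_bins = len(probs[0])
            # Running log-scores per bin, plus backpointers for each transition
            score = [math.log(p + 1e-10) for p in probs[0]]
            pointers = []
            for frame in probs[1:]:
                new_score, ptr = [], []
                for j in range(n_bins):
                    best, arg = max(
                        (score[i] - penalty * abs(i - j), i) for i in range(n_bins))
                    new_score.append(best + math.log(frame[j] + 1e-10))
                    ptr.append(arg)
                score, pointers = new_score, pointers + [ptr]
            # Backtrace from the best final bin
            j = max(range(n_bins), key=lambda k: score[k])
            path = [j]
            for ptr in reversed(pointers):
                j = ptr[j]
                path.append(j)
            return path[::-1]
        ```

        With a sufficient penalty, a spurious peak in a single frame is decoded
        back to the stable bin, whereas frame-wise argmax would follow the jump.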
        
        When harmonicity is low, the pitch estimate is less reliable. For some
        problems, it makes sense to mask these less reliable pitch values. However,
        the harmonicity can be noisy, and the pitch has quantization artifacts.
        `torchcrepe` provides the `filter` and `threshold` submodules for this
        purpose. The filter and threshold parameters should be tuned to your data.
        For clean speech, a 10-20 millisecond window with a threshold of 0.21 has
        worked well.
        
        ```
        # We'll use a 15 millisecond window assuming a hop length of 5 milliseconds
        win_length = 3
        
        # Median filter the noisy harmonicity values
        harmonicity = torchcrepe.filter.median(harmonicity, win_length)
        
        # Remove inharmonic regions
        pitch = torchcrepe.threshold.At(.21)(pitch, harmonicity)
        
        # Optionally smooth pitch to remove quantization artifacts
        pitch = torchcrepe.filter.mean(pitch, win_length)
        ```
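        The filtering and masking steps above can be sketched in plain Python. This
        is a simplified stand-in for `torchcrepe.filter.median` and
        `torchcrepe.threshold.At`; the edge handling and the `None` placeholder for
        masked frames are assumptions for illustration:

        ```
        import statistics

        def median_filter(values, win_length):
            """Centered running median; windows shrink at the sequence edges."""
            half = win_length // 2
            return [statistics.median(values[max(0, i - half):i + half + 1])
                    for i in range(len(values))]

        def threshold_at(pitch, harmonicity, value):
            """Mask pitch (here with None) wherever harmonicity falls below value."""
            return [p if h >= value else None for p, h in zip(pitch, harmonicity)]
        ```

        The median filter removes isolated dips in the harmonicity before
        thresholding, so brief glitches do not punch holes in a voiced region.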
        
        For more fine-grained control over pitch thresholding, see
        `torchcrepe.threshold.Hysteresis`. This is especially useful for removing
        spurious voiced regions caused by noise in the harmonicity values, but
        has more parameters and may require more manual tuning to your data.
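        A hysteresis threshold keeps only runs of frames that stay above a low
        threshold and peak above a high one, which suppresses short noise-induced
        voiced regions. Here is a hedged sketch of the idea; the function name and
        thresholds are illustrative, not `torchcrepe.threshold.Hysteresis` defaults:

        ```
        def hysteresis_mask(harmonicity, low, high):
            """True for frames in a run above `low` whose peak exceeds `high`."""
            mask = [False] * len(harmonicity)
            start = None
            for i, h in enumerate(harmonicity + [0.0]):  # sentinel closes last run
                if h > low and start is None:
                    start = i  # a run above the low threshold begins
                elif h <= low and start is not None:
                    if max(harmonicity[start:i]) > high:
                        mask[start:i] = [True] * (i - start)  # run peaked: keep it
                    start = None
            return mask
        ```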
        
        
        ##### Computing the CREPE model output activations
        
        ```
        probabilities = torchcrepe.infer(torchcrepe.preprocess(audio, sr, hop_length))
        ```
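        Each row of the output scores 360 pitch bins spaced 20 cents apart. The
        constants below are taken from the original CREPE source and give the
        nominal bin-center frequencies; this is a sketch worth verifying against
        the upstream repo:

        ```
        import math

        CENTS_PER_BIN = 20.0  # each of the 360 bins spans 20 cents
        BIN_ZERO_CENTS = 1997.3794084376191  # cents of bin 0, relative to 10 Hz

        def bin_to_hz(bin_index):
            """Convert a CREPE bin index (0-359) to its center frequency in Hz."""
            cents = BIN_ZERO_CENTS + CENTS_PER_BIN * bin_index
            return 10.0 * 2.0 ** (cents / 1200.0)
        ```

        The top bin lands near 2006 Hz, matching the frequency ceiling noted above.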
        
        
        ##### Computing the CREPE embedding space
        
        As in Differentiable Digital Signal Processing (DDSP), this uses the output
        of the fifth max-pooling layer as a pretrained pitch embedding.
        
        ```
        embeddings = torchcrepe.embed(audio, sr, hop_length)
        ```
        
        
        ### Tests
        
        The module tests can be run as follows.
        
        ```
        pip install pytest
        pytest
        ```
        
Keywords: pitch,audio,speech,music,pytorch,crepe
Platform: UNKNOWN
Classifier: License :: OSI Approved :: MIT License
Description-Content-Type: text/markdown
