Metadata-Version: 2.1
Name: jina
Version: 3.14.1
Summary: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
Home-page: https://github.com/jina-ai/jina/
Author: Jina AI
Author-email: hello@jina.ai
License: Apache 2.0
Download-URL: https://github.com/jina-ai/jina/tags
Project-URL: Documentation, https://docs.jina.ai
Project-URL: Source, https://github.com/jina-ai/jina/
Project-URL: Tracker, https://github.com/jina-ai/jina/issues
Description: <p align="center">
        <!-- survey banner start -->
        <a href="https://10sw1tcpld4.typeform.com/to/EGAEReM7?utm_source=readme&utm_medium=github&utm_campaign=user%20experience&utm_term=feb2023&utm_content=survey">
          <img src="https://github.com/jina-ai/jina/blob/master/.github/banner.svg?raw=true">
        </a>
        <!-- survey banner end -->
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/docs/_static/logo-light.svg?raw=true" alt="Jina logo: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · Cloud Native" width="150px"></a>
        <br><br><br>
        </p>
        
        <p align="center">
        <b>Build multimodal AI services with cloud native technologies</b>
        </p>
        
        
        <p align="center">
        <a href="https://pypi.org/project/jina/"><img alt="PyPI" src="https://img.shields.io/pypi/v/jina?label=Release&style=flat-square"></a>
        <a href="https://codecov.io/gh/jina-ai/jina"><img alt="Codecov branch" src="https://img.shields.io/codecov/c/github/jina-ai/jina/master?&logo=Codecov&logoColor=white&style=flat-square"></a>
        <a href="https://jina.ai/slack"><img src="https://img.shields.io/badge/Slack-3.6k-blueviolet?logo=slack&amp;logoColor=white&style=flat-square"></a>
        <a href="https://pypistats.org/packages/jina"><img alt="PyPI - Downloads from official pypistats" src="https://img.shields.io/pypi/dm/jina?style=flat-square"></a>
        <a href="https://github.com/jina-ai/jina/actions/workflows/cd.yml"><img alt="Github CD status" src="https://github.com/jina-ai/jina/actions/workflows/cd.yml/badge.svg"></a>
        </p>
        
        <!-- start jina-description -->
        
        Jina is an MLOps framework to build multimodal AI **services** and **pipelines**, then **serve**, **scale** and **deploy** them to a production-ready environment like Kubernetes or Jina AI Cloud. Jina handles the infrastructure complexity, making advanced solution engineering and cloud-native technologies accessible to every developer.
        
        <p align="center">
        <strong><a href="#build-ai--ml-services">Build and deploy a gRPC microservice</a> • <a href="#build-a-pipeline">Build and deploy a pipeline</a></strong>
        </p>
        
        Applications built with Jina enjoy the following features out of the box:
        
        🌌 **Universal**
          - Build applications that deliver fresh insights from multiple data types such as text, image, audio, video, 3D mesh and PDF, using [LF AI & Data's DocArray](https://github.com/docarray/docarray).
          - Support for all mainstream deep learning frameworks.
          - A polyglot gateway that supports gRPC, WebSockets, HTTP and GraphQL protocols, with TLS.
        
        ⚡ **Performance**
          - Intuitive design pattern for high-performance microservices.
          - Easy scaling: set replicas and sharding in one line.
          - Duplex streaming between client and server.
          - Async and non-blocking data processing over dynamic flows.
        
        ☁️ **Cloud native**
          - Seamless Docker container integration: sharing, exploring, sandboxing, versioning and dependency control via [Executor Hub](https://cloud.jina.ai).
          - Full observability via OpenTelemetry, Prometheus and Grafana.
          - Fast deployment to Kubernetes and Docker Compose.
        
        🍱 **Ecosystem**
          - Improved engineering efficiency thanks to the Jina AI ecosystem, so you can focus on innovating with the data applications you build.
          - Free CPU/GPU hosting via [Jina AI Cloud](https://cloud.jina.ai).
        
        <!-- end jina-description -->
        
        <p align="center">
        <a href="#"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/core-tree-graph.svg?raw=true" alt="Jina in Jina AI neural search ecosystem" width="100%"></a>
        </p>
        
        ## [Documentation](https://docs.jina.ai)
        
        ## Install 
        
        ```bash
        pip install jina
        ```
        
        Find more install options on [Apple Silicon](https://docs.jina.ai/get-started/install/apple-silicon-m1-m2/)/[Windows](https://docs.jina.ai/get-started/install/windows/).
        
        ## Get Started
        
        ### Basic Concepts
        
        Jina has three fundamental concepts:
        
        - A [**Document**](https://docarray.jina.ai/) (from [DocArray](https://github.com/docarray/docarray)) is the input/output format in Jina.
        - An [**Executor**](https://docs.jina.ai/concepts/executor/) is a Python class that transforms and processes Documents.
        - A [**Deployment**](https://docs.jina.ai/concepts/executor/serve/#serve-directly) serves a single Executor, while a [**Flow**](https://docs.jina.ai/concepts/flow/) serves Executors chained into a pipeline.
        
        [The full glossary is explained here](https://docs.jina.ai/concepts/preliminaries/#).
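        As a minimal sketch of how these pieces fit together (using a toy `Upper` Executor invented for this example), an Executor is an ordinary Python class, so you can exercise it directly before serving it with a Deployment:

        ```python
        from docarray import Document, DocumentArray
        from jina import Deployment, Executor, requests


        class Upper(Executor):
            """Toy Executor that uppercases the text of each Document."""

            @requests  # bind this method to all endpoints
            def upper(self, docs: DocumentArray, **kwargs):
                for doc in docs:
                    doc.text = doc.text.upper()


        # Executors are plain Python classes, so they can be called directly:
        docs = DocumentArray([Document(text='hello jina')])
        Upper().upper(docs)
        print(docs[0].text)  # HELLO JINA

        # ...or served over gRPC with a Deployment:
        # with Deployment(uses=Upper) as dep:
        #     dep.block()
        ```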
        
        ---
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/streamline-banner.png?raw=true" alt="Jina: Streamline AI & ML Product Delivery" width="100%"></a>
        </p>
        
        ### Build AI & ML Services
        <!-- start build-ai-services -->
        
        [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)
        
        Let's build a fast, reliable and scalable gRPC-based AI service. In Jina we call this an **[Executor](https://docs.jina.ai/concepts/executor/)**. Our simple Executor will use Facebook's mBART-50 model to translate French to English. We'll then use a **Deployment** to serve it.
        
        > **Note**
        > A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).
        
        > **Note**
        > Run the [code in Colab](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb) to install all dependencies.
        
        Let's implement the service's logic:
        
        <table>
        <tr>
        <th><code>translate_executor.py</code> </th> 
        </tr>
        <tr>
        <td>
        
        ```python
        from docarray import DocumentArray
        from jina import Executor, requests
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
        
        
        class Translator(Executor):
            def __init__(self, **kwargs):
                super().__init__(**kwargs)
                self.tokenizer = AutoTokenizer.from_pretrained(
                    "facebook/mbart-large-50-many-to-many-mmt", src_lang="fr_XX"
                )
                self.model = AutoModelForSeq2SeqLM.from_pretrained(
                    "facebook/mbart-large-50-many-to-many-mmt"
                )
        
            @requests
            def translate(self, docs: DocumentArray, **kwargs):
                for doc in docs:
                    doc.text = self._translate(doc.text)
        
            def _translate(self, text):
                encoded_en = self.tokenizer(text, return_tensors="pt")
                generated_tokens = self.model.generate(
                    **encoded_en, forced_bos_token_id=self.tokenizer.lang_code_to_id["en_XX"]
                )
                return self.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[
                    0
                ]
        ```
        
        </td>
        </tr>
        </table>
        
        Then we deploy it with either the Python API or YAML:
        <div class="table-wrapper">
        <table>
        <tr>
        <th> Python API: <code>deployment.py</code> </th> 
        <th> YAML: <code>deployment.yml</code> </th>
        </tr>
        <tr>
        <td>
        
        ```python
        from jina import Deployment
        from translate_executor import Translator  # the Executor defined above
        
        with Deployment(uses=Translator, timeout_ready=-1) as dep:
            dep.block()
        ```
        
        </td>
        <td>
        
        ```yaml
        jtype: Deployment
        with:
          uses: Translator
          py_modules:
            - translate_executor.py # name of the module containing Translator
          timeout_ready: -1
        ```
        
        And run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`
        
        </td>
        </tr>
        </table>
        </div>
        
        ```text
        ──────────────────────────────────────── 🎉 Deployment is ready to serve! ─────────────────────────────────────────
        ╭────────────── 🔗 Endpoint ───────────────╮
        │  ⛓      Protocol                   GRPC │
        │  🏠        Local          0.0.0.0:12345  │
        │  🔒      Private      172.28.0.12:12345  │
        │  🌍       Public    35.230.97.208:12345  │
        ╰──────────────────────────────────────────╯
        ```
        
        Use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the service:
        
        ```python
        from docarray import Document
        from jina import Client
        
        french_text = Document(
            text='un astronaute est en train de faire une promenade dans un parc'
        )
        
        client = Client(port=12345)  # use port from output above
        response = client.post(on='/', inputs=[french_text])
        
        print(response[0].text)
        ```
        
        ```text
        an astronaut is walking in a park
        ```
        
        <!-- end build-ai-services -->
        
        ### Build a pipeline
        
        <!-- start build-pipelines -->
        [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)
        
        Sometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.jina.ai/concepts/flow/) comes in.
        
        A Flow is a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) pipeline composed of a set of steps. It orchestrates a set of [Executors](https://docs.jina.ai/concepts/executor/) and a [Gateway](https://docs.jina.ai/concepts/gateway/) to offer an end-to-end service.
        
        > **Note**
        > If you just want to serve a single Executor, you can use a [Deployment](#build-ai--ml-services).
        
        For instance, let's combine [our French translation service](#build-ai--ml-services) with a Stable Diffusion image generation service from Jina AI's [Executor Hub](https://cloud.jina.ai/executors). Chaining these services together into a [Flow](https://docs.jina.ai/concepts/flow/) will give us a multilingual image generation service.
        
        Build the Flow with either Python or YAML:
        
        <div class="table-wrapper">
        <table>
        <tr>
        <th> Python API: <code>flow.py</code> </th> 
        <th> YAML: <code>flow.yml</code> </th>
        </tr>
        <tr>
        <td>
        
        ```python
        from jina import Flow
        from translate_executor import Translator  # the Executor defined above

        flow = (
            Flow()
            .add(uses=Translator, timeout_ready=-1)
            .add(  # use the Executor from Executor Hub
                uses='jinaai://jina-ai/TextToImage',
                timeout_ready=-1,
                install_requirements=True,
            )
        )
        
        with flow:
            flow.block()
        ```
        
        </td>
        <td>
        
        ```yaml
        jtype: Flow
        executors:
          - uses: Translator
            timeout_ready: -1
            py_modules:
              - translate_executor.py
          - uses: jinaai://jina-ai/TextToImage
            timeout_ready: -1
            install_requirements: true
        ```
        
        Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`
        
        </td>
        </tr>
        </table>
        </div>
        
        ```text
        ─────────────────────────────────────────── 🎉 Flow is ready to serve! ────────────────────────────────────────────
        ╭────────────── 🔗 Endpoint ───────────────╮
        │  ⛓      Protocol                   GRPC  │
        │  🏠        Local          0.0.0.0:12345  │
        │  🔒      Private      172.28.0.12:12345  │
        │  🌍       Public    35.240.201.66:12345  │
        ╰──────────────────────────────────────────╯
        ```
        
        Then, use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the Flow:
        
        ```python
        from jina import Client, Document
        
        client = Client(port=12345)  # use port from output above
        
        french_text = Document(
            text='un astronaute est en train de faire une promenade dans un parc'
        )
        
        response = client.post(on='/', inputs=[french_text])
        
        response[0].display()
        ```
        
        
        ![stable-diffusion-output.png](https://raw.githubusercontent.com/jina-ai/jina/master/.github/stable-diffusion-output.png)
        
        
        You can also deploy a Flow to JCloud.
        
        First, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.jina.ai/concepts/jcloud/yaml-spec/) by specifying resource requirements and using containerized Hub Executors.
        
        Then, use the `jina cloud deploy` command to deploy to the cloud:
        
        
        ```shell
        wget https://raw.githubusercontent.com/jina-ai/jina/master/.github/getting-started/jcloud-flow.yml
        jina cloud deploy jcloud-flow.yml
        ```
        
        ⚠️ **Caution: Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.**
        
        Read more about [deploying Flows to JCloud](https://docs.jina.ai/concepts/jcloud/#deploy).
        
        <!-- end build-pipelines -->
        
        Check [the getting-started project source code](https://github.com/jina-ai/jina/tree/master/.github/getting-started).
        
        ---
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/no-complexity-banner.png?raw=true" alt="Jina: No Infrastructure Complexity, High Engineering Efficiency" width="100%"></a>
        </p>
        
        Why not just use standard Python to build that microservice and pipeline? Jina accelerates your application's time to market by making it scalable and cloud-native. Jina also handles infrastructure complexity in production and other Day-2 operations, so you can focus on the data application itself.
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/scalability-banner.png?raw=true" alt="Jina: Scalability and concurrency with ease" width="100%"></a>
        </p>
        
        ### Easy scalability and concurrency
        
        Jina comes with scalability features out of the box like [replicas](https://docs.jina.ai/concepts/flow/scale-out/#replicate-executors), [shards](https://docs.jina.ai/concepts/flow/scale-out/#customize-polling-behaviors) and [dynamic batching](https://docs.jina.ai/concepts/executor/dynamic-batching/).
        This lets you easily increase your application's throughput.
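        As a sketch (with a hypothetical placeholder `MyExecutor`), replicas and shards are single keyword arguments on `Flow.add`; nothing starts until the Flow context is entered:

        ```python
        from docarray import DocumentArray
        from jina import Executor, Flow, requests


        class MyExecutor(Executor):
            """Hypothetical placeholder Executor."""

            @requests
            def process(self, docs: DocumentArray, **kwargs):
                ...


        # Scale out in one line: 3 replicas of the Executor, data split across 2 shards
        f = Flow().add(uses=MyExecutor, replicas=3, shards=2)

        # with f:
        #     f.block()
        ```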
        
        Let's scale a Stable Diffusion Executor deployment with replicas and dynamic batching:
        
        * Create two replicas, with [a GPU assigned for each](https://docs.jina.ai/concepts/flow/scale-out/#replicate-on-multiple-gpus).
        * Enable dynamic batching to process incoming parallel requests together in the same model inference call.
        
        
        <div class="table-wrapper">
        <table>
        <tr>
        <th> Normal Deployment </th> 
        <th> Scaled Deployment </th>
        </tr>
        <tr>
        <td>
        
        ```yaml
        jtype: Deployment
        with:
          timeout_ready: -1
          uses: jinaai://jina-ai/TextToImage
          install_requirements: true
        ```
        
        </td>
        <td>
        
        ```yaml
        jtype: Deployment
        with:
          timeout_ready: -1
          uses: jinaai://jina-ai/TextToImage
          install_requirements: true
          env:
            CUDA_VISIBLE_DEVICES: RR
          replicas: 2
          uses_dynamic_batching: # configure dynamic batching
            /default:
              preferred_batch_size: 10
              timeout: 200
        ```
        
        </td>
        </tr>
        </table>
        </div>
        
        
        Assuming your machine has two GPUs, the scaled Deployment YAML gives better throughput than the normal Deployment.
        
        These features apply to both [Deployment YAML](https://docs.jina.ai/concepts/executor/deployment-yaml-spec/#deployment-yaml-spec) and [Flow YAML](https://docs.jina.ai/concepts/flow/yaml-spec/). Thanks to the YAML syntax, you can inject deployment configurations regardless of Executor code.
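        The same knobs are also exposed as keyword arguments in the Python API. A rough sketch, with a hypothetical local `MyExecutor` standing in for the Hub image (the argument names mirror the YAML keys; verify them against the Deployment reference for your Jina version):

        ```python
        from docarray import DocumentArray
        from jina import Deployment, Executor, requests


        class MyExecutor(Executor):
            """Hypothetical placeholder Executor."""

            @requests
            def generate(self, docs: DocumentArray, **kwargs):
                ...


        # Keyword arguments mirror the YAML keys of the scaled Deployment above
        dep = Deployment(
            uses=MyExecutor,
            timeout_ready=-1,
            env={'CUDA_VISIBLE_DEVICES': 'RR'},
            replicas=2,
            uses_dynamic_batching={
                '/default': {'preferred_batch_size': 10, 'timeout': 200}
            },
        )

        # with dep:
        #     dep.block()
        ```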
        
        ---
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/container-banner.png?raw=true" alt="Jina: Seamless Container Integration" width="100%"></a>
        </p>
        
        ### Seamless container integration
        
        Use [Executor Hub](https://cloud.jina.ai) to share your Executors or use public/private Executors, with no need to worry about dependencies.
        
        To create an Executor:
        
        ```bash
        jina hub new 
        ```
        
        To push it to Executor Hub:
        
        ```bash
        jina hub push .
        ```
        
        To use a Hub Executor in your Flow:
        
        |        | Docker container                           | Sandbox                                     | Source                              |
        |--------|--------------------------------------------|---------------------------------------------|-------------------------------------|
        | YAML   | `uses: jinaai+docker://<username>/MyExecutor`        | `uses: jinaai+sandbox://<username>/MyExecutor`        | `uses: jinaai://<username>/MyExecutor`        |
        | Python | `.add(uses='jinaai+docker://<username>/MyExecutor')` | `.add(uses='jinaai+sandbox://<username>/MyExecutor')` | `.add(uses='jinaai://<username>/MyExecutor')` |
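        For example, a minimal Flow YAML pulling the containerized version (the `<username>/MyExecutor` placeholder stands for an Executor you pushed) might look like this:

        ```yaml
        jtype: Flow
        executors:
          - uses: jinaai+docker://<username>/MyExecutor
            timeout_ready: -1
        ```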
        
        Executor Hub manages everything on the backend:
        
        - Automated builds on the cloud
        - Cost-efficient storage, deployment and delivery of Executors
        - Automatic resolution of version conflicts and dependencies
        - Instant delivery of any Executor via [Sandbox](https://docs.jina.ai/concepts/executor/hub/sandbox/) without pulling anything locally
        
        ---
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/cloud-native-banner.png?raw=true" alt="Jina: Get on the fast lane to cloud-native" width="100%"></a>
        </p>
        
        ### Get on the fast lane to cloud-native
        
        Using Kubernetes with Jina is easy:
        
        ```bash
        jina export kubernetes flow.yml ./my-k8s
        kubectl apply -R -f my-k8s
        ```
        
        And so is Docker Compose:
        
        ```bash
        jina export docker-compose flow.yml docker-compose.yml
        docker-compose up
        ```
        
        > **Note**
        > You can also export Deployment YAML to [Kubernetes](https://docs.jina.ai/concepts/executor/serve/#serve-via-kubernetes) and [Docker Compose](https://docs.jina.ai/concepts/executor/serve/#serve-via-docker-compose).
        
        Likewise, tracing and monitoring with OpenTelemetry is straightforward:
        
        ```python
        from docarray import DocumentArray
        from jina import Executor, requests
        
        
        class Encoder(Executor):
            @requests
            def encode(self, docs: DocumentArray, tracing_context, **kwargs):
                # `preprocessing` and `model_inference` are placeholders for your own logic
                with self.tracer.start_as_current_span(
                    'encode', context=tracing_context
                ) as span:
                    with self.monitor(
                        'preprocessing_seconds', 'Time preprocessing the requests'
                    ):
                        docs.tensors = preprocessing(docs)
                    with self.monitor(
                        'model_inference_seconds', 'Time running model inference on the requests'
                    ):
                        docs.embedding = model_inference(docs.tensors)
        ```
        
        You can integrate Jaeger or any other distributed tracing tool to collect and visualize request-level and application-level service operation attributes. This helps you analyze the request-response lifecycle, application behavior and performance.
        
        To use Grafana, [download this JSON](https://github.com/jina-ai/example-grafana-prometheus/blob/main/grafana-dashboards/flow-histogram-metrics.json) and import it into Grafana:
        
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/grafana-histogram-metrics.png?raw=true" alt="Grafana dashboard with Flow histogram metrics" width="70%"></a>
        </p>
        
        To trace requests with Jaeger:
        <p align="center">
        <a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/jaeger-tracing-example.png?raw=true" alt="Request tracing with Jaeger" width="70%"></a>
        </p>
        
        Which cloud-native technology do you still find challenging? [Tell us](https://github.com/jina-ai/jina/issues) and we'll handle the complexity and make it easy for you.
        
        <!-- start support-pitch -->
        
        ## Support
        
        - Join our [Slack community](https://jina.ai/slack) and chat with other community members about ideas.
        - Join our [Engineering All Hands](https://youtube.com/playlist?list=PL3UBBWOUVhFYRUa_gpYYKBqEAkO4sxmne) meet-up to discuss your use case and learn about Jina's new features.
            - **When?** The second Tuesday of every month
            - **Where?**
              Zoom ([see our public events calendar](https://calendar.google.com/calendar/embed?src=c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com&ctz=Europe%2FBerlin)/[.ical](https://calendar.google.com/calendar/ical/c_1t5ogfp2d45v8fit981j08mcm4%40group.calendar.google.com/public/basic.ics))
              and [live stream on YouTube](https://youtube.com/c/jina-ai)
        - Subscribe to the latest video tutorials on our [YouTube channel](https://youtube.com/c/jina-ai)
        
        ## Join Us
        
        Jina is backed by [Jina AI](https://jina.ai) and licensed under [Apache-2.0](./LICENSE).
        
        <!-- end support-pitch -->
        
Keywords: jina cloud-native cross-modal multimodal neural-search query search index elastic neural-network encoding embedding serving docker container image video audio deep-learning mlops
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Unix Shell
Classifier: Environment :: Console
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Database :: Database Engines/Servers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Multimedia :: Video
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Topic :: Software Development
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Description-Content-Type: text/markdown
Provides-Extra: numpy
Provides-Extra: core
Provides-Extra: protobuf
Provides-Extra: grpcio
Provides-Extra: grpcio-reflection
Provides-Extra: grpcio-health-checking
Provides-Extra: pyyaml
Provides-Extra: packaging
Provides-Extra: docarray
Provides-Extra: jina-hubble-sdk
Provides-Extra: jcloud
Provides-Extra: opentelemetry-api
Provides-Extra: opentelemetry-instrumentation-grpc
Provides-Extra: perf
Provides-Extra: uvloop
Provides-Extra: devel
Provides-Extra: standard
Provides-Extra: prometheus_client
Provides-Extra: opentelemetry-sdk
Provides-Extra: opentelemetry-exporter-otlp
Provides-Extra: opentelemetry-exporter-prometheus
Provides-Extra: opentelemetry-instrumentation-aiohttp-client
Provides-Extra: opentelemetry-instrumentation-fastapi
Provides-Extra: standrad
Provides-Extra: opentelemetry-exporter-otlp-proto-grpc
Provides-Extra: fastapi
Provides-Extra: uvicorn[standard]
Provides-Extra: docker
Provides-Extra: pathspec
Provides-Extra: filelock
Provides-Extra: requests
Provides-Extra: websockets
Provides-Extra: pydantic
Provides-Extra: python-multipart
Provides-Extra: aiofiles
Provides-Extra: aiohttp
Provides-Extra: aiostream
Provides-Extra: scipy
Provides-Extra: test
Provides-Extra: Pillow
Provides-Extra: pytest
Provides-Extra: pytest-timeout
Provides-Extra: pytest-mock
Provides-Extra: pytest-cov
Provides-Extra: coverage
Provides-Extra: pytest-repeat
Provides-Extra: pytest-asyncio
Provides-Extra: pytest-reraise
Provides-Extra: flaky
Provides-Extra: mock
Provides-Extra: requests-mock
Provides-Extra: pytest-custom_exit_code
Provides-Extra: black
Provides-Extra: kubernetes
Provides-Extra: pytest-kind
Provides-Extra: pytest-lazy-fixture
Provides-Extra: torch
Provides-Extra: cicd
Provides-Extra: psutil
Provides-Extra: strawberry-graphql
Provides-Extra: sgqlc
Provides-Extra: bs4
Provides-Extra: jsonschema
Provides-Extra: portforward
Provides-Extra: tensorflow
Provides-Extra: opentelemetry-test-utils
Provides-Extra: prometheus-api-client
Provides-Extra: watchfiles
Provides-Extra: all
