Metadata-Version: 2.1
Name: cdk-eks-karpenter
Version: 0.0.6
Summary: CDK construct library that allows you to install Karpenter in an AWS EKS cluster
Home-page: https://github.com/aws-samples/cdk-eks-karpenter.git
Author: Andreas Lindh <elindh@amazon.com>
License: Apache-2.0
Project-URL: Source, https://github.com/aws-samples/cdk-eks-karpenter.git
Platform: UNKNOWN
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: JavaScript
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Typing :: Typed
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE

# cdk-eks-karpenter

This construct configures the necessary dependencies and installs [Karpenter](https://karpenter.sh)
on an EKS cluster managed by AWS CDK.

## Prerequisites

### Usage with EC2 Spot Capacity

If you have not used EC2 Spot in your AWS account before, follow the instructions
[here](https://karpenter.sh/v0.6.3/getting-started/#create-the-ec2-spot-service-linked-role) to create
the service-linked role in your account that allows Karpenter to provision EC2 Spot capacity.
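
If the role does not already exist, it can be created with the AWS CLI, as in the Karpenter getting-started guide (this requires credentials with IAM permissions):

```sh
# Create the service-linked role that allows EC2 Spot requests in this account.
# If the role already exists, the call fails with an InvalidInput error, which
# is harmless and means no further action is needed.
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
```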

## Using

In your CDK project, initialize a new Karpenter construct for your EKS cluster, like this:

```typescript
const cluster = new Cluster(this, 'testCluster', {
  vpc: vpc,
  role: clusterRole,
  version: KubernetesVersion.V1_21,
  defaultCapacity: 1
});

const karpenter = new Karpenter(this, 'Karpenter', {
  cluster: cluster
});
```

This will install and configure Karpenter in your cluster. To have Karpenter do something useful, you
also need to create a [provisioner for AWS](https://karpenter.sh/v0.6.3/aws/provisioning/). You can
do that from CDK using `addProvisioner()`, similar to the example below:

```typescript
karpenter.addProvisioner('spot-provisioner', {
  requirements: [{
    key: 'karpenter.sh/capacity-type',
    operator: 'In',
    values: ['spot']
  }],
  limits: {
    resources: {
      cpu: 20
    }
  },
  provider: {
    subnetSelector: {
      Name: 'PublicSubnet*'
    },
    securityGroupSelector: {
      'aws:eks:cluster-name': cluster.clusterName
    }
  }
});
```
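
Since this is the Python (jsii) distribution of the construct, the provisioner spec is plain JSON-style data. Below is a hedged sketch of the same spot provisioner spec as a Python dict; the snake_case `add_provisioner` method name is inferred from jsii naming conventions, and the cluster name is a placeholder, so verify both against the generated API:

```python
# Provisioner spec for Karpenter expressed as plain Python data. The keys
# mirror the Karpenter provisioner API shown in the example above.
spot_provisioner_spec = {
    "requirements": [{
        "key": "karpenter.sh/capacity-type",
        "operator": "In",
        "values": ["spot"],
    }],
    "limits": {
        "resources": {
            "cpu": 20,
        },
    },
    "provider": {
        "subnetSelector": {
            "Name": "PublicSubnet*",
        },
        "securityGroupSelector": {
            # Placeholder: use your cluster's name (e.g. cluster.cluster_name).
            "aws:eks:cluster-name": "testCluster",
        },
    },
}

# The spec would then be passed to the construct, e.g.:
# karpenter.add_provisioner("spot-provisioner", spot_provisioner_spec)
```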

## Known issues

### Versions earlier than v0.6.1 fail to install

As of [aws/karpenter#1145](https://github.com/aws/karpenter/pull/1145), the Karpenter Helm chart was
refactored to specify `clusterEndpoint` and `clusterName` at the root level of the chart values;
previously, these values were specified under the `controller` key.

## Testing

This construct adds a custom task to [projen](https://projen.io/), so you can test a full deployment
of an EKS cluster with Karpenter installed, as specified in `test/integ.karpenter.ts`, by running the
following:

```sh
export CDK_DEFAULT_REGION=<aws region>
export CDK_DEFAULT_ACCOUNT=<account id>
npx projen test:deploy
```

As the above creates a cluster without EC2 capacity, with CoreDNS and Karpenter running as Fargate
pods, you can test out the functionality of Karpenter by deploying an inflation deployment, which
spins up a number of pods that trigger Karpenter to create worker nodes:

```sh
kubectl apply -f test/inflater-deployment.yml
```

You can clean things up by deleting the deployment and the CDK test stack:

```sh
kubectl delete -f test/inflater-deployment.yml
npx projen test:destroy
```

## FAQ

### I'm not able to launch spot instances

1. Ensure you have the appropriate service-linked role available in your account; for more details,
   see [the Karpenter documentation](https://karpenter.sh/v0.6.3/getting-started/#create-the-ec2-spot-service-linked-role)
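
A quick way to check whether the role exists is the AWS CLI; this sketch assumes credentials with IAM read access:

```sh
# Look up the EC2 Spot service-linked role; a NoSuchEntity error means it
# has not yet been created in this account.
aws iam get-role --role-name AWSServiceRoleForEC2Spot
```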


