Terraform local-exec provisioner on an EC2 instance fails with “Permission denied”
Trying to provision an EKS cluster with Terraform.
terraform apply
fails with:
module.eks_node.null_resource.export_rendered_template: Provisioning with 'local-exec'...
module.eks_node.null_resource.export_rendered_template (local-exec): Executing: ["/bin/sh" "-c" "cat > /data_output.sh <<EOL\n#!/bin/bash -xe\n[... full rendered user-data script, newlines escaped as \n; see the error output below ...]\nEOL"]
module.eks_node.null_resource.export_rendered_template (local-exec): /bin/sh: /data_output.sh: Permission denied
Error: Error applying plan:
1 error(s) occurred:
* module.eks_node.null_resource.export_rendered_template: Error running command 'cat > /data_output.sh <<EOL
#!/bin/bash -xe
CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki
CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt
mkdir -p $CA_CERTIFICATE_DIRECTORY
echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1URXhPVEUxTXpFeE5Wb1hEVEk0TVRFeE5qRTFNekV4TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5MClhyT3ByL0dpeDFNdXJKTE1Za3Z5Mk83TDdnR1U2VVp0a2QvMmg1WmxYMUxUNVBtUk5XeTllaEErMW9KWE9MaHgKNWQ5L2tRMWcxdHJ3LzlpV0lPeENMN1oydjh5WVU2bjFuUTV3VXRSUWlIelFkcTlDL0ZNMmUzNVFGOFE5QWpFbQpkcGFob1JKVWk4UHJVYXRwV1NmYmZqM2F6RVFDcWgrMFF6TDdzUE1tS2dlOVpqbUw2VmFqNTNBSHZtcUkweUJYClQyR1ZySFJLUW9zZ2JwTHdIZE95andCejlvS3RScml6UnN2U2dPSVNNdDRIbVhDcDBwVGxBK2NGUFE4azBYdFoKaTFlcEc4aklCMGw0VFV3eGJROFFEUUxET25iUHFEdTFVV3cxSmIvaUZIZkV2Z0JrUTFpVjJDQmRhNzZkZjhDSgpSYzFqOERzeCtnYkFYdjhadzZNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIcWxLZUdoN3dUM2U0N0RweEZhVW91WnYvbGwKbUtLR1V3Ly9RNWhSblVySEN6NU1Pc1gvcU5aWVM1aGFocjNJS2xqZnlBMkNJbk82cE9KeEcxVEVQOVRvdkRlcApnWlZhYVE0d3dqQTI0R2gwb1hrNGJ5TWNKaEhpRURmT0QvemlWMlN4ZEN4MHhGS0hQL3NRcm9RR1JyM0RPeTFwCmUvRzN6cndHN0FoSEg4dWJTQlZFMUVpZ0tvaXlpWTVnMGJVZTZUT2FobEdaMkxkTytheVg4UWdYOTNXOUVzdWoKTTIzaFA2T3pnRjhhbVpGZEpFRGNkR0dhdm5wMFBlbTJGb1dpY3hvZlFsWUhacnA4WGk3Z1JHY0RjakJuWFZxcAptQkpnSkNuSkJ2UWZ0WXJPbG1mZUZMMXpKckM5WUdUYmxjbUNuTi95UW5VcVdZZHJDLzNTYWVHOEZTbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=" | base64 -d > $CA_CERTIFICATE_FILE_PATH
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
sed -i s,MASTER_ENDPOINT,https://232F45B8AFA817FD332477B84D964B2F.yl4.eu-west-1.eks.amazonaws.com,g /var/lib/kubelet/kubeconfig
sed -i s,CLUSTER_NAME,terra-dev,g /var/lib/kubelet/kubeconfig
sed -i s,REGION,eu-west-1,g /etc/systemd/system/kubelet.service
sed -i s,MAX_PODS,20,g /etc/systemd/system/kubelet.service
sed -i s,MASTER_ENDPOINT,https://232F45B8AFA817FD332477B84D964B2F.yl4.eu-west-1.eks.amazonaws.com,g /etc/systemd/system/kubelet.service
sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service
DNS_CLUSTER_IP=10.100.0.10
if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi
sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service
sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig
sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl restart kubelet
EOL': exit status 1. Output: /bin/sh: /data_output.sh: Permission denied
My eks_node module:
data "aws_ami" "eks-worker" {
filter {
name = "name"
values = ["amazon-eks-node-v*"]
}
most_recent = true
owners = ["602401143452"] # Amazon
}
data "aws_region" "current" {}
data "template_file" "user_data" {
template = "${file("${path.module}/userdata.tpl")}"
vars {
eks_certificate_authority = "${var.eks_certificate_authority}"
eks_endpoint = "${var.eks_endpoint}"
eks_cluster_name = "${var.eks_cluster_name}"
workspace = "${terraform.workspace}"
aws_region_current_name = "${data.aws_region.current.name}"
}
}
resource "null_resource" "export_rendered_template" {
provisioner "local-exec" {
command = "cat > /data_output.sh <<EOLn${data.template_file.user_data.rendered}nEOL"
}
}
resource "aws_launch_configuration" "terra" {
associate_public_ip_address = true
iam_instance_profile = "${var.iam_instance_profile}"
image_id = "${data.aws_ami.eks-worker.id}"
instance_type = "t2.medium"
name_prefix = "terraform-eks"
key_name = "Dev1"
security_groups = ["${var.security_group_node}"]
user_data = "${data.template_file.user_data.rendered}"
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "terra" {
desired_capacity = 1
launch_configuration = "${aws_launch_configuration.terra.id}"
max_size = 2
min_size = 1
name = "terraform-eks"
vpc_zone_identifier = ["${var.subnets}"]
tag {
key = "Name"
value = "terraform-eks"
propagate_at_launch = true
}
tag {
key = "kubernetes.io/cluster/${var.eks_cluster_name}-${terraform.workspace}"
value = "owned"
propagate_at_launch = true
}
}
EKS currently documents this required userdata for EKS worker nodes to
properly configure Kubernetes applications on the EC2 instance.
https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/amazon-eks-nodegroup.yaml
My userdata.tpl:
#!/bin/bash -xe
CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki
CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt
mkdir -p $CA_CERTIFICATE_DIRECTORY
echo "${eks_certificate_authority}" | base64 -d > $CA_CERTIFICATE_FILE_PATH
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
sed -i s,MASTER_ENDPOINT,${eks_endpoint},g /var/lib/kubelet/kubeconfig
sed -i s,CLUSTER_NAME,${eks_cluster_name}-${workspace},g /var/lib/kubelet/kubeconfig
sed -i s,REGION,${aws_region_current_name},g /etc/systemd/system/kubelet.service
sed -i s,MAX_PODS,20,g /etc/systemd/system/kubelet.service
sed -i s,MASTER_ENDPOINT,${eks_endpoint},g /etc/systemd/system/kubelet.service
sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service
DNS_CLUSTER_IP=10.100.0.10
if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi
sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service
sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig
sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl restart kubelet
Tags: terraform, terraform-provider-aws, amazon-eks, terraform-template-file, aws-eks
asked Nov 19 at 16:01 by syst0m
1 Answer
The local-exec provisioner runs the cat command on your local system (the machine running Terraform) to write out the rendered user-data script, presumably for later reference. However, the user running Terraform does not have permission to write to the / directory, so the redirect cat > /data_output.sh fails with Permission denied.
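For reference, the same failure reproduces outside Terraform when the shell runs as a regular (non-root) user, since / is normally writable only by root. A minimal sketch (the heredoc body is a placeholder, and the exact error wording can vary by shell):

$ /bin/sh -c 'cat > /data_output.sh <<EOF
placeholder
EOF'
/bin/sh: /data_output.sh: Permission denied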
If you don't need the rendered user-data script written to a local file, you can simply remove (or comment out) the whole resource "null_resource" "export_rendered_template" block.
If you do want the file, change the output path from /data_output.sh to ./data_output.sh, or some other path your user can write to; see the sketch below.
Note: this may not work cleanly on Windows, where you may need to adjust the path.
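A minimal sketch of that second option, assuming the goal is only to keep a local copy of the rendered template. The paths are illustrative, and the local_file resource (from the "local" provider) is shown as an alternative that avoids shelling out at all:

# Write next to the module instead of the filesystem root.
resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > ${path.module}/data_output.sh <<EOL\n${data.template_file.user_data.rendered}\nEOL"
  }
}

# Alternative: write the rendered template directly, no shell involved.
resource "local_file" "rendered_user_data" {
  content  = "${data.template_file.user_data.rendered}"
  filename = "${path.module}/data_output.sh"
}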
answered Nov 19 at 17:17 by KJH (accepted)
I was under the impression that the cat command runs on the remote machine being provisioned. Thanks for the tip! – syst0m, Nov 20 at 13:28
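For context on that point: local-exec always runs on the machine invoking Terraform. Running a command on the provisioned instance instead would take a remote-exec provisioner with a connection block, roughly like this hypothetical sketch (the host, user, and key path are placeholders, not taken from the original configuration):

resource "null_resource" "run_on_node" {
  provisioner "remote-exec" {
    inline = ["cat /var/lib/kubelet/kubeconfig"]
  }

  connection {
    type        = "ssh"
    host        = "NODE_PUBLIC_IP"              # placeholder
    user        = "ec2-user"                    # default user on the EKS-optimized AMI
    private_key = "${file("~/.ssh/Dev1.pem")}"  # assumes the Dev1 key pair from the question
  }
}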