Security groups
This module attaches a security group to each EC2 Instance that allows inbound requests as follows:
SSH: For the SSH port (default: 22), you can use the allow_ssh_from_cidr_blocks parameter to control the list of CIDR blocks that will be allowed access, and the allow_ssh_from_security_group_ids parameter to control the list of source Security Groups that will be allowed access.
The ID of the security group is exported as an output variable, which you can use with the kibana-security-group-rules, elasticsearch-security-group-rules, elastalert-security-group-rules, and logstash-security-group-rules modules to open up all the ports necessary for Kibana and the respective Elasticsearch tools.
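For example, here is a minimal sketch of locking down SSH and wiring the exported security group ID into the kibana-security-group-rules module. The output name (security_group_id) and that module's input names are assumptions, so check the module's variables before relying on this:

```hcl
module "kibana_cluster" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-elk.git//modules/kibana-cluster?ref=v0.11.1"

  # ... required variables omitted ...

  # Only allow SSH from within the VPC and from a bastion host's security group
  allow_ssh_from_cidr_blocks        = ["10.0.0.0/16"]
  allow_ssh_from_security_group_ids = [var.bastion_security_group_id] # hypothetical variable
  num_ssh_security_group_ids        = 1
}

# Open the ports Kibana needs on the cluster's security group
module "kibana_security_group_rules" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-elk.git//modules/kibana-security-group-rules?ref=v0.11.1"

  security_group_id         = module.kibana_cluster.security_group_id # assumed output name
  allow_ui_from_cidr_blocks = ["10.0.0.0/16"]
}
```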
SSH access
You can associate an EC2 Key Pair with each
of the EC2 Instances in this cluster by specifying the Key Pair's name in the ssh_key_name variable. If you don't
want to associate a Key Pair with these servers, set ssh_key_name to an empty string.
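A minimal sketch (the Key Pair name is a placeholder):

```hcl
module "kibana_cluster" {
  # ... other parameters ...

  # Associate an existing EC2 Key Pair (placeholder name) with each Instance;
  # set this to an empty string to skip the association.
  ssh_key_name = "kibana-example-keypair"
}
```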
How do you connect to the Kibana cluster?
Using a load balancer
If you deploy the Kibana cluster with a load balancer in front of it (see the ELK multi-cluster example), then you can use the load balancer's DNS name along with the kibana_ui_port that you specified in variables.tf to form a URL like: http://loadbalancer_dns:kibana_ui_port/
For example, your URL will likely look something like: http://kibanaexample-lb-77641507.us-east-1.elb.amazonaws.com:5601/
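For instance, a minimal sketch of a Terraform output that assembles this URL, assuming a load balancer module that exposes its DNS name as alb_dns_name (both the module name and the output name are assumptions):

```hcl
output "kibana_ui_url" {
  # module.alb.alb_dns_name is an assumed name; substitute the DNS name output
  # of whatever load balancer you deployed in front of the cluster.
  value = "http://${module.alb.alb_dns_name}:5601/"
}
```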
Using the AWS Console UI
Without a load balancer to act as a single entry point, you will have to manually choose one of the IP addresses of the EC2 Instances that were deployed as part of the Auto Scaling Group. You can find the IP address of each EC2 Instance in the Kibana cluster by locating those instances in the AWS Console's Instances view. To access the Kibana UI, the IP address you use must be either public or reachable from your local network. The URL would look something like: http://the.ip.address:kibana_ui_port/
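If you would rather look the addresses up with Terraform than click through the Console, here is a sketch using the aws_instances data source; it assumes the module exposes the name of the Auto Scaling Group as an asg_name output (an assumption):

```hcl
data "aws_instances" "kibana" {
  # Instances launched by an Auto Scaling Group are automatically tagged
  # with aws:autoscaling:groupName, so filter on that tag.
  instance_tags = {
    "aws:autoscaling:groupName" = module.kibana_cluster.asg_name # assumed output name
  }
}

output "kibana_instance_ips" {
  # Use public_ips instead if associate_public_ip_address is set to true.
  value = data.aws_instances.kibana.private_ips
}
```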
How do you roll out updates?
If you want to deploy a new version of Kibana across the cluster, there are two main ways to do that:
Rolling deploy:
- Build a new AMI.
- Set the ami_id parameter to the ID of the new AMI (a sketch of this change is shown after this list).
- Run terraform apply.
Because the kibana-cluster module uses the Gruntwork asg-rolling-deploy module under the hood, running terraform apply will automatically perform a zero-downtime rolling deployment. Specifically, new EC2 Instances will be spawned, and only once the new EC2 Instances pass the Load Balancer Health Checks will the existing Instances be terminated.
Note that there will be a brief period of time during which EC2 Instances based on both the old ami_id and the new ami_id will be running. The rolling upgrades docs suggest that this is acceptable for Elasticsearch version 5.6 and greater.
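As a minimal sketch (the AMI ID below is a placeholder):

```hcl
module "kibana_cluster" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-elk.git//modules/kibana-cluster?ref=v0.11.1"

  # Point this at the newly built AMI (placeholder ID) and run `terraform apply`
  # to kick off the zero-downtime rolling deployment.
  ami_id = "ami-0123456789abcdef0"

  # ... all other parameters unchanged ...
}
```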
New cluster:
- Build a new AMI.
- Create a totally new ASG using the kibana-cluster module with the ami_id set to the new AMI, but all other parameters the same as the old cluster (a sketch of this intermediate state is shown after this list).
- Wait for all the nodes in the new ASG to start up and pass health checks.
- Remove each of the nodes from the old cluster.
- Remove the old ASG by removing that kibana-cluster module from your code.
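Here is a minimal sketch of the intermediate state, with both module blocks in your code at once; the AMI IDs are placeholders, and you will likely need a different cluster_name for the new ASG to avoid naming collisions:

```hcl
# Old cluster, still serving traffic
module "kibana_cluster_old" {
  source       = "git::git@github.com:gruntwork-io/terraform-aws-elk.git//modules/kibana-cluster?ref=v0.11.1"
  cluster_name = "kibana-v1"             # hypothetical name
  ami_id       = "ami-0123456789abcdef0" # old AMI (placeholder)
  # ... all other parameters ...
}

# New cluster with the new AMI; remove the block above once this one passes health checks
module "kibana_cluster_new" {
  source       = "git::git@github.com:gruntwork-io/terraform-aws-elk.git//modules/kibana-cluster?ref=v0.11.1"
  cluster_name = "kibana-v2"             # hypothetical name
  ami_id       = "ami-0fedcba9876543210" # new AMI (placeholder)
  # ... all other parameters, same as the old cluster ...
}
```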
Security
Here are some of the main security considerations to keep in mind when using this module:
Encryption in transit
Kibana can encrypt all of its network traffic. TODO: Should we recommend using X-Pack (official solution, but paid), an Nginx Reverse Proxy, a custom Elasticsearch plugin, or something else?
Encryption at rest
EC2 Instance Storage
The EC2 Instances in the cluster store their data in an EC2 Instance Store, which does not have native support for encryption (unlike EBS Volume Encryption).
TODO: Should we implement encryption at rest using the technique described at https://aws.amazon.com/blogs/security/how-to-protect-data-at-rest-with-amazon-ec2-instance-store-encryption/?
Elasticsearch Keystore
Some Elasticsearch settings may contain secrets and should be encrypted. You can use the Elasticsearch Keystore for such settings. The
elasticsearch.keystore is created automatically upon boot of each node, and is available for use as described in the
docs.
Sample Usage
Terraform
# ------------------------------------------------------------------------------------------------------
# DEPLOY GRUNTWORK'S KIBANA-CLUSTER MODULE
# ------------------------------------------------------------------------------------------------------
module "kibana_cluster" {
source = "git::git@github.com:gruntwork-io/terraform-aws-elk.git//modules/kibana-cluster?ref=v0.11.1"
# ----------------------------------------------------------------------------------------------------
# REQUIRED VARIABLES
# ----------------------------------------------------------------------------------------------------
# The ID of the AMI to run in this cluster.
ami_id = <INPUT REQUIRED>
# The name of the kibana cluster (e.g. kibana-stage). This variable is used to
# namespace all resources created by this module.
cluster_name = <INPUT REQUIRED>
# The type of EC2 Instances to run for each node in the cluster (e.g. t2.micro).
instance_type = <INPUT REQUIRED>
# The maximum number of nodes to have in the kibana cluster.
max_size = <INPUT REQUIRED>
# The minimum number of nodes to have in the kibana cluster.
min_size = <INPUT REQUIRED>
# The subnet IDs into which the EC2 Instances should be deployed.
subnet_ids = <INPUT REQUIRED>
# A User Data script to execute while the server is booting.
user_data = <INPUT REQUIRED>
# The ID of the VPC in which to deploy the kibana cluster
vpc_id = <INPUT REQUIRED>
# ----------------------------------------------------------------------------------------------------
# OPTIONAL VARIABLES
# ----------------------------------------------------------------------------------------------------
# A list of IP address ranges in CIDR format from which SSH access will be
# permitted. Attempts to access SSH from all other IP addresses will be blocked.
allow_ssh_from_cidr_blocks = []
# The IDs of security groups from which SSH connections will be allowed. If you
# update this variable, make sure to update var.num_ssh_security_group_ids too!
allow_ssh_from_security_group_ids = []
# A list of IP address ranges in CIDR format from which access to the UI will be
# permitted. Attempts to access the UI from all other IP addresses will be
# blocked.
allow_ui_from_cidr_blocks = []
# The IDs of security groups from which access to the UI will be permitted. If you
# update this variable, make sure to update var.num_ui_security_group_ids too!
allow_ui_from_security_group_ids = []
# If set to true, associate a public IP address with each EC2 Instance in the
# cluster.
associate_public_ip_address = false
# The desired number of EC2 Instances to run in the ASG initially. Note that auto
# scaling policies may change this value. If you're using auto scaling policies to
# dynamically resize the cluster, you should actually leave this value as null.
desired_capacity = null
# Path in which to create the IAM instance profile.
instance_profile_path = "/"
# The port that is used to access the Kibana UI
kibana_ui_port = 5601
# Wait for this number of EC2 Instances to show up healthy in the load balancer on
# creation.
min_elb_capacity = 0
# The number of security group IDs in var.allow_ssh_from_security_group_ids. We
# should be able to compute this automatically, but due to a Terraform limitation,
# if there are any dynamic resources in var.allow_ssh_from_security_group_ids,
# then we won't be able to: https://github.com/hashicorp/terraform/pull/11482
num_ssh_security_group_ids = 0
# The number of security group IDs in var.allow_ui_from_security_group_ids. We
# should be able to compute this automatically, but due to a Terraform limitation,
# if there are any dynamic resources in var.allow_ui_from_security_group_ids, then
# we won't be able to: https://github.com/hashicorp/terraform/pull/11482
num_ui_security_group_ids = 0
# The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this
# cluster. Set to an empty string to not associate a Key Pair.
ssh_key_name = null
# The port used for SSH connections
ssh_port = 22
# List of extra tag blocks added to the autoscaling group configuration. Each
# element in the list is a map containing keys 'key', 'value', and
# 'propagate_at_launch' mapped to the respective values.
tags = []
# A list of target group ARNs to associate with the Kibana cluster.
target_group_arns = []
# A maximum duration that Terraform should wait for the EC2 Instances to be
# healthy before timing out.
wait_for_capacity_timeout = "10m"
}
Terragrunt
# Coming soon!
Reference
Inputs
Required
- ami_id (string): The ID of the AMI to run in this cluster.
- cluster_name (string): The name of the kibana cluster (e.g. kibana-stage). This variable is used to namespace all resources created by this module.
- instance_type (string): The type of EC2 Instances to run for each node in the cluster (e.g. t2.micro).
- max_size (number): The maximum number of nodes to have in the kibana cluster.
- min_size (number): The minimum number of nodes to have in the kibana cluster.
- subnet_ids (list(string)): The subnet IDs into which the EC2 Instances should be deployed.
- user_data (string): A User Data script to execute while the server is booting.
- vpc_id (string): The ID of the VPC in which to deploy the kibana cluster.
Optional
- allow_ssh_from_cidr_blocks (list(string), default: []): A list of IP address ranges in CIDR format from which SSH access will be permitted. Attempts to access SSH from all other IP addresses will be blocked.
- allow_ssh_from_security_group_ids (list(string), default: []): The IDs of security groups from which SSH connections will be allowed. If you update this variable, make sure to update num_ssh_security_group_ids too!
- allow_ui_from_cidr_blocks (list(string), default: []): A list of IP address ranges in CIDR format from which access to the UI will be permitted. Attempts to access the UI from all other IP addresses will be blocked.
- allow_ui_from_security_group_ids (list(string), default: []): The IDs of security groups from which access to the UI will be permitted. If you update this variable, make sure to update num_ui_security_group_ids too!
- associate_public_ip_address (bool, default: false): If set to true, associate a public IP address with each EC2 Instance in the cluster.
- desired_capacity (number, default: null): The desired number of EC2 Instances to run in the ASG initially. Note that auto scaling policies may change this value. If you're using auto scaling policies to dynamically resize the cluster, you should leave this value as null.
- instance_profile_path (string, default: "/"): Path in which to create the IAM instance profile.
- kibana_ui_port (number, default: 5601): The port that is used to access the Kibana UI.
- min_elb_capacity (number, default: 0): Wait for this number of EC2 Instances to show up healthy in the load balancer on creation.
- num_ssh_security_group_ids (number, default: 0): The number of security group IDs in allow_ssh_from_security_group_ids. We should be able to compute this automatically, but due to a Terraform limitation, if there are any dynamic resources in allow_ssh_from_security_group_ids, then we won't be able to: https://github.com/hashicorp/terraform/pull/11482
- num_ui_security_group_ids (number, default: 0): The number of security group IDs in allow_ui_from_security_group_ids. We should be able to compute this automatically, but due to a Terraform limitation, if there are any dynamic resources in allow_ui_from_security_group_ids, then we won't be able to: https://github.com/hashicorp/terraform/pull/11482
- ssh_key_name (string, default: null): The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair.
- ssh_port (number, default: 22): The port used for SSH connections.
- tags (default: []): List of extra tag blocks added to the autoscaling group configuration. Each element in the list is a map containing keys 'key', 'value', and 'propagate_at_launch' mapped to the respective values. The type is:

  list(object({
    key                 = string
    value               = string
    propagate_at_launch = bool
  }))

  Example:

  default = [
    {
      key                 = "foo"
      value               = "bar"
      propagate_at_launch = true
    }
  ]

- target_group_arns (list(string), default: []): A list of target group ARNs to associate with the Kibana cluster.
- wait_for_capacity_timeout (string, default: "10m"): A maximum duration that Terraform should wait for the EC2 Instances to be healthy before timing out.