
Friday, 1 March 2019

AWS, Spring, Localstack

Using the AWS Java client is very straightforward, and unit testing is quite simple: just mock the AWS classes. Integration testing, however, is more complicated.  Here is an example of using LocalStack, TestContainers and Spring to wire AWS client objects to point at a LocalStack instance.

LocalStack: an implementation of AWS that runs locally, either natively or in a Docker container
TestContainers: a Java library that lets a Docker container be run locally for testing

Here, TestContainers is used to start the LocalStack Docker image so that AWS calls can be made against it.


Maven dependencies

Using the v2 AWS SDK is easiest with the AWS BOM (bill of materials): import the BOM once, and each individual dependency can then be declared without a version, the BOM taking care of supplying the correct version of each.  In the example below the S3 and SQS dependencies are configured.

Dependency management & dependencies

  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>software.amazon.awssdk</groupId>
        <artifactId>bom</artifactId>
        <version>2.4.11</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>

    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>s3</artifactId>
    </dependency>
    <dependency>
      <groupId>software.amazon.awssdk</groupId>
      <artifactId>sqs</artifactId>
    </dependency>

Test dependencies:

    <dependency>
      <groupId>org.testcontainers</groupId>
      <artifactId>testcontainers</artifactId>
      <version>1.10.6</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.testcontainers</groupId>
      <artifactId>localstack</artifactId>
      <version>1.10.6</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>cloud.localstack</groupId>
      <artifactId>localstack-utils</artifactId>
      <version>0.1.18</version>
      <scope>test</scope>
    </dependency>


AWS Configuration

The normal Spring configuration for AWS clients is very straightforward.  Here is an example of an S3Client and an SqsClient using the AWS v2 classes.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.sqs.SqsClient;

@Configuration
public class AwsConfiguration {

  @Bean
  public S3Client s3Client(){
    return S3Client.builder().region(Region.EU_WEST_1).build();
  }

  @Bean
  public SqsClient sqsClient(){
    return SqsClient.builder().region(Region.EU_WEST_1).build();
  }
}

These clients will use the default credentials provider chain, but an alternative AwsCredentialsProvider can be supplied on the builder.
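As a sketch, overriding the provider on the builder might look like the following (the credential values here are placeholders of my choosing, not from the original post):

```java
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class CredentialsOverrideExample {

  // Builds an S3Client with explicit (placeholder) credentials instead of
  // the default credentials provider chain.
  public static S3Client s3ClientWithStaticCredentials() {
    return S3Client.builder()
        .region(Region.EU_WEST_1)
        .credentialsProvider(StaticCredentialsProvider.create(
            AwsBasicCredentials.create("accessKeyId", "secretAccessKey")))
        .build();
  }
}
```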


TestConfiguration

To create the test configuration we need to start LocalStack using TestContainers.  This test configuration starts the LocalStack container and then uses it to configure the S3Client and SqsClient to point at LocalStack.

import static org.testcontainers.containers.localstack.LocalStackContainer.Service.S3;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.SQS;

import java.net.URI;
import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.testcontainers.containers.localstack.LocalStackContainer;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.CreateQueueRequest;

@TestConfiguration
public class AwsConfigurationTest {

  @Bean
  public LocalStackContainer localStackContainer() {
    LocalStackContainer localStackContainer = new LocalStackContainer().withServices(SQS, S3);
    localStackContainer.start();
    return localStackContainer;
  }

  @Bean
  public S3Client s3Client() {

    final S3Client client = S3Client.builder()
        .endpointOverride(URI.create(localStackContainer().getEndpointConfiguration(S3).getServiceEndpoint()))
        .build();

    client.createBucket(CreateBucketRequest.builder().bucket("test_bucket").build());

    return client;
  }

  @Bean
  public SqsClient sqsClient() {

    final SqsClient sqs = SqsClient.builder()
        .endpointOverride(URI.create(localStackContainer().getEndpointConfiguration(SQS).getServiceEndpoint()))
        .build();

    sqs.createQueue(CreateQueueRequest.builder().queueName("test_queue").build());

    return sqs;
  }
}
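A test can then import this configuration so that the injected clients talk to LocalStack.  A minimal sketch (the test class and its assertion are illustrative, not from the original post; it assumes JUnit 4 and Spring Boot Test on the classpath):

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import software.amazon.awssdk.services.s3.S3Client;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = AwsConfigurationTest.class)
public class S3IntegrationTest {

  // Wired against LocalStack by AwsConfigurationTest
  @Autowired
  private S3Client s3Client;

  @Test
  public void bucketCreatedOnStartup() {
    // The test configuration created "test_bucket" against LocalStack
    boolean found = s3Client.listBuckets().buckets().stream()
        .anyMatch(bucket -> bucket.name().equals("test_bucket"));
    assertTrue(found);
  }
}
```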




Thursday, 22 February 2018

Packer Basics

Packer by Hashicorp (https://www.packer.io/) is used to create AWS AMIs (Amazon Machine Images), the images that EC2 instances are spun up from.  Packer allows you to take a base image and provision it as required.  Packer templates use JSON, so you can't add comments to them, which is a bit annoying.  I have commented the elements in the example below (the // comments must be removed before the template is used).

Packer will spin up the 'source_ami' specified and connect over ssh to execute the commands in the 'provisioners' section of the file.  Once all the commands have run, the new AMI is created from this instance.  You can see the instance in the AWS Console; it is terminated as soon as Packer has finished.

You can see the created AMIs in the AWS Console.  Go to

Services - EC2 - AMIs (Left Panel)

Define a set of variables at the top of the file so they are easy to change.  This way you don't have to hunt through the file to find the instances of these variables that need to be altered later.
{
    "variables": {
        "region": "<region>",

// This uses the profile from the .aws/credentials file
        "profile": "<profile>",       

// The base ami that you are starting from
        "source_ami": "<base ami>",       

// The optional VPC (virtual private cloud) and subnet that you want this ami to be part of
        "vpc_id": "<vpc>",                   
        "subnet_id": "<subnet>"
    },
    "builders": [
        {
            "ami_name": "<name of the ami created>",
            "ami_description": "<description>",

// How is the communication with the packer instance going to be established
            "communicator": "ssh",

// Force any AMI with the same name to be removed ('deregistered')
            "force_deregister": true,
            "instance_type": "t2.micro",

// Use the parameters which are defined in the 'variables' section above
            "profile": "{{user `profile`}}",
            "region": "{{user `region`}}",
            "source_ami": "{{user `source_ami`}}",
            "ssh_pty": true,
            "ssh_username": "<ssh username that you are going to connect as>",
            "subnet_id": "{{user `subnet_id`}}",
            "type": "amazon-ebs",
            "vpc_id": "{{user `vpc_id`}}"
        }
    ],

// The provisioners section that adds additional files, installs etc to the AMI that is going to be created
    "provisioners": [

// This first provisioner installs wget
        {
            "type": "shell",
            "inline": [
                "sudo yum update -y",
                "sudo yum -y install wget"
            ]
        },

// Perhaps also install java afterwards?
        {
            "type": "shell",
            "inline": [
                "sudo yum -y install java-1.8.0-openjdk-devel"
            ]
        }
    ]
}
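Assuming the template above is saved as template.json (the filename is my choice), it can be checked and built from the command line:

```shell
# Validate the template syntax before building (remember to strip the
# // comments first, since Packer templates are plain JSON)
packer validate template.json

# Build the AMI: Packer spins up the source instance, runs the
# provisioners over ssh, snapshots the result and terminates the instance
packer build template.json
```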

Parameter Store in AWS

Using the Parameter Store in AWS is pretty straightforward.  You can use the command line to get and put parameters, and therefore avoid storing them in source control.  IAM roles in AWS can be used to limit access to the values.

Find the Parameter Store by logging in to the AWS console and navigating to

Services - Systems Manager - Parameter Store (Left panel)

Put Parameter

There are three types of value that can be stored in the Parameter Store: String, StringList and SecureString.  To put a parameter use

aws ssm put-parameter --region <region> --name <parameterName> --type SecureString --value "my secure value"

To store the contents of a file you can use

aws ssm put-parameter --region <region> --name <parameterName> --type SecureString --value file://my_file_to_store.anything


Get Parameter

Use the simple command line to get a parameter value.

aws ssm get-parameter --region <region> --name <parameterName>

If SecureString was used as the type, then the --with-decryption flag can be added to see the actual value.

aws ssm get-parameter --region <region> --name <parameterName> --with-decryption

The JSON output isn't always useful.  A --query parameter can be added to specify the exact output needed.

aws ssm get-parameter --region <region> --name <parameterName> --with-decryption --query Parameter.Value

Append | cut -d "\"" -f 2 to remove the surrounding quotes, and piping through 'echo -e' will restore any line breaks, which are encoded as \n
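For example, given the kind of quoted value that --query Parameter.Value returns (the value below is simulated; a real call needs AWS credentials), the clean-up looks like:

```shell
# Simulated output of: aws ssm get-parameter ... --query Parameter.Value
value='"first line\nsecond line"'

# Strip the surrounding quotes
stripped=$(echo "$value" | cut -d "\"" -f 2)

# Restore the encoded \n line breaks
echo -e "$stripped"
```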

Similarly, if a profile is needed then --profile <profileName> can be added.

IAM Role

To allow read-only access, the arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess managed policy can be attached to the IAM role of an instance that needs to read the values.