Monday 26 November 2018

Enzyme Cheat Sheet

Here is a reminder of the enzyme tips I've picked up so I don't forget!

Basics

Mount a react component in a test

import React from 'react'
import { mount } from 'enzyme'

describe('MyReactComponent', () => {
  const $ = mount(
     <MyReactComponent />
  )

  ...
})

Now different expectations and finds can be used to test,

A React Class
expect($.find('MyReactClass').exists()).toBe(false)

An html element
expect($.find('button').exists()).toBe(false)

An html id
expect($.find('#my-id').exists()).toBe(false)

A CSS class
expect($.find('.some-style').exists()).toBe(false)

Html properties
expect($.find('[aria-label="Save"]').exists()).toBe(false)
expect($.find('[title="Save"]').exists()).toBe(false)

Combination - Html element & Property
expect($.find('button[title="Save"]').exists()).toBe(false)

HTML / Text / Contains

Text content.  Find the first of many '.myclass' css classes and check the text of the element
expect($.find('.myclass').at(0).text()).toContain("My Value")
expect($.find('.myclass').at(0).text()).not.toContain("My Value")

As above but with an exact text match
expect($.find('.myclass').at(0).text()).toBe("The exact text")

As above but getting the whole html of the element rather than just the text
expect($.find('.myclass').at(0).html()).toBe("<div>something</div>")

Focus

Focus a field with enzyme
const inputField = $.find('#myInput')
inputField.getDOMNode().focus()

Validate that a field has focus
expect(document.activeElement.id).toBe('the id expected')

Props

Find an element and get the html properties
const button = $.find('[aria-label="My Button"]')
expect(button.props().disabled).not.toBe(true)

Once an element has been found, the React props can also be used in expectations via props()
expect($.find('MyReactClass').props().options).toEqual([{option: 'option1'}, {option: 'option2'}])
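
Putting a few of these together, here is a minimal sketch of a complete test (the component and its markup are hypothetical),

import React from 'react'
import { mount } from 'enzyme'
import MyReactComponent from './MyReactComponent'

describe('MyReactComponent', () => {
  const $ = mount(<MyReactComponent />)

  it('renders an enabled Save button', () => {
    const button = $.find('button[title="Save"]')
    expect(button.exists()).toBe(true)
    expect(button.props().disabled).not.toBe(true)
  })
})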






Monday 12 November 2018

Formik

Formik is a brilliant React form builder.  It is simple and intuitive.  It includes validation using a third-party library - I'm using Yup below.

Basic Formik

The example below shows the basics of using Formik.  The internal properties and functions of Formik take care of all the plumbing.  All you need to do is wire the onChange and onBlur functions to the individual inputs.

import React from 'react'
import { Formik } from 'formik'
import * as Yup from 'yup'

<div>
  <Formik
    initialValues={{
      title: '',
      firstName: '',
      surname: ''
    }}
    validationSchema={Yup.object().shape({
      title: Yup.string()
        .trim()
        .required('Please enter a title')
        .max(5, 'Too many characters (Maximum 5 allowed)'),
      firstName: Yup.string()
        .trim()
        .required('Please enter a firstName')
        .max(100, 'Too many characters (Maximum 100 allowed)'),
      surname: Yup.string()
        .trim()
        .required('Please enter a surname')
        .max(100, 'Too many characters (Maximum 100 allowed)')
    })}
    onSubmit={values => alert(JSON.stringify(values))}
  >
    {formikProps => {
      const {
        values,
        touched,
        errors,
        isSubmitting,
        handleChange,
        handleBlur,
        handleSubmit,
        setFieldValue,
        setFieldTouched
      } = formikProps
      return (
        <form onSubmit={handleSubmit}>
          <div>
            <label htmlFor="title">
              <span className="mandatory">Title</span>
              <input
                id="title"
                type="text"
                value={values.title}
                onChange={handleChange}
                onBlur={handleBlur}
                className={`${errors.title && touched.title ? 'error' : ''}`}
              />      
              {errors.title && touched.title && <div>{errors.title}</div>}
            </label>
          </div>

          <div>
            <label htmlFor="firstName">
              <span className="mandatory">First Name</span>
              <input
                id="firstName"
                type="text"
                value={values.firstName}
                onChange={handleChange}
                onBlur={handleBlur}
                className={`${errors.firstName && touched.firstName ? 'error' : ''}`}
              />      
              {errors.firstName && touched.firstName && <div>{errors.firstName}</div>}
            </label>
          </div>

          <div>
            <label htmlFor="surname">
              <span className="mandatory">Surname</span>
              <input
                id="surname"
                type="text"
                value={values.surname}
                onChange={handleChange}
                onBlur={handleBlur}
                className={`${errors.surname && touched.surname ? 'error' : ''}`}
              />      
              {errors.surname && touched.surname && <div>{errors.surname}</div>}
            </label>
          </div>
        </form>
      )
    }}
  </Formik>
</div>
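
The destructured handleSubmit and isSubmitting can be wired up too.  With the form's onSubmit set to handleSubmit (as above), a submit button inside the form might look like,

          <button type="submit" disabled={isSubmitting}>
            Save
          </button>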


Validate on Edit

By default Formik will validate once the first blur has occurred and then every time on change.  This gives the user the chance to get the content right first without being bothered by error messages.  However, should you want to validate every change from the off, you can add an onInput function to your inputs,


            <label htmlFor="surname">
              <span className="mandatory">Surname</span>
              <input
                id="surname"
                type="text"
                value={values.surname}
                onInput={() => setFieldTouched('surname', true)} // Validate as the user types
                onChange={handleChange}
                onBlur={handleBlur}
                className={`${errors.surname && touched.surname ? 'error' : ''}`}
              />      
              {errors.surname && touched.surname && <div>{errors.surname}</div>}
            </label>


Validate Function

Validation doesn't have to be done by a schema.  A fixed schema is quite limiting as it doesn't allow dynamic messages to be generated.  You can still use schema syntax, as it is really easy to read - just recreate the schema each time.  If you use the validate function, Formik passes the current 'values' object to it.  This can be used to generate a schema from the current values and adjust the error messages accordingly.  In the example below the validate function asks for a validation schema.  The schema uses the values and template literals (backticks) to create dynamic error messages.

getValidateSchema = values => 
  Yup.object().shape({
    title: Yup.string()
      .trim()
      .required('Please enter a title')
      .max(5, `Too many characters (${values.title.length} entered, Maximum 5 allowed)`),
    firstName: Yup.string()
      .trim()
      .required('Please enter a firstName')
      .max(100, `Too many characters (${values.firstName.length} entered, Maximum 100 allowed)`),
    surname: Yup.string()
      .trim()
      .required('Please enter a surname')
      .max(100, `Too many characters (${values.surname.length} entered, Maximum 100 allowed)`),
  })

validate = values => {
  try {
    // the third argument runs the Yup validation synchronously
    validateYupSchema(values, this.getValidateSchema(values), true, {})
    return {}
  } catch (error) {
    // convert the Yup ValidationError into the shape Formik expects
    return yupToFormErrors(error)
  }
}
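
yupToFormErrors converts the Yup ValidationError into the plain {field: message} object that Formik expects, for example,

  { title: 'Too many characters (6 entered, Maximum 5 allowed)' }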

render = () => 
  <div>
    <Formik
      initialValues={{
        title: '',
        firstName: '',
        surname: ''
      }}
      validate={values => this.validate(values)}
      onSubmit={values => alert(JSON.stringify(values))}
    >
      {formikProps => {
        const {
          values,
          touched,
          errors,
          isSubmitting,
          handleChange,
          handleBlur,
          handleSubmit,
          setFieldValue,
          setFieldTouched
        } = formikProps
        return (
          <form onSubmit={handleSubmit}>
            <div>
              <label htmlFor="title">
                <span className="mandatory">Title</span>
                <input
                  id="title"
                  type="text"
                  value={values.title}
                  onInput={() => setFieldTouched('title', true)}
                  onChange={handleChange}
                  onBlur={handleBlur}
                  className={`${errors.title && touched.title ? 'error' : ''}`}
                />      
                {errors.title && touched.title && <div>{errors.title}</div>}
              </label>
            </div>

            <div>
              <label htmlFor="firstName">
                <span className="mandatory">First Name</span>
                <input
                  id="firstName"
                  type="text"
                  value={values.firstName}
                  onInput={() => setFieldTouched('firstName', true)}
                  onChange={handleChange}
                  onBlur={handleBlur}
                  className={`${errors.firstName && touched.firstName ? 'error' : ''}`}
                />      
                {errors.firstName && touched.firstName && <div>{errors.firstName}</div>}
              </label>
            </div>

            <div>
              <label htmlFor="surname">
                <span className="mandatory">Surname</span>
                <input
                  id="surname"
                  type="text"
                  value={values.surname}
                  onInput={() => setFieldTouched('surname', true)}
                  onChange={handleChange}
                  onBlur={handleBlur}
                  className={`${errors.surname && touched.surname ? 'error' : ''}`}
                />      
                {errors.surname && touched.surname && <div>{errors.surname}</div>}
              </label>
            </div>
          </form>
        )
      }}
    </Formik>
  </div>

Wednesday 26 September 2018

JSON Date / Time Formatting

The default behaviour when serialising JSON from a Java 8 date/time object (LocalDate, LocalDateTime etc) is to break it down into its constituent parts, e.g.

"startDate" : {
    "year" : 2014,
    "month" : "MARCH",
    "dayOfMonth" : 1,
    "dayOfWeek" : "FRIDAY",
    "dayOfYear" : 1,
    "monthValue" : 1,
    "hour" : 2,
    "minute" : 2,
    "second" : 0,
    "nano" : 0,
    "chronology" : {
      "id" : "ISO",
      "calendarType" : "iso8601"
    }
  }
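
With the configuration below, the same field serialises to a single ISO-8601 string instead, something like,

  "startDate" : "2014-03-01T02:02"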

Spring Boot 2

Spring Boot already includes the dependencies,

    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.datatype</groupId>
        <artifactId>jackson-datatype-jsr310</artifactId>
    </dependency>

and it'll wire the jsr310 dependency automatically for any ObjectMapper that is @Autowired.  All that is necessary then is to add the property,

    spring.jackson.serialization.WRITE_DATES_AS_TIMESTAMPS=false
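
or the equivalent in application.yml,

    spring:
      jackson:
        serialization:
          WRITE_DATES_AS_TIMESTAMPS: false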

Standalone

For standalone code, the dependencies above will need to be added.  Then, when the object mapper is created, the following is necessary,

    final ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.enable(SerializationFeature.INDENT_OUTPUT);
    objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
    objectMapper.registerModule(new JavaTimeModule());
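
A quick round trip with that mapper, assuming a hypothetical MyObject bean with a LocalDateTime startDate field,

    final MyObject obj = new MyObject();  // hypothetical bean
    obj.setStartDate(LocalDateTime.of(2014, 3, 1, 2, 2));
    // prints "startDate" : "2014-03-01T02:02" rather than the broken-down fields
    System.out.println(objectMapper.writeValueAsString(obj));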


Deserializer

Of course, if you already have a file that is serialised like this, it is too late for those settings.  In that case you can transform the file using some custom deserializers.  Below are two deserializers, for LocalDate and LocalDateTime, to allow you to reconstruct your object.




  public class LocalDateDeserializer extends StdDeserializer<LocalDate>
  {
      public LocalDateDeserializer()
      {
          this(null);
      }

      public LocalDateDeserializer(final Class<LocalDate> t)
      {
          super(t);
      }

      @Override
      public LocalDate deserialize(final JsonParser p, final DeserializationContext ctxt) throws IOException, JsonProcessingException
      {
          JsonNode node = p.getCodec().readTree(p);
          return LocalDate.of(node.get("year").intValue(),
                              node.get("monthValue").intValue(),
                              node.get("dayOfMonth").intValue());
      }
  }

  public class LocalDateTimeDeserializer extends StdDeserializer<LocalDateTime>
  {
      public LocalDateTimeDeserializer()
      {
          this(null);
      }

      public LocalDateTimeDeserializer(final Class<LocalDateTime> t)
      {
          super(t);
      }

      @Override
      public LocalDateTime deserialize(final JsonParser p, final DeserializationContext ctxt) throws IOException, JsonProcessingException
      {
          JsonNode node = p.getCodec().readTree(p);
          return LocalDateTime.of(node.get("year").intValue(),
                                  node.get("monthValue").intValue(),
                                  node.get("dayOfMonth").intValue(),
                                  node.get("hour").intValue(),
                                  node.get("minute").intValue(),
                                  node.get("second").intValue(),
                                  node.get("nano").intValue()
                                  );
      }
  }

These allow you to write code to read in your existing badly formatted json like so,

  final ObjectMapper mapper = new ObjectMapper();

  SimpleModule module = new SimpleModule();
  module.addDeserializer(LocalDate.class, new LocalDateDeserializer());
  module.addDeserializer(LocalDateTime.class, new LocalDateTimeDeserializer());
  mapper.registerModule(module);
  final MyObject myobj = mapper.readValue(new FileInputStream("<file to read in>"), MyObject.class);

  final ObjectMapper dateTimeMapper = new ObjectMapper();
  dateTimeMapper.registerModule(new JavaTimeModule());
  dateTimeMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
  final FileOutputStream os = new FileOutputStream("<file to write out>");
  dateTimeMapper.writeValue(os, myobj);
  os.flush();
  os.close();

Friday 17 August 2018

PACTs

Pact testing is an implementation of Consumer Driven Contracts.

Here is a quick example for a Consumer test and a Provider test.

Consumer

This is some sample code for the consumer.  It will create a 'pact' json file in the target folder after a maven build.  This json is what will get sent to the Pact Broker, or it can be used to validate the pacts locally instead.
 
public class MyPactConsumerTest {  

  @Rule
  public PactProviderRuleMk2 pactProviderRule = new PactProviderRuleMk2("<NAME_OF_PROVIDER>", "localhost", PACT_SERVER_PORT, PactSpecVersion.V2, this);

  @Pact(consumer = "<NAME_OF_CONSUMER>", provider = "<NAME_OF_PROVIDER>")
  public RequestResponsePact doSomething(final PactDslWithProvider pactDslWithProvider) {

    return pactDslWithProvider.given("<STATE>")
      .uponReceiving("<DESCRIPTION>")
      .path("<URL>")
      .method("POST")
      .body(getExpectedRequestBody())
      .headers("Content-Type", "<TYPE>")
      .willRespondWith()
      .status(HttpStatus.NO_CONTENT.value())
      .toPact();
  }

  @Test
  @PactVerification(fragment = "doSomething") // the fragment is the name of the @Pact method above
  public void testDoSomething() {
    someJavaClassToCall.someMethod();
  }

  private DslPart getExpectedRequestBody() {
    return new PactDslJsonBody().stringMatcher("dataAttribute", ".*", "");
  }
}

Provider

The provider can then validate the pact json created by the consumer above.  If the pacts are published then it is necessary to validate against the broker with the @PactBroker annotation.  If you just want to validate against a pact json file, copy the file into a folder such as /src/test/resources/pact and add the @PactFolder("pact") annotation instead.


@RunWith(PactRunner.class)
@Provider("<NAME_OF_PROVIDER>")
@PactBroker
@IgnoreNoPactsToVerify
//@PactFolder("pact") // a folder can be used instead of the broker
public class MyPactProviderTest {

  // the running provider instance that the pacts are replayed against (adjust the port as needed)
  @TestTarget
  public final Target target = new HttpTarget(8080);

  @State("<STATE>")
  public void methodName() {
    // return data that matches the body defined by the consumer
    when(mockService.mockMethod(any())).thenReturn(new Data());
  }

}

Friday 16 March 2018

sed Crib Sheet

sed (stream editor) is a Linux command that can manipulate files.  It is particularly useful for doing a match and replace across a whole file.  Below is the Linux syntax.

Change from 'something' to 'something else'

sample.txt

hello out
there
how are
you today


Simple single-line match and replace (-i edits the file in place; the lines below show the resulting file contents)
> sed -i s/hello/goodbye/ sample.txt
goodbye out
there
how are
you today


Multiline replace.  Match on 'out'.  N appends the next line to the pattern space and then a normal substitution is applied to the pattern space.
> sed -i "/out/{N;s/there/fred/}" sample.txt
hello out
fred
how are
you today
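
A substitution only replaces the first match on each line.  Add the g flag to replace every occurrence.  Starting from the original sample.txt again,
> sed -i "s/o/0/g" sample.txt
hell0 0ut
there
h0w are
y0u t0day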

Thursday 8 March 2018

Kubernetes / kubectl Crib Sheet


Get and Describe

Generally, get and describe can be used for 'deployments', 'nodes', 'pods', 'services', 'secrets' etc.

// Get a list of pods
> kubectl get pods

// Interact with the pod by getting a shell
> kubectl exec -it <pod-name> /bin/bash

// Get a list of services
> kubectl get services

// Describe a service to see how to connect to it
> kubectl describe service <service-name>

Contexts

// Allow kubectl to see multiple config files - this isn't really a merge in that the files stay separate
> export KUBECONFIG=config:config-other:config-different

// List the Contexts (* = current)
> kubectl config get-contexts

// View all the config details
> kubectl config view

// Use a particular context
> kubectl config use-context <context-name>

Secrets

Create a secret
> kubectl create secret generic ssh-key-secret --from-file=id_rsa=~/.ssh/id_rsa --from-file=id_rsa.pub=~/.ssh/id_rsa.pub
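
// Read a secret back to check what was stored (the values are shown base64 encoded)
> kubectl get secret ssh-key-secret -o yaml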

Thursday 22 February 2018

Packer Basics

Packer by HashiCorp (https://www.packer.io/) is used to create AWS AMIs (Amazon Machine Images), the images that instances are spun up from.  Packer allows you to take a base image and provision it as required.  Packer templates are json, so you can't add comments to your packer files, which is a bit annoying.  I have commented the packer elements in the example below.

Packer will spin up the 'source_ami' specified and connect with ssh to execute the commands in the 'provisioners' section of the file.  The new AMI is created from this instance once all the commands have been run.  You can see this instance in the AWS Console; it is terminated as soon as Packer has finished.

You can see the created AMIs in the AWS Console.  Go to

Services - EC2 - AMIs (Left Panel)

Define a set of variables at the top of the file that are easily changed.  This way you don't have to hunt through the file to find all the instances of these variables that need to be altered later.
{
    "variables": {
        "region": "<region>",

// This uses the profile from the .aws/credentials file
        "profile": "<profile>",       

// The base ami that you are starting from
        "source_ami": "<base ami>",       

// The optional VPC (virtual private cloud) and subnet that you want this ami to be part of
        "vpc_id": "<vpc>",                   
        "subnet_id": "<subnet>"
    },
    "builders": [
        {
            "ami_name": "<name of the ami created>",
            "ami_description": "<description>",

// How is the communication with the packer instance going to be established
            "communicator": "ssh",

// Force any AMI with the same name to be removed ('deregistered')
            "force_deregister": true,
            "instance_type": "t2.micro",

// Use the parameters which are defined in the 'variables' section above
            "profile": "{{user `profile`}}",
            "region": "{{user `region`}}",
            "source_ami": "{{user `source_ami`}}",
            "ssh_pty": true,
            "ssh_username": "<ssh username that you are going to connect as>",
            "subnet_id": "{{user `subnet_id`}}",
            "type": "amazon-ebs",
            "vpc_id": "{{user `vpc_id`}}"
        }
    ],

// The provisioners section that adds additional files, installs etc to the AMI that is going to be created
    "provisioners": [

// This first provisioner installs wget
        {
            "type": "shell",
            "inline": [
                "sudo yum update -y",
                "sudo yum -y install wget"
            ]
        },

// Perhaps also install java afterwards?
        {
            "type": "shell",
            "inline": [
                "sudo yum -y install java-1.8.0-openjdk-devel"
            ]
        }
    ]
}
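
With the template saved as, say, packer.json (the file name is arbitrary), it can be checked and then built from the command line,

> packer validate packer.json
> packer build packer.json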

Parameter Store in AWS

Using the Parameter Store in AWS is pretty straightforward.  You can use the command line to get and put parameters, and therefore not have to store them in source control.  You can use IAM roles in AWS to limit access to the values.

Find the Parameter Store by logging in to the AWS console and navigating to

Services - Systems Manager - Parameter Store (Left panel)

Put Parameter

There are a number of types of value that can be stored in the Parameter Store: String, StringList and SecureString.  To put a parameter, use

aws ssm put-parameter --region <region> --name <parameterName> --type SecureString --value "my secure value"

To store the contents of a file you can use

aws ssm put-parameter --region <region> --name <parameterName> --type SecureString --value file://my_file_to_store.anything


Get Parameter

Use the simple command line to get a parameter value.

aws ssm get-parameter --region <region> --name <parameterName>

If SecureString was used as the type then the --with-decryption flag can be used to see the actual value.

aws ssm get-parameter --region <region> --name <parameterName> --with-decryption

The json output isn't always useful.  A --query parameter can be added to specify the actual output needed

aws ssm get-parameter --region <region> --name <parameterName> --with-decryption --query Parameter.Value

Add | cut -d "\"" -f 2 to remove the quotes, and use 'echo -e' to restore any line breaks, which are encoded as \n
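
Putting all of that together gives something like,

echo -e "$(aws ssm get-parameter --region <region> --name <parameterName> --with-decryption --query Parameter.Value | cut -d "\"" -f 2)"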

Similarly if a profile is needed then --profile <profileName> can be used

IAM Role

To allow access, the arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess policy can be attached to the role of an instance that needs read-only access.
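
For example, the policy can be attached from the command line (the role name here is hypothetical),

aws iam attach-role-policy --role-name my-instance-role --policy-arn arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess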