Friday, 16 March 2018

sed Crib Sheet

sed (stream editor) is a Linux command for manipulating text in files and streams.  It is particularly useful for doing search and replace across a whole file.  The syntax is shown below.

Change from 'something' to 'something else'

sample.txt

hello out
there
how are
you today


Simple single-line match (file contents shown after the in-place edit)
> sed -i 's/hello/goodbye/' sample.txt
goodbye out
there
how are
you today


Multiline replace.  Match on 'out'.  N appends the next line to the 'pattern space', and then a normal substitution is run on the pattern space.
> sed -i "/out/{N;s/there/fred/}" sample.txt
hello out
fred
how are
you today
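By default s/// replaces only the first match on each line; add the g flag to replace every match.  A quick sketch, run on a pipe rather than in-place so nothing here touches a file:

```shell
# Substitution without 'g' changes only the first match on each line
printf 'hello hello out\n' | sed 's/hello/goodbye/'
# -> goodbye hello out

# With 'g' every match on the line is changed
printf 'hello hello out\n' | sed 's/hello/goodbye/g'
# -> goodbye goodbye out
```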

Thursday, 8 March 2018

Kubernetes / kubectl Crib Sheet


Get and Describe

Generally, get and describe can be used with 'deployments', 'nodes', 'pods', 'services', 'secrets', etc.

// Get a list of pods
> kubectl get pods

// Interact with the pod by getting a shell
> kubectl exec -it <pod-name> -- /bin/bash

// Get a list of services
> kubectl get services

// Describe a service to see how to connect to it
> kubectl describe service <service-name>

Contexts

// Point kubectl at multiple config files - this isn't a true merge; the files stay separate
> export KUBECONFIG=config:config-other:config-different

// List the Contexts (* = current)
> kubectl config get-contexts

// View all the config details
> kubectl config view

// Use a particular context
> kubectl config use-context <context-name>

Secrets

Create a secret
> kubectl create secret generic ssh-key-secret --from-file=id_rsa=~/.ssh/id_rsa --from-file=id_rsa.pub=~/.ssh/id_rsa.pub

Thursday, 22 February 2018

Packer Basics

Packer by HashiCorp (https://www.packer.io/) is used to create AWS AMIs (Amazon Machine Images), the images that instances are spun up from.  Packer lets you take a base image and provision it as required.  Packer templates use JSON, so you can't add comments to them, which is a bit annoying.  I have commented the elements in the example below here instead.

Packer spins up the 'source_ami' specified and connects over ssh to execute the commands in the 'provisioners' section of the file.  The new AMI is created from this instance once all the commands have run.  You can see this instance in the AWS Console; it is terminated as soon as Packer has finished working.

You can see the created AMIs in the AWS Console.  Go to

Services - EC2 - AMIs (Left Panel)

Define a set of variables at the top of the file that are easily changed.  This way you don't have to hunt through the file to find the instances of these variables that need to be altered later.
{
    "variables": {
        "region": "<region>",

// This uses the profile from the .aws/credentials file
        "profile": "<profile>",       

// The base ami that you are starting from
        "source_ami": "<base ami>",       

// The optional VPC (virtual private cloud) and subnet that you want this ami to be part of
        "vpc_id": "<vpc>",                   
        "subnet_id": "<subnet>"
    },
    "builders": [
        {
            "ami_name": "<name of the ami created>",
            "ami_description": "<description>",

// How is the communication with the packer instance going to be established
            "communicator": "ssh",

// Force any AMI with the same name to be removed ('deregistered')
            "force_deregister": true,
            "instance_type": "t2.micro",

// Use the parameters which are defined in the 'variables' section above
            "profile": "{{user `profile`}}",
            "region": "{{user `region`}}",
            "source_ami": "{{user `source_ami`}}",
            "ssh_pty": true,
            "ssh_username": "<ssh username that you are going to connect as>",
            "subnet_id": "{{user `subnet_id`}}",
            "type": "amazon-ebs",
            "vpc_id": "{{user `vpc_id`}}"
        }
    ],

// The provisioners section that adds additional files, installs etc to the AMI that is going to be created
    "provisioners": [

// This first provisioner installs wget
        {
            "type": "shell",
            "inline": [
                "sudo yum update -y",
                "sudo yum -y install wget"
            ]
        },

// Perhaps also install java afterwards?
        {
            "type": "shell",
            "inline": [
                "sudo yum -y install java-1.8.0-openjdk-devel"
            ]
        }
    ]
}

Parameter Store in AWS

Using the Parameter Store in AWS is pretty straightforward.  You can use the command line to get and put parameters and therefore avoid storing them in source control.  IAM roles in AWS can be used to limit access to the values.

Find the Parameter Store by logging in to the AWS console and navigating to

Services - Systems Manager - Parameter Store (Left panel)

Put Parameter

There are a number of types of value that can be stored in the Parameter Store: String, StringList and SecureString.  To put a parameter use

aws ssm put-parameter --region <region> --name <parameterName> --type SecureString --value "my secure value"

To store the contents of a file you can use

aws ssm put-parameter --region <region> --name <parameterName> --type SecureString --value file://my_file_to_store.anything


Get Parameter

Use the simple command line to get a parameter value.

aws ssm get-parameter --region <region> --name <parameterName>

If SecureString was used as the type then the --with-decryption flag can be used to see the actual value.

aws ssm get-parameter --region <region> --name <parameterName> --with-decryption

The JSON output isn't always what you want.  A --query parameter can be added to specify the exact output needed

aws ssm get-parameter --region <region> --name <parameterName> --with-decryption --query Parameter.Value

Add | cut -d "\"" -f 2 to remove the quotes, and use 'echo -e' to restore any line breaks, which are encoded as \n
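Putting that together with a hypothetical stored value (the variable below stands in for the get-parameter output, since the aws call itself needs real credentials):

```shell
# Hypothetical stand-in for the output of:
#   aws ssm get-parameter ... --with-decryption --query Parameter.Value
value='"line one\nline two"'

# Strip the surrounding quotes
stripped=$(printf '%s\n' "$value" | cut -d '"' -f 2)

# Restore the encoded \n line breaks
echo -e "$stripped"
# -> line one
# -> line two
```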

Similarly if a profile is needed then --profile <profileName> can be used

IAM Role

To allow access, the arn:aws:iam::aws:policy/AmazonSSMReadOnlyAccess policy can be attached to the role of an instance that needs read-only access.

Wednesday, 1 November 2017

SMB / Netbios Enumeration

SMB / Netbios
# Search for SMB services (open ports only reported)
nmap -p139,445 a.a.a.a-b --open

# Specific NetBIOS scan
nbtscan a.a.a.a-b

SMB Null Session 
This works against Windows machines older than Server 2003 and XP
rpcclient -U "" a.a.a.a
Password: <leave empty>
> srvinfo
... (server info)
> enumdomusers
... (users defined on server)
> getdompwinfo
... (password policy info)

enum4linux
enum4linux -v a.a.a.a

nmap using 'nse'
# Enumerate SMB users
nmap -p139,445 --script smb-enum-users a.a.a.a

# Check for SMB Vulnerabilities (later nmap versions replace smb-check-vulns with the smb-vuln-* scripts)
nmap -p139,445 --script smb-check-vulns --script-args=unsafe=1 a.a.a.a


SNMP Enumeration

SNMP Enumeration
# SNMP scan for open 161 ports
nmap -sU -p 161 --open a.a.a.a-b

onesixtyone
# Use the onesixtyone tool
# 'community' is a file containing a list of community strings, eg
public
private
manager
# 'ips' is a file containing a list of ip addresses.  It can be generated easily using
for ip in $(seq 50 100); do
    echo a.a.a.$ip >> ips
done
# Now invoke the onesixtyone tool with these files
onesixtyone -c community -i ips
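The file preparation can be sketched end-to-end.  192.168.1.x is just an example range, and the final onesixtyone call still needs live targets, so it is left commented out:

```shell
# Build the community-string list
printf 'public\nprivate\nmanager\n' > community

# Build the target list: 192.168.1.50 .. 192.168.1.100
: > ips
for ip in $(seq 50 100); do
    echo "192.168.1.$ip" >> ips
done

wc -l < ips
# -> 51

# onesixtyone -c community -i ips   # run against live targets
```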

snmpwalk
# Use snmpwalk to get the values of each leaf of the snmp server using community string 'public' and version 1
snmpwalk -c public -v1 a.a.a.a

# Search for a particular MIB value (OID)
snmpwalk -c public -v1 a.a.a.a 1.2.3.4.5.6.7.8.9

snmpenum



snmpcheck



SMTP Enumeration

SMTP Enumeration
# Scan for open port 25
nmap -sT -p 25 --open a.a.a.a-b


# Connect to an SMTP server
nc -nv a.a.a.a 25
220 ... server details
# Verify that a user exists.
> VRFY ******
250 ... ******


where a.a.a.a-b is an ip range such as 192.168.1.100-150