<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[My Blog]]></title><description><![CDATA[My Writings on Various bits of Tech that interest me.]]></description><link>https://myblog.salterje.com/</link><image><url>https://myblog.salterje.com/favicon.png</url><title>My Blog</title><link>https://myblog.salterje.com/</link></image><generator>Ghost 3.23</generator><lastBuildDate>Fri, 10 Apr 2026 17:24:08 GMT</lastBuildDate><atom:link href="https://myblog.salterje.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Adding Ingress Rules to an AWS Security Group]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this post we will look at how to modify the ingress rules for a security group using the AWS CLI to add and remove http and ssh access. In order to test and verify we'll create an EC2 instance running a basic webpage and attach our security group to</p>]]></description><link>https://myblog.salterje.com/adding-ingress-rules-to-aws-security-group/</link><guid isPermaLink="false">6123d23a9d551b035aefc56c</guid><category><![CDATA[AWS]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Mon, 23 Aug 2021 21:40:24 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In this post we will look at how to modify the ingress rules for a security group using the AWS CLI to add and remove http and ssh access. In order to test and verify we'll create an EC2 instance running a basic webpage and attach our security group to it.</p>
<p>To keep things simple we will deploy into a Public subnet in our default VPC and will not use any Network Access Control Lists, NACLs.</p>
<p>Once this is done we will look at how to modify the security group to add and remove both ssh and http access to the EC2 instance.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/AWS-SG-Lab-2.png" alt="AWS-SG-Lab"></p>
<h2 id="createakeypair">Create a Key-Pair</h2>
<p>The first thing we will do is create a key-pair to allow ssh access to the newly created instance. To do this we must first create the key-pair and also save the relevant private key to a suitable file on our client machine.</p>
<p>Once this is done the permissions must be changed on the private key.</p>
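<p>The steps in the screenshot below can be sketched with the AWS CLI as follows (the key-pair name MyKeyPair matches the one used later in the lab; saving the private key as MyKeyPair.pem is an assumption for illustration):</p>

```shell
# Create the key-pair and save the returned private key material locally
aws ec2 create-key-pair --key-name MyKeyPair \
    --query 'KeyMaterial' --output text > MyKeyPair.pem

# Restrict the permissions so that ssh will accept the private key
chmod 400 MyKeyPair.pem
```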
<p><img src="https://myblog.salterje.com/content/images/2021/08/Create-KeyPair.png" alt="Create-KeyPair"></p>
<p>This will allow us to connect to our instance when we create it.</p>
<h2 id="createasecuritygroup">Create a Security Group</h2>
<p>The next thing to be done is to create the security group that will be attached to our instance. For this we will need to ensure that we define the correct VPC that we are using.</p>
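<p>A minimal sketch of this step (the group name and description match the output seen later; the VPC id is the default VPC used in this lab):</p>

```shell
# Create the security group in our default VPC; the command returns the GroupId
aws ec2 create-security-group --group-name my-sg \
    --description "My security group" \
    --vpc-id vpc-0308ac31e67cd27a6
```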
<p><img src="https://myblog.salterje.com/content/images/2021/08/CreateSG.png" alt="CreateSG"></p>
<p>This has created a security group which currently has no ingress rules and the command also returns the GroupId which is needed for the creation of our EC2 instance.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/SG-2.png" alt="SG-2"></p>
<p>We will add some ingress rules to the group later, but for the time being this will be enough.</p>

<h2 id="creatinganec2instance">Creating an EC2 Instance</h2>
<p>We will now create an EC2 instance that will run our web-server and be used for testing. We'll add some user-data to update the instance, install apache web-server and start it up.</p>
<p>We'll also need to make sure that we make use of our key-pair that we generated earlier.</p>
<p>To install the web-server we'll use the following user-data to bootstrap it:</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/user-datta.png" alt="user-data"></p>
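<p>The user-data shown in the screenshot follows the standard pattern for bootstrapping Apache on Amazon Linux; a minimal sketch (the exact contents are an assumption, with the script saved locally as user-data.txt) might look like:</p>

```shell
#!/bin/bash
# Update installed packages
yum update -y
# Install the Apache web-server
yum install -y httpd
# Start the service and enable it on boot
systemctl start httpd
systemctl enable httpd
```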
<p>This file will be passed to the instance when it's created using the ec2 run-instances command with the relevant options.</p>
<p>--image-id is the AMI image, which in our case is the latest AWS Linux image<br>
--count is the number of instances, in our case just a single one<br>
--instance-type is t2.micro, which is more than sufficient for our tests<br>
--key-name is the key-pair that we created previously<br>
--security-group-ids is the security group we have created (with no ingress rules at the moment)<br>
--user-data is the user-data file we are using to bootstrap the instance</p>
<pre><code class="language-bash">aws ec2 run-instances --image-id ami-0c2b8ca1dad447f8a --count 1 --instance-type t2.micro --key-name MyKeyPair --security-group-ids sg-0750df8c5753405fc --user-data file://user-data.txt
</code></pre>
<pre><code>{
    &quot;Groups&quot;: [],
    &quot;Instances&quot;: [
        {
            &quot;AmiLaunchIndex&quot;: 0,
            &quot;ImageId&quot;: &quot;ami-0c2b8ca1dad447f8a&quot;,
            &quot;InstanceId&quot;: &quot;i-027b63499df0102f5&quot;,
            &quot;InstanceType&quot;: &quot;t2.micro&quot;,
            &quot;KeyName&quot;: &quot;MyKeyPair&quot;,
            &quot;LaunchTime&quot;: &quot;2021-08-23T17:58:07+00:00&quot;,
            &quot;Monitoring&quot;: {
                &quot;State&quot;: &quot;disabled&quot;
            },
            &quot;Placement&quot;: {
                &quot;AvailabilityZone&quot;: &quot;us-east-1c&quot;,
                &quot;GroupName&quot;: &quot;&quot;,
                &quot;Tenancy&quot;: &quot;default&quot;
            },
            &quot;PrivateDnsName&quot;: &quot;ip-172-31-83-248.ec2.internal&quot;,
            &quot;PrivateIpAddress&quot;: &quot;172.31.83.248&quot;,
            &quot;ProductCodes&quot;: [],
            &quot;PublicDnsName&quot;: &quot;&quot;,
            &quot;State&quot;: {
                &quot;Code&quot;: 0,
                &quot;Name&quot;: &quot;pending&quot;
            },
            &quot;StateTransitionReason&quot;: &quot;&quot;,
            &quot;SubnetId&quot;: &quot;subnet-0f9d0a27d9f06064f&quot;,
            &quot;VpcId&quot;: &quot;vpc-0308ac31e67cd27a6&quot;,
            &quot;Architecture&quot;: &quot;x86_64&quot;,
            &quot;BlockDeviceMappings&quot;: [],
            &quot;ClientToken&quot;: &quot;7f557f8b-037f-430f-91d3-b720d24a7df1&quot;,
            &quot;EbsOptimized&quot;: false,
            &quot;EnaSupport&quot;: true,
            &quot;Hypervisor&quot;: &quot;xen&quot;,
            &quot;NetworkInterfaces&quot;: [
                {
                    &quot;Attachment&quot;: {
                        &quot;AttachTime&quot;: &quot;2021-08-23T17:58:07+00:00&quot;,
                        &quot;AttachmentId&quot;: &quot;eni-attach-0ae9de90c635014d3&quot;,
                        &quot;DeleteOnTermination&quot;: true,
                        &quot;DeviceIndex&quot;: 0,
                        &quot;Status&quot;: &quot;attaching&quot;,
                        &quot;NetworkCardIndex&quot;: 0
                    },
                    &quot;Description&quot;: &quot;&quot;,
                    &quot;Groups&quot;: [
                        {
                            &quot;GroupName&quot;: &quot;my-sg&quot;,
                            &quot;GroupId&quot;: &quot;sg-0750df8c5753405fc&quot;
                        }
                    ],
...
</code></pre>
<h2 id="addingingressrulestosecuritygroup">Adding Ingress Rules to Security Group</h2>
<p>We now have a running instance and will add ingress rules for ssh and http traffic using the ec2 authorize-security-group-ingress command.</p>
<p>--group-name is the name of the security group we will add the rule to<br>
--protocol is tcp for both http and ssh<br>
--port is the port (22 for ssh, 80 for http)<br>
--cidr is the source IP range, which in our case is from anywhere</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/AddingRules.png" alt="AddingRules"></p>
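<p>The commands in the screenshot can be sketched as follows, using the options described above:</p>

```shell
# Allow ssh (port 22) from anywhere
aws ec2 authorize-security-group-ingress --group-name my-sg \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# Allow http (port 80) from anywhere
aws ec2 authorize-security-group-ingress --group-name my-sg \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
```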
<p>We have now added two rules to the security group and can verify by describing the group that our ingress rules for port 22 (ssh) and port 80 (http) have been added.</p>
<pre><code>aws ec2 describe-security-groups --group-ids sg-0750df8c5753405fc
{
    &quot;SecurityGroups&quot;: [
        {
            &quot;Description&quot;: &quot;My security group&quot;,
            &quot;GroupName&quot;: &quot;my-sg&quot;,
            &quot;IpPermissions&quot;: [
                {
                    &quot;FromPort&quot;: 80,
                    &quot;IpProtocol&quot;: &quot;tcp&quot;,
                    &quot;IpRanges&quot;: [
                        {
                            &quot;CidrIp&quot;: &quot;0.0.0.0/0&quot;
                        }
                    ],
                    &quot;Ipv6Ranges&quot;: [],
                    &quot;PrefixListIds&quot;: [],
                    &quot;ToPort&quot;: 80,
                    &quot;UserIdGroupPairs&quot;: []
                },
                {
                    &quot;FromPort&quot;: 22,
                    &quot;IpProtocol&quot;: &quot;tcp&quot;,
                    &quot;IpRanges&quot;: [
                        {
                            &quot;CidrIp&quot;: &quot;0.0.0.0/0&quot;
                        }
                    ],
                    &quot;Ipv6Ranges&quot;: [],
                    &quot;PrefixListIds&quot;: [],
                    &quot;ToPort&quot;: 22,
                    &quot;UserIdGroupPairs&quot;: []
                }
            ],
            &quot;OwnerId&quot;: &quot;082684335586&quot;,
            &quot;GroupId&quot;: &quot;sg-0750df8c5753405fc&quot;,
            &quot;IpPermissionsEgress&quot;: [
                {
                    &quot;IpProtocol&quot;: &quot;-1&quot;,
                    &quot;IpRanges&quot;: [
                        {
                            &quot;CidrIp&quot;: &quot;0.0.0.0/0&quot;
                        }
                    ],
                    &quot;Ipv6Ranges&quot;: [],
                    &quot;PrefixListIds&quot;: [],
                    &quot;UserIdGroupPairs&quot;: []
                }
            ],
            &quot;VpcId&quot;: &quot;vpc-0308ac31e67cd27a6&quot;
        }
    ]
}
</code></pre>
<h2 id="confirmingfunctionality">Confirming Functionality</h2>
<p>We can connect to the instance using ssh and our private key, confirming that apache is running (proving that our boot-strapping user-data has worked).</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/ssh-connection.png" alt="ssh-connection"></p>
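<p>A sketch of the connection, assuming the private key file name from earlier (the public IP address here is a placeholder, not the instance's real address):</p>

```shell
# Connect as the default Amazon Linux user with our private key
ssh -i MyKeyPair.pem ec2-user@203.0.113.10
```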
<p>We can then confirm that http is working by a simple curl command to the public address of the instance.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/Confirmation.png" alt="Confirmation"></p>
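<p>The check amounts to a single curl against the instance's public address (the DNS name below is a placeholder):</p>

```shell
# Fetch the test page over http
curl http://ec2-203-0-113-10.compute-1.amazonaws.com
```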
<h2 id="removinghttpingress">Removing HTTP Ingress</h2>
<p>Removing http access is a simple case of revoking the rule on the security group.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/RevokeAccess.png" alt="RevokeAccess"></p>
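<p>The revoke command mirrors the authorize command, with the same options:</p>

```shell
# Remove the http ingress rule from the security group
aws ec2 revoke-security-group-ingress --group-name my-sg \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
```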
<p>A simple curl command proves that we can no longer access the website.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/RevokeTest.png" alt="RevokeTest"></p>
<p>We can also verify this by looking at the security group rules again.</p>
<pre><code>aws ec2 describe-security-groups --group-ids sg-0750df8c5753405fc
{
    &quot;SecurityGroups&quot;: [
        {
            &quot;Description&quot;: &quot;My security group&quot;,
            &quot;GroupName&quot;: &quot;my-sg&quot;,
            &quot;IpPermissions&quot;: [
                {
                    &quot;FromPort&quot;: 22,
                    &quot;IpProtocol&quot;: &quot;tcp&quot;,
                    &quot;IpRanges&quot;: [
                        {
                            &quot;CidrIp&quot;: &quot;0.0.0.0/0&quot;
                        }
                    ],
                    &quot;Ipv6Ranges&quot;: [],
                    &quot;PrefixListIds&quot;: [],
                    &quot;ToPort&quot;: 22,
                    &quot;UserIdGroupPairs&quot;: []
                }
            ],
            &quot;OwnerId&quot;: &quot;082684335586&quot;,
            &quot;GroupId&quot;: &quot;sg-0750df8c5753405fc&quot;,
            &quot;IpPermissionsEgress&quot;: [
                {
                    &quot;IpProtocol&quot;: &quot;-1&quot;,
                    &quot;IpRanges&quot;: [
                        {
                            &quot;CidrIp&quot;: &quot;0.0.0.0/0&quot;
                        }
                    ],
                    &quot;Ipv6Ranges&quot;: [],
                    &quot;PrefixListIds&quot;: [],
                    &quot;UserIdGroupPairs&quot;: []
                }
            ],
            &quot;VpcId&quot;: &quot;vpc-0308ac31e67cd27a6&quot;
        }
    ]
}
</code></pre>
<p>This shows port 80 is no longer present.</p>
<h2 id="authorizehttpagain">Authorize HTTP Again</h2>
<p>It is a simple case of authorizing the http ingress again.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/08/HttpWorkingAgain.png" alt="HttpWorkingAgain"></p>
<h2 id="conclusions">Conclusions</h2>
<p>As this lab has shown, it is relatively simple to modify ingress rules within a security group using the AWS CLI. We have also used a local user-data file to bootstrap our instance, installing the apache web-server and starting the service when the instance is created.</p>
<p>In the case of this lab we added two ingress rules to allow ssh and http to reach our test instance that had our security group attached to it.</p>
<p>In a real situation we could remove the ssh rule and, if we needed to access the instance again, simply add it back in (probably locked down to just the IP address of the connecting client).</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Running AWS CLI Client in Docker]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this post, which is a follow-up to the one <a href="https://myblog.salterje.com/installation-of-aws-cli/">here</a>, we'll run the AWS CLI client in a Docker container rather than install it on our host machine.</p>
<p>The lab assumes that Docker is already installed and running on the machine we are using. In this particular lab I</p>]]></description><link>https://myblog.salterje.com/running-aws-cli-client-in-docker/</link><guid isPermaLink="false">60f70e469d551b035aefc4e8</guid><category><![CDATA[AWS]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Tue, 20 Jul 2021 19:03:37 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In this post, which is a follow-up to the one <a href="https://myblog.salterje.com/installation-of-aws-cli/">here</a>, we'll run the AWS CLI client in a Docker container rather than install it on our host machine.</p>
<p>The lab assumes that Docker is already installed and running on the machine we are using. In this particular lab I have got a bare system without any other containers or images installed.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/Initial-Docker.png" alt="Initial-Docker"></p>
<p>It's a simple command to pull the latest image and try to run a command to list the buckets in our AWS account.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/Pulling-AWS-Docker.png" alt="Pulling-AWS-Docker"></p>
<p>This shows that we didn't have the image installed locally and it got pulled from Docker Hub (we haven't specified any particular version so it will pull the latest one).</p>
<p>--rm specifies that we will clean up and delete the container after use<br>
-it gives us an interactive terminal session into the container</p>
<p>The s3 ls at the end of the command simply lists the S3 buckets within our account (or at least the ones that our access will let us see). As we haven't provided any credentials the client is not able to log in (which would frankly be really worrying if it could at this stage).</p>
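<p>The full command from the screenshot, using the official amazon/aws-cli image:</p>

```shell
# Pull the image if it isn't cached locally, then list S3 buckets
docker run --rm -it amazon/aws-cli s3 ls
```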
<p>We can verify that we now have the image saved on our machine but there are still no containers running.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/Docker-After-Download.png" alt="Docker-After-Download"></p>
<h2 id="runningwithenvironmentalvariables">Running with Environmental Variables</h2>
<p>We can pass our Access-Key and Secret-Access-Key as environmental variables to provide our credentials</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/Running-with-ENV.png" alt="Running-with-ENV"></p>
<p>This is done using</p>
<p>-e followed by the various variables that we will pass to the container<br>
--rm to clean up afterwards<br>
-it for an interactive terminal session into the container</p>
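<p>A sketch of the command (the key values below are AWS's documented example placeholders, not real credentials; the region is an assumption):</p>

```shell
# Pass credentials to the container as environment variables
docker run --rm -it \
    -e AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE \
    -e AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLE \
    -e AWS_DEFAULT_REGION=us-east-1 \
    amazon/aws-cli s3 ls
```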
<p>This shows that we were able to bring up a container running the client, pass it our credentials and run a command to interact with our account (in this case to list our S3 buckets).</p>
<h2 id="mountingawsintocontainer">Mounting .aws into Container</h2>
<p>We can also mount our credentials, which by default live in a hidden .aws folder within the home directory on our host machine, into the container.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/Mount-into-Container.png" alt="Mount-into-Container"></p>
<p>-v for mounting from our host into the container<br>
--rm to clean up afterwards<br>
-it for an interactive terminal session into the container</p>
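<p>A sketch of the mount-based approach (mounting read-only is an extra precaution, not something the screenshot necessarily shows):</p>

```shell
# Mount the host's ~/.aws folder into the container's root home directory
docker run --rm -it -v ~/.aws:/root/.aws:ro amazon/aws-cli s3 ls
```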
<p>This time we were able to use our credentials from the host machine.</p>
<h2 id="conclusions">Conclusions</h2>
<p>The use of a Docker container makes it easy to interact with our AWS account using an up-to-date image, which can be set to always use the latest version, pulling it onto the host if required.</p>
<p>We looked at two ways of passing the necessary credentials to our container by either mounting the necessary files from the host or passing them as environmental variables which means nothing is left afterwards.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Installation of AWS CLI]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>There are a number of ways of interacting with AWS and in this post we'll run through the installation of the CLI client under Ubuntu 20.04. This will allow the running of AWS commands within a standard Bash terminal.</p>
<p>In this post we'll run through how to download the</p>]]></description><link>https://myblog.salterje.com/installation-of-aws-cli/</link><guid isPermaLink="false">60e98d0d9d551b035aefc3da</guid><category><![CDATA[AWS]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Sat, 10 Jul 2021 14:44:10 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>There are a number of ways of interacting with AWS and in this post we'll run through the installation of the CLI client under Ubuntu 20.04. This will allow the running of AWS commands within a standard Bash terminal.</p>
<p>In this post we'll run through how to download the latest version of the client, check the GPG key of the downloaded zip file and then install it.</p>
<p>We'll also see how to give the client details of the Access Key ID and Secret Access Key that will enable it to connect to an existing AWS user account. This will be done by both using the generated config files as well as exporting them for a temporary connection.</p>
<p>We'll start with a base Ubuntu installation and first confirm that the client is not installed already.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/aws-preinstall-1.png" alt="aws-preinstall-1"></p>
<p>The installation process we'll be following can be found <a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html">here</a>  in the AWS Documentation.</p>
<h2 id="installingawscliversion2">Installing AWS CLI Version 2</h2>
<p>Before we begin the installation the curl utility must be installed.</p>
<h3 id="gettingholdofcurlviasnap">Getting hold of curl via snap</h3>
<p>As we are using Ubuntu we'll load the necessary curl command with snap and use it to download the zip file for the installation.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/curl-aws.png" alt="curl-aws"></p>
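<p>The download step uses the bundle URL from the AWS documentation:</p>

```shell
# Download the latest AWS CLI v2 bundle for Linux x86_64
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
```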
<h3 id="checkinginstallationkey">Checking Installation Key</h3>
<p>We're going to be extra careful and check the GPG key of the zip file</p>
<p>This is done by pasting the GPG Key into a text file.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/aws-key-pub.png" alt="aws-key-pub"></p>
<p>We'll import the key that has just been created and compare it to the .sig file available from AWS.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/verify-sig.png" alt="verify-sig"></p>
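<p>A sketch of the verification steps (the file name aws-cli.asc for the pasted public key is an assumption; the .sig URL is the one AWS publishes alongside the bundle):</p>

```shell
# Import the AWS CLI public key saved earlier
gpg --import aws-cli.asc

# Download the matching signature file and verify the zip against it
curl -o awscliv2.sig https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip.sig
gpg --verify awscliv2.sig awscliv2.zip
```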
<p>The warning in the output is expected and doesn't indicate a problem. It occurs as we don't have a personal PGP key which means there isn't a chain of trust with the AWS CLI PGP key.</p>
<h3 id="installationofcliclient">Installation of CLI client</h3>
<p>We are now in a position to actually install the software after unzipping the download.</p>
<pre><code>unzip awscliv2.zip
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/07/aws-install.png" alt="aws-install"></p>
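<p>The remaining install steps can be sketched as (sudo access is assumed):</p>

```shell
# Unzip the bundle and run the bundled installer
unzip awscliv2.zip
sudo ./aws/install

# Confirm the client is installed and on the PATH
which aws
aws --version
```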
<p>We can now confirm the client has been installed.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/which-aws.png" alt="which-aws"></p>
<h2 id="connectingtoourawsaccount">Connecting to Our AWS Account</h2>
<p>We are now ready to connect to our account. This is done by running the aws configure command and entering the relevant details for the account we will connect to.</p>
<p>This will prompt for the relevant Access Key ID and Secret Access Key for the account (the ones that I use here have been deleted and in normal conditions these credentials should never be shared).</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/aws-config.png" alt="aws-config"></p>
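<p>The prompts look roughly like this (the key values are AWS's documented example placeholders; the region and output format are assumptions):</p>

```shell
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLE
# Default region name [None]: us-east-1
# Default output format [None]: json
```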
<p>Running aws configure creates a hidden folder called .aws which is used to store the credentials and config details (in this case the default region that will be used when running commands).</p>
<p>As a test I have listed the S3 buckets associated with the account and then created a new one.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/s3-mb.png" alt="s3-mb"></p>
<p>Of course this means that the credentials are stored within the files in the .aws folder. It is also possible to export the credentials and not create the files.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/aws-export.png" alt="aws-export"></p>
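<p>The exported variables look like this (again, the values shown are AWS's documented example placeholders):</p>

```shell
# Credentials exported this way exist only in the current shell session
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLE
export AWS_DEFAULT_REGION=us-east-1
```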
<p>This means they will only be valid for that terminal session. Starting another session shows we can't log on to our account any longer.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/07/aws-lackofcred.png" alt="aws-lackofcred"></p>
<h2 id="conclusions">Conclusions</h2>
<p>The installation of the AWS CLI client is straightforward and allows interaction with AWS by running suitable commands within a terminal. It makes use of Access Keys and Secret Access Keys that should be protected in the same way as usernames and passwords.</p>
<p>It is possible to create suitable files holding these credentials by running aws configure, making things very convenient (but not particularly secure). Alternatively the details can be exported as environment variables, valid only for that particular terminal session, if ad-hoc access is needed.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Adding a New User to an AWS Account]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this Lab we will create a new AWS user that will have access to an existing AWS account (165541912265) to create EC2 Virtual Machines and S3 Object Storage.</p>
<p>This involves setting up a new user with the IAM service and giving it the appropriate permissions. To do this we</p>]]></description><link>https://myblog.salterje.com/adding-a-new-aws-user/</link><guid isPermaLink="false">60c603119d551b035aefc2b3</guid><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Sun, 13 Jun 2021 15:43:59 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In this Lab we will create a new AWS user that will have access to an existing AWS account (165541912265) to create EC2 Virtual Machines and S3 Object Storage.</p>
<p>This involves setting up a new user with the IAM service and giving it the appropriate permissions. To do this we will actually create a new group with full permissions for EC2 and S3 and nothing else.</p>
<p>The new user can then be added to this group to inherit these permissions (by default newly created users don't have any permissions).</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/AWS-S3Role-S3-Role.png" alt="AWS-S3Role-S3-Role"></p>
<p>For obvious reasons the user will be deleted after the Lab.</p>
<!--kg-card-end: markdown--><p> </p><!--kg-card-begin: markdown--><h2 id="creationofec2s3fullaccessgroup">Creation of EC2+S3 Full Access Group</h2>
<p>We will add a new group to our AWS account giving full access to EC2 and S3 (to keep things fairly simple). This group will be used when we create the new user (as we will add the user to it).</p>
<p>This makes it easy to add further users with the same permissions, or to remove the user from the group to remove the permissions.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/IAM-GroupCreation.png" alt="IAM-GroupCreation"></p>
<p>We will then add the EC2 and S3 Full Permissions to the group, in this case there are already some AWS managed policies that can be added by simply searching and ticking the desired policies.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/EC2GroupPolicy.png" alt="EC2GroupPolicy"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/S3Permissions.png" alt="S3Permissions"></p>
<p>Once this is done we have created our group with appropriate policies and it just remains to create a new user and add them to the group.</p>
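<p>Although this lab uses the console, the same group setup could be sketched with the AWS CLI (the group name matches the one used below; the policy ARNs are the AWS managed full-access policies selected in the console):</p>

```shell
# Create the group
aws iam create-group --group-name EC2+S3-Group

# Attach the AWS managed full-access policies for EC2 and S3
aws iam attach-group-policy --group-name EC2+S3-Group \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name EC2+S3-Group \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```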
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h2 id="creationofnewuserandadditiontothegroup">Creation of New User and Addition to the Group</h2>
<p>We will now create a new user, salterje, that will be added to EC2+S3-Group. This will allow this user to inherit the appropriate permissions that we need to create EC2 and S3 instances.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/Createsalterjeuser.png" alt="Createsalterjeuser"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/AddsalterjetoGroup.png" alt="AddsalterjetoGroup"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/AddUserTags.png" alt="AddUserTags"></p>
<p>This will create a new user that has access to the AWS console as well as suitable access keys to allow use of the AWS commandline and SDK (although for the purposes of this lab we will just use the console to test).</p>
<p>We will be prompted to download these details to allow us to login to our account as the newly created salterje user.</p>
<h2 id="testingofthenewuser">Testing of the New User</h2>
<p>We will login to the account with the downloaded credentials.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/PasswordDetails.png" alt="PasswordDetails"></p>
<p>All the necessary details can be found in the .csv file that is available at the last step of user creation. It is important that these credentials are looked after carefully as they contain everything that is needed to access the account as the newly created user.</p>
<p>We will log into the account console with our new user, making use of the URL that was included in the user details.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/NewUserLogin.png" alt="NewUserLogin"></p>
<p>It can be seen at the top right hand corner of the console that we are logged in as salterje</p>
<p>We can now create and view a new EC2 instance and a new S3 Bucket.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/EC2-Launched.png" alt="EC2-Launched"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/S3Creation.png" alt="S3Creation"></p>
<p>This proves that the user has the appropriate permissions to create these resources but to prove that we don't have access to all resources we will now try to create an RDS Database.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/RDS-Creation.png" alt="RDS-Creation"></p>
<p>We can see that we don't have the appropriate permissions to create the Database.</p>
<h2 id="removalofuserfromgroup">Removal of User From Group</h2>
<p>Our last test with the salterje user permissions is to remove the user from the EC2+S3-Group. This must be done from our original user that has the appropriate permissions to remove users from their IAM groups.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/RemoveUsers.png" alt="RemoveUsers"></p>
<p>Once this is done it can be seen that the salterje user can no longer even view EC2 or S3 instances, let alone perform any action. This happens as soon as the user is removed from the group.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/EC2Permissions.png" alt="EC2Permissions"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/06/S3PermissionError.png" alt="S3PermissionError"></p>
<p>As we have made use of the group to generate the permissions it is easy to add or remove users (for example when people join or leave an organization).</p>
<h2 id="conclusions">Conclusions</h2>
<p>This Lab shows the concept of user and group policies within AWS. In this case we have added a new user to a group that allows the creation of EC2 and S3 resources but nothing else.</p>
<p>We also saw that simply removing the user from the group immediately revokes their access to those resources.</p>
<p>When a user is created, all details can be downloaded, allowing the newly created user to log in using the appropriate account URL and password.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[OpenVPN using PFSense]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is a follow-up to the previous one, which described using an OpenVPN tunnel to encrypt a connection from a client on the 192.168.200.0/24 network to a web server on the 172.16.0.0/24 network. The original post highlighted</p>]]></description><link>https://myblog.salterje.com/openvpn-using-pfsense/</link><guid isPermaLink="false">60732382d2d5df031c2522d7</guid><category><![CDATA[Linux]]></category><category><![CDATA[OpenVPN]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Sun, 11 Apr 2021 21:06:32 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is a follow-up to the previous one, which described using an OpenVPN tunnel to encrypt a connection from a client on the 192.168.200.0/24 network to a web server on the 172.16.0.0/24 network. The original post highlighted the need to confirm that the traffic was actually routed over the tunnel overlay at 10.8.0.0/24 rather than the underlying network.</p>
<p>In this post we will replace the OpenVPN machine with one running pfSense, which will run OpenVPN and make the same setup easier to achieve. Although we will only use OpenVPN in this article, pfSense is a powerful package that offers plenty of other functionality as well.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/pfSense-NoVPN.png" alt="pfSense-NoVPN"></p>
<p>The basic setup consists of 4 virtual machines, each of which also has an interface that acts as a NAT interface to the host machine, allowing remote access and an internet connection.</p>
<p>By default all traffic between the clients and the Web server goes via the underlay network.</p>
<pre><code>vagrant@client-1:~$ tracepath -n 172.16.0.20
 1?: [LOCALHOST]                      pmtu 1500
 1:  10.0.2.2                                              0.670ms 
 1:  10.0.2.2                                              0.617ms 
 2:  172.16.0.20                                           1.265ms reached
</code></pre>
<pre><code>salterje@salterje-VirtualBox:~$ tracepath -n 172.16.0.20
 1?: [LOCALHOST]                      pmtu 1500
 1:  10.0.2.2                                              0.342ms 
 1:  10.0.2.2                                              0.619ms 
 2:  172.16.0.20                                           1.334ms reached
     Resume: pmtu 1500 hops 2 back 64 
</code></pre>
<p>Both of these results mean that the clients are actually reaching the web server at 172.16.0.20 via the VirtualBox NAT address of 10.0.2.2, with all the guest machines having an address of 10.0.2.15.</p>
<p>This is expected when running under VirtualBox.</p>
<h2 id="configurationofpfsense">Configuration of pfSense</h2>
<p>We'll now configure pfSense to run an OpenVPN server that pushes routes out to the clients, set up firewall rules on the pfSense machine, and encrypt the connection from the clients to the web server over a tunnel that runs across the interfaces connected to 192.168.200.10.</p>
<h3 id="installationofopenvpnclientexport">Installation of openvpn-client-export</h3>
<p>While OpenVPN is included within pfSense, we will first install a plugin that allows easy export of the necessary configuration files and TLS certificates.</p>
<p>This is simply a case of installing it through the package manager in the web interface.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/pfsense-plugin1.png" alt="pfsense-plugin1"></p>
<p>Select the appropriate plugin under available packages and click to install it.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client-export-installed.png" alt="client-export-installed"></p>
<p>This plugin will make things much easier when we come to copy the necessary files to the clients.</p>
<h3 id="creationofopenvpnviawizard">Creation of OpenVPN via Wizard</h3>
<p>pfSense offers a wizard to set up an OpenVPN server, including the creation of a CA and an appropriate server certificate. To keep things easy we will rely on a local user database running on the pfSense machine itself, although it is possible to integrate user management with an LDAP or RADIUS server.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard1.png" alt="Wizard1"></p>
<p>We'll create a local user installation.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard2.png" alt="Wizard2"></p>
<p>The first step is the creation of a Certificate Authority (CA), which will be used to sign all the necessary certificates.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard3.png" alt="Wizard3"></p>
<p>We'll then create a certificate for the Server itself, which will use the CA that has just been created.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard4.png" alt="Wizard4"></p>
<p>We'll actually be using LAN1 for our VPN, which is slightly different from most real installations, where the WAN link would be used.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard5.png" alt="Wizard5"></p>
<p>Most of the remaining settings can be left at their defaults.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard8.png" alt="Wizard8"></p>
<p>The tunnel network can be set to any IP range, as long as it's not used anywhere else on the network. This range will be assigned to the tunnel endpoints; in our case we'll keep the standard range of 10.8.0.0/24, which gives us plenty of addresses for any potential clients.</p>
<p>We must also define the networks that will be pushed out to the clients. In our case, as well as the 172.16.0.0/24 network that holds our web server, we'll push out two other networks to verify our configuration.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/Wizard10.png" alt="Wizard10"></p>
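<p>As a quick sanity check (done here with Python's ipaddress module, outside of pfSense), we can confirm that the chosen tunnel range doesn't overlap the lab networks and that a /24 leaves plenty of client addresses:</p>

```python
import ipaddress

# The tunnel range chosen in the wizard, plus the lab networks we know about.
tunnel = ipaddress.ip_network("10.8.0.0/24")
local_nets = [
    ipaddress.ip_network("192.168.200.0/24"),
    ipaddress.ip_network("172.16.0.0/24"),
]

# The tunnel range must not overlap anything else on the network.
print(any(tunnel.overlaps(net) for net in local_nets))  # False

# A /24 gives 254 usable host addresses for the server and clients.
print(tunnel.num_addresses - 2)  # 254
```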
<p>The final stage is the automatic creation of appropriate firewall rules, as by default pfSense will block everything on the interfaces. Rules are needed for the LAN1 interface as well as for the OpenVPN interface itself.</p>
<h3 id="modificationofthevpntousejusttls">Modification of the VPN to Use Just TLS</h3>
<p>Once the VPN server has been created by the wizard, we will make a minor tweak to change the authentication mode to accept TLS/SSL logins only, rather than also requiring a username/password.</p>
<p>This is mostly to ease the configuration of the client1 settings to save having to add the username/password settings.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/OpenVPN1.png" alt="OpenVPN1"></p>
<h3 id="creationofusersforthevpn">Creation of Users for the VPN</h3>
<p>The final stage is the creation of a couple of users for each of the clients. We'll also create some appropriate certificates for each user which will be signed by our CA.</p>
<p>This is done in the User Manager section of the GUI, ticking the box to also create a certificate for each user. Although the VPN server will be set not to require a username/password, we will still create passwords as part of user creation.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/user1.png" alt="user1"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/user2.png" alt="user2"></p>
<p>As can be seen the certificate makes use of the CA that was previously created.</p>
<h3 id="exportofsettingstotheclients">Export of Settings to the Clients</h3>
<p>The final stage is to export the appropriate configuration files, certificates and keys to the clients and for this we'll make use of the plugin we previously installed.</p>
<p>The export options can be found under the Client Export section of the VPN menu.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client-export.png" alt="client-export"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client1-export-1.png" alt="client1-export-1"></p>
<p>The client export section has the relevant files for many different clients, including Windows, Android and Apple devices, which makes it a really useful tool.</p>
<p>It is a simple task to download the appropriate files, in our case the bundled configurations archive which has everything needed for our Linux clients.</p>
<h3 id="installationontotheclients">Installation onto the Clients</h3>
<p>The exported bundle for each client can be scp'd to the necessary machines and moved into the appropriate directory, /etc/openvpn.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client-install.png" alt="client-install"></p>
<p>Some minor tweaks to file ownership and permissions are also needed.</p>
<p>The two clients differ slightly, as the second client is an Ubuntu 20.04 desktop, which allows the .ovpn file to be imported easily using the network manager GUI.</p>
<p>As in our last lab, we will run client 1 from the command line to allow testing.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client-1-running.png" alt="client-1-running"></p>
<p>Once both clients are running we can run our tests.</p>
<h3 id="confirmationofopenvpnconnections">Confirmation of OpenVPN Connections</h3>
<p>The initial checks can be done within the GUI running on pfSense, which will confirm that both clients are connected.</p>
<p>This can be found under the OpenVPN status page.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/openvpn-connection-1.png" alt="openvpn-connection-1"></p>
<p>This confirms that the connection has been made, but it is also important to check the path that both clients use to reach 172.16.0.20, which should now be over the new tunnels via 10.8.0.1, the OpenVPN server on our pfSense machine.</p>
<p>We can also check the routing tables of the clients, which should have our new routes pushed to them, including the two extra networks we configured.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client1-confirmed.png" alt="client1-confirmed"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/client2-confirmed.png" alt="client2-confirmed"></p>
<p>This proves that we have set up OpenVPN and the clients are connecting to the remote network via the pfSense machine rather than the VirtualBox NAT interface.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/04/pfSense2-3.png" alt="pfSense2-3"></p>
<h2 id="conclusions">Conclusions</h2>
<p>pfSense makes creating an OpenVPN server about as user-friendly as setting up a VPN can be. Most things can be configured via wizards, which also deal with the creation of certificates and the appropriate configuration files.</p>
<p>The Client Export plugin makes it much easier to support a wide range of clients. It is also possible to make changes to the way the VPN works and easily regenerate the client files.</p>
<p>The final verification of the connections was done by checking that the routing tables of the clients had the new routes added and that the connection to the web server was indeed via the tunnel device rather than the physical interface.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[OpenVPN using Vagrant]]></title><description><![CDATA[<p></p><!--kg-card-begin: markdown--><p>OpenVPN is a popular opensource VPN solution and in this post we'll set up a Lab with a Virtual Machine acting as a gateway to securely access a Web Server running on the 172.16.0.0/24 network from a client. The lab will show the concepts of creating</p>]]></description><link>https://myblog.salterje.com/openvpn-using-vagrant/</link><guid isPermaLink="false">6045203ad2d5df031c252240</guid><category><![CDATA[Linux]]></category><category><![CDATA[Vagrant]]></category><category><![CDATA[OpenVPN]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Tue, 16 Mar 2021 21:43:24 GMT</pubDate><content:encoded><![CDATA[<p></p><!--kg-card-begin: markdown--><p>OpenVPN is a popular opensource VPN solution and in this post we'll set up a Lab with a Virtual Machine acting as a gateway to securely access a Web Server running on the 172.16.0.0/24 network from a client. The lab will show the concepts of creating a VPN tunnel running over the 192.168.200.0/24  link which we'll route traffic through to the end server.</p>
<p>Key to this will be the creation of suitable routes to ensure the traffic goes via the tunnel and not over the underlay network. We will also see the source address of the request from the client being modified by OpenVPN.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/03/OpenVPNLab-1.png" alt="OpenVPNLab-1"></p>
<h2 id="settingupbasiclab">Setting up Basic Lab</h2>
<p>The lab consists of 3 virtual machines running under VirtualBox, created with a suitable Vagrantfile. The Vagrantfile has some simple shell provisioning scripts to create static routes and, in the case of the OpenVPN server, to grab the installation script that will be used to set up the server.</p>
<p>The provisioning also sets up an Nginx instance that will be used as a test target to check the connection to a remote server and pull back HTTP traffic over the VPN tunnel.</p>
<p>The scripts can be found at <a href="https://github.com/salterje/openvpn-vagrant">https://github.com/salterje/openvpn-vagrant</a></p>
<h2 id="checkingthingsbeforeopenvpn">Checking Things Before OpenVPN</h2>
<p>Because Vagrant always creates a default gateway on eth0, the provisioning scripts have created some static routes to ensure that all communication between the web server and the client takes place through the OpenVPN machine rather than through the host.</p>
<p>In addition, the OpenVPN machine has been provisioned to allow IP forwarding between its interfaces, ensuring that web server and client traffic can be forwarded between its eth1 and eth2 interfaces.</p>
<p>This can be confirmed by checking the status of the ip_forward bit within /proc/sys/net/ipv4/ip_forward and also confirming the routing table.</p>
<pre><code>vagrant@openvpn-1:~$ cat /proc/sys/net/ipv4/ip_forward
1

vagrant@openvpn-1:~$ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
172.16.0.0/24 dev eth2 proto kernel scope link src 172.16.0.10 
192.168.200.0/24 dev eth1 proto kernel scope link src 192.168.200.10 

</code></pre>
<p>The routing tables can also be confirmed on the two other machines.</p>
<pre><code>vagrant@client-1:~$ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
172.16.0.0/24 via 192.168.200.10 dev eth1 proto static 
192.168.200.0/24 dev eth1 proto kernel scope link src 192.168.200.11 
</code></pre>
<pre><code>vagrant@web-1:~$ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
172.16.0.0/24 dev eth1 proto kernel scope link src 172.16.0.20 
192.168.200.0/24 via 172.16.0.10 dev eth1 proto static 


</code></pre>
<p>If everything is working it should be possible to make a connection between the client and the web server through the OpenVPN machine and pull back the test page being served by Nginx.</p>
<pre><code>vagrant@client-1:~$ tracepath -n 172.16.0.20
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.200.10                                        0.393ms
 1:  192.168.200.10                                        0.240ms
 2:  172.16.0.20                                           0.454ms reached
     Resume: pmtu 1500 hops 2 back 2
vagrant@client-1:~$ curl 172.16.0.20
&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
&lt;title&gt;Welcome to nginx!&lt;/title&gt;
&lt;style&gt;
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
&lt;/style&gt;
&lt;/head&gt;
&lt;body&gt;
&lt;h1&gt;Welcome to nginx!&lt;/h1&gt;
&lt;p&gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&lt;/p&gt;

&lt;p&gt;For online documentation and support please refer to
&lt;a href=&quot;http://nginx.org/&quot;&gt;nginx.org&lt;/a&gt;.&lt;br/&gt;
Commercial support is available at
&lt;a href=&quot;http://nginx.com/&quot;&gt;nginx.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thank you for using nginx.&lt;/em&gt;&lt;/p&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre>
<p>This confirms that the client can get the content from the web server and all traffic is routed through the server in the middle rather than via the underlying host.</p>
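<p>The next hop the kernel picks can be reproduced with a small longest-prefix-match sketch over the client's routing table shown earlier (a simplification of the kernel's lookup, ignoring metrics and scope):</p>

```python
import ipaddress

# Simplified longest-prefix match over the client's routing table.
# (A real kernel lookup also weighs metrics and route scope.)
routes = [
    ("0.0.0.0/0", "10.0.2.2"),           # default via VirtualBox NAT
    ("10.0.2.0/24", None),               # directly connected, eth0
    ("172.16.0.0/24", "192.168.200.10"), # static route via the middle box
    ("192.168.200.0/24", None),          # directly connected, eth1
]

def next_hop(dst: str):
    """Return the gateway for dst, or None if directly connected."""
    dst_ip = ipaddress.ip_address(dst)
    matches = [
        (ipaddress.ip_network(net), via)
        for net, via in routes
        if dst_ip in ipaddress.ip_network(net)
    ]
    # The most specific (longest) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("172.16.0.20"))  # 192.168.200.10 -- via the OpenVPN machine
print(next_hop("8.8.8.8"))      # 10.0.2.2 -- falls back to the default route
```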
<p>As a final test we will use tcpdump to check the traffic passing through the eth1 and eth2 interfaces of the server in the middle by pinging from the client to the web server.</p>
<pre><code>vagrant@client-1:~$ ping -s 200 172.16.0.20
PING 172.16.0.20 (172.16.0.20) 200(228) bytes of data.
208 bytes from 172.16.0.20: icmp_seq=1 ttl=63 time=1.43 ms
208 bytes from 172.16.0.20: icmp_seq=2 ttl=63 time=1.46 ms
208 bytes from 172.16.0.20: icmp_seq=3 ttl=63 time=0.573 ms
208 bytes from 172.16.0.20: icmp_seq=4 ttl=63 time=1.56 ms
</code></pre>
<pre><code>vagrant@openvpn-1:~$ sudo tcpdump -i eth1 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
21:37:34.740360 IP 192.168.200.11 &gt; 172.16.0.20: ICMP echo request, id 2, seq 1, length 208
21:37:34.741099 IP 172.16.0.20 &gt; 192.168.200.11: ICMP echo reply, id 2, seq 1, length 208
21:37:35.741898 IP 192.168.200.11 &gt; 172.16.0.20: ICMP echo request, id 2, seq 2, length 208
21:37:35.742631 IP 172.16.0.20 &gt; 192.168.200.11: ICMP echo reply, id 2, seq 2, length 208
21:37:36.743809 IP 192.168.200.11 &gt; 172.16.0.20: ICMP echo request, id 2, seq 3, length 208
</code></pre>
<pre><code>vagrant@openvpn-1:~$ sudo tcpdump -i eth2 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
21:38:46.277016 IP 192.168.200.11 &gt; 172.16.0.20: ICMP echo request, id 3, seq 1, length 208
21:38:46.277287 IP 172.16.0.20 &gt; 192.168.200.11: ICMP echo reply, id 3, seq 1, length 208
21:38:47.286199 IP 192.168.200.11 &gt; 172.16.0.20: ICMP echo request, id 3, seq 2, length 208
21:38:47.286862 IP 172.16.0.20 &gt; 192.168.200.11: ICMP echo reply, id 3, seq 2, length 208
21:38:48.287739 IP 192.168.200.11 &gt; 172.16.0.20: ICMP echo request, id 3, seq 3, length 208
</code></pre>
<p>This proves that the ICMP traffic is being sent through the server in the middle from the client eth1 interface at 192.168.200.11 to the web server eth1 interface at 172.16.0.20.</p>
<p>It is now time to install OpenVPN on the server in the middle and create a VPN tunnel over the link between 192.168.200.11 on the client and 192.168.200.10 on the server in the middle. The aim is to encrypt all traffic leaving the client for the web server.</p>
<h2 id="installingopenvpnserver">Installing OpenVPN Server</h2>
<p><img src="https://myblog.salterje.com/content/images/2021/03/OpenVPNLab2.png" alt="OpenVPNLab2"></p>
<p>Setting up a basic OpenVPN Server on Ubuntu can be done by downloading the appropriate installation script, making it executable and running it.</p>
<pre><code>wget https://git.io/vpn -O openvpn-ubuntu-install.sh
chmod -v +x openvpn-ubuntu-install.sh
sudo ./openvpn-ubuntu-install.sh

</code></pre>
<p>Running the script and entering a few parameters creates the necessary setup on the server. It also generates the necessary OpenVPN client configuration.</p>
<pre><code>Welcome to this OpenVPN road warrior installer!

Which IPv4 address should be used?
     1) 10.0.2.15
     2) 192.168.200.10
     3) 172.16.0.10
IPv4 address [1]: 2

This server is behind NAT. What is the public IPv4 address or hostname?
Public IPv4 address / hostname [151.230.176.38]: 192.168.200.10

Which protocol should OpenVPN use?
   1) UDP (recommended)
   2) TCP
Protocol [1]: 

What port should OpenVPN listen to?
Port [1194]: 

Select a DNS server for the clients:
   1) Current system resolvers
   2) Google
   3) 1.1.1.1
   4) OpenDNS
   5) Quad9
   6) AdGuard
DNS server [1]: 

DNS server [1]: 

Enter a name for the first client:
Name [client]: 

OpenVPN installation is ready to begin.
Press any key to continue...

The client configuration is available in: /root/client.ovpn
New clients can be added by running this script again.
vagrant@openvpn-1:~$ 

</code></pre>
<p>The necessary client.ovpn file can now be copied to the client machine via scp.</p>
<p>It should be noted that in this particular lab we have kept everything local and are using the IP addresses of our virtual machines. The aim is to ensure that the traffic travelling over the 192.168.200.0/24 network between the virtual machines is protected.</p>
<p>The installer has actually set up a new tunnel interface and made adjustments to the route table of the OpenVPN server.</p>
<pre><code>vagrant@openvpn-1:~$ ip addr
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:14:86:db brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 76475sec preferred_lft 76475sec
    inet6 fe80::a00:27ff:fe14:86db/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:3e:f5:ae brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.10/24 brd 192.168.200.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe3e:f5ae/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:45:47:54 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.10/24 brd 172.16.0.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe45:4754/64 scope link
       valid_lft forever preferred_lft forever
5: tun0: &lt;POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 10.8.0.1/24 brd 10.8.0.255 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::71ba:b44:d082:cbf4/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

vagrant@openvpn-1:~$ ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.1 
172.16.0.0/24 dev eth2 proto kernel scope link src 172.16.0.10 
192.168.200.0/24 dev eth1 proto kernel scope link src 192.168.200.10 
</code></pre>
<h2 id="startingupopenvpnasaservice">Starting up OpenVPN as a Service</h2>
<p>We will set OpenVPN to run as a service on the openvpn Virtual Machine.</p>
<pre><code>vagrant@openvpn-1:~$ sudo systemctl start openvpn-server@server.service
vagrant@openvpn-1:~$ sudo systemctl enable openvpn-server@server.service
vagrant@openvpn-1:~$ sudo systemctl status openvpn-server@server.service
● openvpn-server@server.service - OpenVPN service for server
     Loaded: loaded (/lib/systemd/system/openvpn-server@.service; enabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-03-13 21:47:49 UTC; 7min ago
       Docs: man:openvpn(8)
             https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
             https://community.openvpn.net/openvpn/wiki/HOWTO
   Main PID: 43562 (openvpn)
     Status: &quot;Initialization Sequence Completed&quot;
      Tasks: 1 (limit: 2281)
     Memory: 1.2M
     CGroup: /system.slice/system-openvpn\x2dserver.slice/openvpn-server@server.service
             └─43562 /usr/sbin/openvpn --status /run/openvpn-server/status-server.log --status-version 2 --suppress-timestamps --config server.conf

Mar 13 21:47:49 openvpn-1 openvpn[43562]: Could not determine IPv4/IPv6 protocol. Using AF_INET
Mar 13 21:47:49 openvpn-1 openvpn[43562]: Socket Buffers: R=[212992-&gt;212992] S=[212992-&gt;212992]
Mar 13 21:47:49 openvpn-1 openvpn[43562]: UDPv4 link local (bound): [AF_INET]192.168.200.10:1194
Mar 13 21:47:49 openvpn-1 openvpn[43562]: UDPv4 link remote: [AF_UNSPEC]
Mar 13 21:47:49 openvpn-1 openvpn[43562]: GID set to nogroup
Mar 13 21:47:49 openvpn-1 openvpn[43562]: UID set to nobody
Mar 13 21:47:49 openvpn-1 openvpn[43562]: MULTI: multi_init called, r=256 v=256
Mar 13 21:47:49 openvpn-1 openvpn[43562]: IFCONFIG POOL: base=10.8.0.2 size=252, ipv6=0
Mar 13 21:47:49 openvpn-1 openvpn[43562]: IFCONFIG POOL LIST
Mar 13 21:47:49 openvpn-1 openvpn[43562]: Initialization Sequence Completed
</code></pre>
<h2 id="settinguptheclientmachine">Setting up the Client Machine</h2>
<p>Once the file has been copied we can install the OpenVPN client software.</p>
<pre><code>vagrant@client-1:~$ sudo apt install openvpn
sudo cp client.ovpn /etc/openvpn/client.conf
sudo openvpn --client --config /etc/openvpn/client.conf

</code></pre>
<pre><code>vagrant@client-1:~$ ip addr
1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:14:86:db brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 76001sec preferred_lft 76001sec
    inet6 fe80::a00:27ff:fe14:86db/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:f0:b6:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.11/24 brd 192.168.200.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef0:b62a/64 scope link
       valid_lft forever preferred_lft forever
5: tun0: &lt;POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none
    inet 10.8.0.2/24 brd 10.8.0.255 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::eb83:64ee:5f42:c433/64 scope link stable-privacy
       valid_lft forever preferred_lft forever


</code></pre>
<p>In our particular lab, because of the way Vagrant uses eth0 as its default gateway, we must make a small change so that the server's physical address of 192.168.200.10 is reached via eth1 rather than eth0. We must also remove the previous static route that reached the web server via the physical interface, as we need that traffic to go via the tunnel.</p>
<pre><code>vagrant@client-1:~$ sudo ip route del 192.168.200.10 via 10.0.2.2 dev eth0
vagrant@client-1:~$ sudo ip route del 172.16.0.0/24 via 192.168.200.10 dev eth1 
vagrant@client-1:~$ tracepath -n 192.168.200.10
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.200.10                                        0.809ms reached
 1:  192.168.200.10                                        0.675ms reached
     Resume: pmtu 1500 hops 1 back 1 
vagrant@client-1:~$ ip route
0.0.0.0/1 via 10.8.0.1 dev tun0 
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 
10.8.0.0/24 dev tun0 proto kernel scope link src 10.8.0.2 
128.0.0.0/1 via 10.8.0.1 dev tun0 
172.16.0.0/24 via 192.168.200.10 dev eth1 proto static 
192.168.200.0/24 dev eth1 proto kernel scope link src 192.168.200.11 
</code></pre>
<p>The client has added two new routes that send all traffic out of the tunnel interface: 0.0.0.0/1 via 10.8.0.1 and 128.0.0.0/1 via 10.8.0.1.</p>
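<p>Python's ipaddress module can illustrate why this pair of /1 routes is used (a sketch of the routing trick, not OpenVPN code): together the two halves cover the entire IPv4 space, and each is more specific than the default route, so they win the longest-prefix match without the existing default route having to be deleted.</p>

```python
import ipaddress

low = ipaddress.ip_network("0.0.0.0/1")
high = ipaddress.ip_network("128.0.0.0/1")
default = ipaddress.ip_network("0.0.0.0/0")

# Every IPv4 address falls into exactly one of the two halves...
print(low.num_addresses + high.num_addresses == default.num_addresses)  # True

# ...and both halves are more specific than the default route, so
# longest-prefix match prefers the tunnel without touching the default.
print(low.prefixlen > default.prefixlen)  # True (1 > 0)
```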
<p>Once this is done, traffic between the client and the OpenVPN server is routed via the tun0 interface rather than eth1 (although tun0 still makes use of that link). All traffic is encrypted and carried over port 1194.</p>
<pre><code>vagrant@openvpn-1:~$ sudo tcpdump -ni eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
22:24:10.968408 IP 192.168.200.11.45357 &gt; 192.168.200.10.1194: UDP, length 252
22:24:10.969500 IP 192.168.200.10.1194 &gt; 192.168.200.11.45357: UDP, length 252
22:24:11.969848 IP 192.168.200.11.45357 &gt; 192.168.200.10.1194: UDP, length 252
22:24:11.970961 IP 192.168.200.10.1194 &gt; 192.168.200.11.45357: UDP, length 252
22:24:12.971290 IP 192.168.200.11.45357 &gt; 192.168.200.10.1194: UDP, length 252


vagrant@openvpn-1:~$ sudo tcpdump -ni tun0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tun0, link-type RAW (Raw IP), capture size 262144 bytes
22:24:30.999390 IP 10.8.0.2 &gt; 172.16.0.20: ICMP echo request, id 9, seq 492, length 208
22:24:31.000122 IP 172.16.0.20 &gt; 10.8.0.2: ICMP echo reply, id 9, seq 492, length 208
22:24:31.999759 IP 10.8.0.2 &gt; 172.16.0.20: ICMP echo request, id 9, seq 493, length 208
22:24:32.000430 IP 172.16.0.20 &gt; 10.8.0.2: ICMP echo reply, id 9, seq 493, length 208
</code></pre>
<p>We can see that the ICMP traffic is going over the tun0 link and has had its source address changed to 10.8.0.2 (the address of the OpenVPN client). We can also see that the traffic going over eth1 is UDP traffic on port 1194.</p>
<pre><code>vagrant@openvpn-1:~$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 227 packets, 188K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 146 packets, 145K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 9 packets, 726 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 19 packets, 10350 bytes)
 pkts bytes target     prot opt in     out     source               destination
    3  1788 SNAT       all  --  *      *       10.8.0.0/24         !10.8.0.0/24          to:192.168.200.10
</code></pre>
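<p>The single SNAT entry in the POSTROUTING chain above can be read as: if the source is in 10.8.0.0/24 and the destination is outside it, rewrite the source to 192.168.200.10. A toy model of that match-and-rewrite logic (illustrative only, not real netfilter code):</p>

```python
import ipaddress

# Mirrors the '10.8.0.0/24 !10.8.0.0/24 to:192.168.200.10' SNAT rule.
VPN_NET = ipaddress.ip_network("10.8.0.0/24")
SNAT_TO = "192.168.200.10"

def postrouting_snat(src: str, dst: str) -> str:
    """Return the packet's source address after the POSTROUTING SNAT rule."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if s in VPN_NET and d not in VPN_NET:
        return SNAT_TO  # rewritten to the server's eth1 address
    return src          # rule doesn't match; address unchanged

print(postrouting_snat("10.8.0.2", "172.16.0.20"))  # 192.168.200.10
print(postrouting_snat("10.8.0.2", "10.8.0.1"))     # 10.8.0.2
```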
<h2 id="conclusions">Conclusions</h2>
<p>This Lab has shown how to get a basic OpenVPN setup using Virtualbox and Vagrant.</p>
<p>It has highlighted the importance of host routing within the virtual machines to ensure that traffic is sent out of the correct interface, particularly when multiple interfaces are involved. Vagrant allows the easy creation of virtual machines, but because it creates a management/configuration interface on eth0, which also serves as the default gateway, further routing modifications had to be made to ensure traffic didn't get routed via the host.</p>
<p>OpenVPN also installs routes and performs NAT, rewriting the source address the client uses to reach the final remote network. The 0.0.0.0/1 and 128.0.0.0/1 routes together match all traffic leaving the client and send it through the tunnel, without the normal default route needing to be overwritten.</p>
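<p>As a rough illustration, after the client connects its routing table ends up along these lines (the tun0 peer address 10.8.0.1 and the LAN gateway shown here are assumptions for the sketch, not taken from this lab):</p>
<pre><code class="language-yaml">0.0.0.0/1 via 10.8.0.1 dev tun0
128.0.0.0/1 via 10.8.0.1 dev tun0
default via 192.168.1.1 dev eth0
</code></pre>
<p>Both /1 routes are more specific than the /0 default, so longest-prefix matching steers every destination into the tunnel while the original default route is left in place.</p>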
<p>tcpdump was used to confirm that the traffic going over the underlay network was not ICMP but encrypted traffic using the standard OpenVPN port 1194.</p>
<p>When the traffic was inspected over the tunnel it was shown to be ICMP that used the tunnel source IP address of 10.8.0.2.</p>
<p>While the Lab is not a realistic scenario, it shows the use of routing table inspection and tcpdump to verify the VPN is working.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[My Experience of the CKA Exam]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://myblog.salterje.com/content/images/2021/02/CKA_Certificate.png" alt="CKA_Certificate"></p>
<p>This is a quick overview of my experience in sitting and passing  the Certified Kubernetes Administrator exam,  the first time I have sat a Linux Foundation certification.</p>
<p>The exam was taken at home,  another thing that I have not experienced before which was of concern to me prior to the</p>]]></description><link>https://myblog.salterje.com/my-experience-of-the-cka-exam/</link><guid isPermaLink="false">602ae58fd2d5df031c25222f</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Mon, 15 Feb 2021 21:26:27 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><img src="https://myblog.salterje.com/content/images/2021/02/CKA_Certificate.png" alt="CKA_Certificate"></p>
<p>This is a quick overview of my experience in sitting and passing the Certified Kubernetes Administrator exam, the first time I have sat a Linux Foundation certification.</p>
<p>The exam was taken at home, another thing that I had not experienced before, which was of concern to me prior to the test. All my previous certifications have been done at dedicated testing facilities and I had no real idea of what the testing experience would be like at home.</p>
<p>I started getting exam reminders a few days before the exam and these emails had a link to the Linux Foundation portal that would be used to gain access to the test, along with relevant background information. This included the system requirements, which were basically Chrome with a plugin for the screen monitoring; in my case this ran on a laptop running Ubuntu 18.04.</p>
<p>I did the exam from the dining room table and made sure that I had cleared the room and surroundings of any clutter as detailed in the instructions, logging onto the site about 15 minutes before my timeslot. In due course the proctor made contact via a chat window and we spent some time panning the webcam of the laptop over the room and table to ensure everything was clear and there was nothing in the room that would aid my attempt.</p>
<p>It was at this point that I found the wifi signal in the room became intermittent (I'm sure my wifi has a setting to only start playing up on important calls or in this case an important test...). I lost connection a couple of times but found the proctor still present in the chat and after running a CAT5 cable from my laptop directly to the router I was back in business. This took a while to get working but I found the proctor was very patient and once everything was set the exam was opened.</p>
<p>I cannot disclose much about the exam because of the NDA but can say that I found the layout and questions clear. All work is done within the browser, which has a terminal window, and you are allowed one other tab which can display documentation from Kubernetes.io.</p>
<p>The layout meant that the questions were visible while working in the terminal window and the only switching between tabs was when checking documentation. Each question had a clear indication of the context switch that needed to be carried out to ensure you were working on the correct cluster and where necessary it was possible to open up an ssh session to a Host.</p>
<p>I found the exam to be fair and it conformed fully to the blueprint; all questions are task-based and it was clear what was expected. Some tasks were more involved than others and these questions had a higher weighting.</p>
<p>The nature of the test meant that one has to be fairly quick but not at the expense of accuracy. It's very important to take the time to read the question carefully to ensure you carry out exactly what is asked of you. I struggled a little at the beginning of the test with copy and paste, but this was probably as much to do with the muscle memory of working in a normal terminal session as with nerves.</p>
<p>I was able to complete all the questions but didn't get a chance to double-check some of the answers, and there was at least one task that I knew I hadn't completed (sometimes you just have to cut your losses, and a combination of the ticking clock and the time warnings meant I ended up skipping to the next question).</p>
<p>All in all I think the experience was quite positive and a fairer test than some other vendors offer (I'm looking at you, Cisco...). It is a test that is very hands-on and I would recommend the only way to study is to build, upgrade, back up and break clusters in as many ways as possible (managed Kubernetes or minikube will only take you so far).</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Custom Column Reports using Kubectl]]></title><description><![CDATA[How to make use of jsonpath queries to generate custom column reports using kubectl]]></description><link>https://myblog.salterje.com/custom-column-reports-using-kubectl/</link><guid isPermaLink="false">6016f598d2d5df031c252198</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Sun, 31 Jan 2021 20:43:57 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The standard output options available with the kubectl command allow gathering of  information about our Kubernetes cluster and what is running within it. In many circumstances this may be sufficient but sometimes there is a need to display information not part of the standard kubectl output.</p>
<p>For example it is relatively easy to determine what Pods are running in a particular namespace.</p>
<pre><code class="language-yaml">kubectl get pods 
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/01/get-pods.png" alt="get-pods"></p>
<p>The output tells us which Pods are running within the namespace and we can also get a bit more information by adding the -o wide flag.</p>
<pre><code class="language-yaml">kubectl get pods -o wide
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/01/get-pods-wide.png" alt="get-pods-wide"></p>
<p>This gives us more information, including the Node the Pod is currently running on and its IP address. However there is no indication of which image has been used or what container ports are in use.</p>
<p>To get this extra information it is necessary to describe the Pod or Deployment which returns a lot of additional information which may or may not be of interest.</p>
<pre><code class="language-yaml">kubectl describe pod website1-7758c6fd5-db5sf
</code></pre>
<pre><code class="language-yaml">Name:         website1-7758c6fd5-db5sf
Namespace:    test
Priority:     0
Node:         k8s-node-1/10.53.104.77
Start Time:   Sun, 31 Jan 2021 16:29:05 +0000
Labels:       app=website1
              pod-template-hash=7758c6fd5
              project=blog-entry
Annotations:  &lt;none&gt;
Status:       Running
IP:           10.44.0.1
IPs:
  IP:           10.44.0.1
Controlled By:  ReplicaSet/website1-7758c6fd5
Containers:
  nginx:
    Container ID:   docker://a91b2c9de7596afca05a68c0cdf547852cb40241c8a799520f44ce7f89dc969c
    Image:          nginx:1.18.0-alpine
    Image ID:       docker-pullable://nginx@sha256:7ae8e5c3080f6012f8dc719e2308e60e015fcfa281c3b12bf95614bd8b6911d6
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 31 Jan 2021 16:29:10 +0000
    Ready:          True
    Restart Count:  0
    Environment:    &lt;none&gt;
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-jwtw5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-jwtw5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-jwtw5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  &lt;none&gt;
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  36m   default-scheduler  Successfully assigned test/website1-7758c6fd5-db5sf to k8s-node-1
  Normal  Pulling    36m   kubelet            Pulling image &quot;nginx:1.18.0-alpine&quot;
  Normal  Pulled     36m   kubelet            Successfully pulled image &quot;nginx:1.18.0-alpine&quot; in 4.286046676s
  Normal  Created    36m   kubelet            Created container nginx
  Normal  Started    36m   kubelet            Started container nginx
</code></pre>
<pre><code class="language-yaml">kubectl describe deployment website1
</code></pre>
<pre><code class="language-yaml">Name:                   website1
Namespace:              test
CreationTimestamp:      Sun, 31 Jan 2021 16:29:05 +0000
Labels:                 app=website1
                        project=blog-entry
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=website1
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=website1
           project=blog-entry
  Containers:
   nginx:
    Image:        nginx:1.18.0-alpine
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  &lt;none&gt;
    Mounts:       &lt;none&gt;
  Volumes:        &lt;none&gt;
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  &lt;none&gt;
NewReplicaSet:   website1-7758c6fd5 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  37m   deployment-controller  Scaled up replica set website1-7758c6fd5 to 1

</code></pre>
<p>As can be seen, describing the Pods and Deployments generates a lot of additional information, which is limited to those particular objects. In our case we would like to compare this information across all Pods running in the namespace, which would mean running the command for each Pod or Deployment.</p>
<p>When kubectl is used it actually interrogates the api-server and pulls back the complete JSON data for the whole object, although by default this is displayed in a cut-down form.</p>
<p>It is also possible to output in either yaml or json format, which is often too much information.</p>
<pre><code class="language-yaml">kubectl get pods -o yaml
kubectl get pods -o json
</code></pre>
<p>However it is worth taking the time to become familiar with the full json data being returned by the kubectl commands and learn how to use jsonpath queries to extract desired information.</p>
<p>Using output custom columns allows information extracted by these jsonpath queries to be displayed in the familiar format used by standard kubectl commands.</p>
<p>In this example we will extract the image and container ports from the returned status of the Pods.</p>
<p>We will then display these details along with the Pod name, Node details and Pod IP details. This will give a single summary report allowing us to view the information for all Pods in the namespace at the same time.</p>
<pre><code class="language-yaml"> kubectl get pods -o custom-columns=NAME:.metadata.name,IP:.status.podIPs[*].ip,PORT:.spec.containers[*].ports[*].containerPort,IMAGE:.status.containerStatuses[*].image,NODE:.spec.nodeName
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/01/custom-column-report-1.png" alt="custom-column-report-1"></p>
<p>The use of custom columns has enabled us to get a formatted list that looks similar to the original output but also including the image and container port details.</p>
<p>We can now see that my-test-5dfcd48dd9-dc88v is actually a busybox Pod, which would not be obvious from its name. We can also see that the nginx Pods are exposing port 8080, which must match any associated services.</p>
<p>We can also see that the two Pods running nginx are not running the same version of nginx (which may or may not be an issue).</p>
<p>In this initial case the command requires a lot of typing, but once a query has been checked it can be saved to a template file for reuse in the future.</p>
<p>The format of the file is similar to the typed information.</p>
<pre><code class="language-yaml">cat my-report.txt 
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/01/my-report.png" alt="my-report"></p>
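<p>For reference, a custom-columns template matching the query used above would look something like this (a header row of column names followed by a row of the corresponding jsonpath expressions; the exact spacing is not significant, and this is a reconstruction from the command rather than the exact file in the screenshot):</p>
<pre><code class="language-yaml">NAME            IP                    PORT                                        IMAGE                               NODE
metadata.name   status.podIPs[*].ip   spec.containers[*].ports[*].containerPort   status.containerStatuses[*].image   spec.nodeName
</code></pre>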
<p>To run the report it is a simple command to reference the file while setting the output format to custom columns.</p>
<pre><code class="language-yaml">kubectl get pods -o custom-columns-file=my-report.txt 
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/01/custom-column-report.png" alt="custom-column-report"></p>
<h2 id="conclusions">Conclusions</h2>
<p>When there is a need to generate reports containing information normally only available by describing an object or trawling through the full .yaml or .json outputs, the use of custom columns is very useful. Custom-column reports give a good way of comparing objects such as Pods and their current status.</p>
<p>While this still means constructing suitable jsonpath query expressions, once these are done and the report is in the correct format they can be saved as a template file that makes it easy to run the query again in the future. These template files can be used on any machine that has access to the api-server, making them available for scripting and automation.</p>
<p>It is very much worth taking the time to study the json format of returned api-calls to get an idea of what information is available, as this can save time in fault-finding.</p>
<p>In this particular example we now have a simple way of looking at the images that have been used for the running Pods as well as their container ports.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using Netcat for Testing Container Connectivity]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post examines making use of the netcat utility to confirm connectivity between containers running within a Kubernetes cluster. Netcat is a really useful tool for testing network connectivity which is part of the standard Busybox image.</p>
<p>The tool can be used for setting up a basic server listening on</p>]]></description><link>https://myblog.salterje.com/using-netcat-for-testing-container-connectivity/</link><guid isPermaLink="false">5ff34214d2d5df031c25205b</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Mon, 18 Jan 2021 19:51:03 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post examines making use of the netcat utility to confirm connectivity between containers running within a Kubernetes cluster. Netcat is a really useful tool for testing network connectivity which is part of the standard Busybox image.</p>
<p>The tool can be used for setting up a basic server listening on a particular port, as well as acting as a client connecting to a port on a different container.</p>
<p>We'll create some Busybox Deployments, linked with a suitable service and create a simple TCP server to listen on a particular port. This will allow us to confirm connectivity from another Pod acting as a client via the connected service.</p>
<h2 id="netcattcpserver">Netcat TCP Server</h2>
<p>A simple Deployment is created using the following manifest.</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: netcat-server
  name: netcat-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat-server
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: netcat-server
        networking/allow-connection-netcat-client: &quot;true&quot;
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command: [&quot;sh&quot;,&quot;-c&quot;]
        args:
        - echo &quot;Beginning Test at $(date)&quot;;
          echo &quot;Starting Netcat Server&quot;;
          while true; do { echo -e &quot;HTTP/1.1 200 OK\r\n&quot;; echo &quot;This is some text sent at $(date) by $(hostname)&quot;; } | nc -l -vp 8080; done

</code></pre>
<p>The simple shell script causes netcat to listen on port 8080 with verbose logging and return a minimal HTTP response, sending back some text with the current date and the hostname.</p>
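<p>The response the loop pipes into nc can be previewed locally without a cluster, assuming a POSIX shell (printf is used here as an approximation of the echo -e call in the manifest):</p>
<pre><code class="language-yaml">printf 'HTTP/1.1 200 OK\r\n\n'
echo This is some text sent at $(date) by $(hostname)
</code></pre>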
<p>We'll create another Deployment to act as a simple client to connect to our server.</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: netcat-client
  name: netcat-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat-client
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: netcat-client
        networking/allow-connection-netcat-client: &quot;true&quot;
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command: [&quot;sh&quot;,&quot;-c&quot;]
        args:
        - echo &quot;Beginning Test at $(date)&quot;;
          sleep 3600
</code></pre>
<p>The two Deployments can be created by running the manifests</p>
<pre><code class="language-yaml">kubectl create -f netcat-server.yaml
kubectl create -f netcat-client.yaml
</code></pre>
<p>We'll also expose the netcat-server deployment to create a clusterIP service that will give us an easy way of connecting to it.</p>
<pre><code class="language-yaml">kubectl expose deployment netcat-server --port=8080 
</code></pre>
<p>This has created our basic test setup and we'll also scale up the number of servers that we have running.</p>
<pre><code class="language-yaml">kubectl scale deployment netcat-server --replicas=3 --record
</code></pre>
<p>To confirm what is running</p>
<pre><code class="language-yaml">kubectl get all
</code></pre>
<p>This will show that we have 3 replicas of the netcat-server deployment.</p>
<p><img src="https://myblog.salterje.com/content/images/2021/01/NetCatLab.png" alt="NetCatLab"></p>
<p><img src="https://myblog.salterje.com/content/images/2021/01/kubectl-get-all.png" alt="kubectl-get-all"></p>
<h2 id="provingconnectivity">Proving Connectivity</h2>
<p>To test the connectivity we will run an interactive shell on the busybox-client Pod and connect to the netcat-server deployment on port 8080 via the netcat-server service, which will load-balance between the endpoint Pods.</p>
<pre><code class="language-yaml">kubectl exec -it netcat-client-bfffcbcf9-bpgvf -- sh
</code></pre>
<p>We can use netcat as a client to connect to the service, proving that the netcat server running on the remote Pods is returning our simple message.</p>
<pre><code class="language-yaml"># nc netcat-server 8080
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2021/01/running-nc-on-client.png" alt="running-nc-on-client"></p>
<p>To send another request from the client it is necessary to escape with CTRL-C each time.</p>
<p>It can be seen that the messages being returned are actually coming from multiple containers, running on different Pods that form the netcat-server Deployment.</p>
<h2 id="conclusions">Conclusions</h2>
<p>Netcat can be used to create a simple server within a container listening on a set port. In this example the server is returning a simple message with the hostname and date, allowing us to confirm that the associated service is indeed load balancing between the end Pods.</p>
<p>Netcat can also be used as a client to get the returned information from the server, which we have done from another Busybox container.</p>
<p>In this lab we run a small script to bring the server up when the container within the Pod is created. The script then returns the hostname and date to anything connecting to the Pod on port 8080.</p>
<p>It is also possible to create basic Busybox containers that can be used ad-hoc via interactive shells, giving a good way of troubleshooting connectivity issues within the cluster.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using Kubernetes RBAC for Creation of New User]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>The user that kubeadm creates when building a cluster has full access, allowing the creation and deletion of anything within the cluster.</p>
<p>In this post we will create a new user, salterje, which only has the ability to gather information about running Pods running within the default namespace of the</p>]]></description><link>https://myblog.salterje.com/using-kubernetes-rbac-for-creation-of-new-user/</link><guid isPermaLink="false">5fd64a9ed2d5df031c251ee6</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Mon, 14 Dec 2020 21:44:40 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The user that kubeadm creates when building a cluster has full access, allowing the creation and deletion of anything within the cluster.</p>
<p>In this post we will create a new user, salterje, which only has the ability to gather information about running Pods running within the default namespace of the cluster.</p>
<p>In order to do this a private key is generated and used to create a certificate signing request, which is then signed by the cluster Certificate Authority. This allows a suitable certificate to be issued for our new user.</p>
<p>We must then modify the default kubeconfig file to add the user's credentials along with a suitable context that will be used in place of the default kubernetes admin context.</p>
<p>The final step is the creation of a suitable role that only allows the viewing of Pods and for this we will create a role binding object linking the role to the newly created salterje user.</p>
<h2 id="creationofnewuser">Creation of New User</h2>
<p>The user that is to be created must first have a private key generated.</p>
<h3 id="creationofprivatekeyforuser">Creation of Private Key for User</h3>
<pre><code class="language-yaml">openssl genrsa -out salterje.key 2048
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/12/openssl-genrsa.png" alt="openssl-genrsa"></p>
<h3 id="createacertificatesigningrequest">Create a Certificate Signing Request</h3>
<p>The next stage is to create a Certificate Signing Request using the Kubernetes CA and the newly generated private key.</p>
<pre><code class="language-yaml">openssl req -new -key salterje.key -out salterje.csr -subj &quot;/CN=salterje/O=projects&quot;
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/12/openssl-csr.png" alt="openssl-csr"></p>
<h3 id="usetheclustercaandcakeytosigncsr">Use the Cluster CA and CA.key to Sign CSR</h3>
<p>The generated CSR can now be signed by the cluster CA certificate and CA.key</p>
<p><img src="https://myblog.salterje.com/content/images/2020/12/opensslx509-crt.png" alt="opensslx509-crt"></p>
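<p>The signing step shown in the screenshot uses the standard openssl x509 -req form. A self-contained sketch is below; note it generates a throwaway CA on the spot so it can run anywhere, whereas on the real cluster you would point -CA and -CAkey at /etc/kubernetes/pki/ca.crt and ca.key, and the 365-day validity is just an example value:</p>
<pre><code class="language-yaml"># throwaway CA standing in for the cluster CA in /etc/kubernetes/pki
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 365 -subj '/CN=kubernetes-ca' -out ca.crt

# user key and CSR, as in the steps above
openssl genrsa -out salterje.key 2048
openssl req -new -key salterje.key -out salterje.csr -subj '/CN=salterje/O=projects'

# sign the CSR with the CA certificate and key
openssl x509 -req -in salterje.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out salterje.crt -days 365

# confirm the signed certificate chains back to the CA
openssl verify -CAfile ca.crt salterje.crt
</code></pre>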
<h2 id="addusertokubeconfig">Add User to Kubeconfig</h2>
<p>The signed certificate and private key for the new user can now be added to the kubeconfig file allowing access to the cluster. For this a new context and user will be added.</p>
<p>The easiest way of doing this is to use kubectl to modify the kubeconfig file.</p>
<p>The initial kubeconfig file that has been created by kubeadm has a kubernetes-admin user with full access.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/12/kubeconfig-initial.png" alt="kubeconfig-initial"></p>
<p>We will initially add the credentials for the new salterje user, making use of the signed certificate and private key. A cluster entry will also be added as well as a suitable context to link both together.</p>
<pre><code class="language-yaml">kubectl config set-credentials salterje --client-certificate=salterje.crt \
--client-key=salterje.key --embed-certs
</code></pre>
<pre><code class="language-yaml">kubectl config set-cluster salterje-cluster --server=https://k8s-master:6443 \
--certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs
</code></pre>
<pre><code class="language-yaml">kubectl config set-context salterje-login --user=salterje \
--cluster=salterje-cluster
</code></pre>
<p>These three commands will add a new user, cluster and context to the default kubeconfig file.</p>
<pre><code class="language-yaml">apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s-master:6443
  name: kubernetes
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://k8s-master:6443
  name: salterje-cluster
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: salterje-cluster
    user: salterje
  name: salterje-login
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: salterje
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code></pre>
<p>Once these have been set we can make use of the new salterje-login context.</p>
<pre><code class="language-yaml">kubectl config use-context salterje-login
</code></pre>
<p>To prove the functionality we try getting the running Pods within the cluster</p>
<p><img src="https://myblog.salterje.com/content/images/2020/12/GetPods.png" alt="GetPods"></p>
<p>This proves the new user doesn't have any rights to access anything within the cluster at the moment.</p>
<p>The next stage is to create a role and rolebinding object to link the newly created user.</p>
<h3 id="createarole">Create a Role</h3>
<p>We create a role that allows the getting, listing and watching of Pods but doesn't allow creating them (or viewing any other objects).</p>
<pre><code class="language-yaml">kubectl create role pod-reader --verb=get --verb=list --verb=watch \
--resource=pods --dry-run=client -o yaml &gt; pod-reader.yaml
</code></pre>
<p>This creates the pod-reader.yaml manifest</p>
<pre><code class="language-yaml">apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: pod-reader
rules:
- apiGroups:
  - &quot;&quot;
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
  
</code></pre>
<p>We also need to create a suitable role-binding to link the role to our new user. This generates the pod-reader-binding.yaml manifest.</p>
<pre><code class="language-yaml">kubectl create rolebinding pod-reader-binding --user=salterje --role=pod-reader \
--dry-run=client -o yaml &gt; pod-reader-binding.yaml
</code></pre>
<pre><code class="language-yaml">apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: pod-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: salterje
</code></pre>
<p>In essence the role-binding object links the user to the role that has been created.</p>
<p>In order to create the objects the manifests are run in the normal way, but we must first go back to the kubernetes-admin context to run the commands.</p>
<pre><code class="language-yaml">kubectl config use-context kubernetes-admin@kubernetes
kubectl create -f pod-reader.yaml
kubectl create -f pod-reader-binding.yaml
</code></pre>
<p>Now going back to the salterje-login context it can be proven that we are able to view Pods but not view services or do anything else within the cluster.</p>
<pre><code class="language-yaml">kubectl config use-context salterje-login
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/12/getpods-salterje.png" alt="getpods-salterje"></p>
<h2 id="conclusions">Conclusions</h2>
<p>The use of RBAC within Kubernetes allows the linking of users with roles within the cluster. This allows a user to be granted the minimum privilege level needed via the associated role.</p>
<p>However Kubernetes doesn't have a concept of a user object and relies on the certificates and keys that are sent with the api call. This lab has shown how to create the necessary key and certificate, as well as how to modify the kubeconfig file to create a new context that uses them.</p>
<p>Once the incoming request has been authenticated, it is authorized against a role, which must be linked to a user with a suitable role-binding object.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Basic Overview of Multipass]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Multipass is a tool created by Canonical that makes it easy to create Virtual Machines which by default run the latest LTS version of Ubuntu. On Linux it uses KVM as its hypervisor and, if it isn't already present on the Host, it can be installed by a simple snap command.</p>]]></description><link>https://myblog.salterje.com/basic-overview-of-multipass/</link><guid isPermaLink="false">5fc3cb855064d9031b5933c1</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Tue, 01 Dec 2020 21:47:09 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Multipass is a tool created by Canonical that makes it easy to create Virtual Machines which by default run the latest LTS version of Ubuntu. On Linux it uses KVM as its hypervisor and, if it isn't already present on the Host, it can be installed by a simple snap command.</p>
<pre><code class="language-yaml">sudo snap install multipass
</code></pre>
<p>In this post I'll describe how to create a single Virtual Machine running Nginx and use a cloud-init file to modify the newly created instance.</p>
<p>This will be used to create a couple of additional users, upgrade the machine, install the latest Nginx package and start the service. As well as creating the users we'll also upload some Public keys to allow ssh access (although Multipass does create a default user and uploads a Public key from the Host).</p>
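<p>As a sketch of the kind of cloud-init file meant here (the username, key material and exact package list are illustrative assumptions, not the actual file used):</p>
<pre><code class="language-yaml">#cloud-config
users:
  - name: ansible
    groups: sudo
    shell: /bin/bash
    sudo: 'ALL=(ALL) NOPASSWD:ALL'
    ssh_authorized_keys:
      - ssh-rsa AAAA... user@host
package_update: true
package_upgrade: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
</code></pre>
<p>The file would then be passed at launch with the --cloud-init flag.</p>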
<p>We'll use one of the extra users to confirm that the Nginx service is running and, if necessary, restart it using Ansible from the Host machine.</p>
<h2 id="usingmultipass">Using Multipass</h2>
<p>Multipass has been designed to be quick and easy to use and has a relatively simple set of commands.</p>
<pre><code class="language-yaml">multipass --help
Usage: multipass [options] &lt;command&gt;
Create, control and connect to Ubuntu instances.

This is a command line utility for multipass, a
service that manages Ubuntu instances.

Options:
  -h, --help     Display this help
  -v, --verbose  Increase logging verbosity. Repeat the 'v' in the short option
                 for more detail. Maximum verbosity is obtained with 4 (or more)
                 v's, i.e. -vvvv.

Available commands:
  delete    Delete instances
  exec      Run a command on an instance
  find      Display available images to create instances from
  get       Get a configuration setting
  help      Display help about a command
  info      Display information about instances
  launch    Create and start an Ubuntu instance
  list      List all available instances
  mount     Mount a local directory in the instance
  purge     Purge all deleted instances permanently
  recover   Recover deleted instances
  restart   Restart instances
  set       Set a configuration setting
  shell     Open a shell on a running instance
  start     Start instances
  stop      Stop running instances
  suspend   Suspend running instances
  transfer  Transfer files between the host and instances
  umount    Unmount a directory from an instance
  version   Show version details
</code></pre>
<p>To find out whether any Multipass machines are running, and which version of Multipass is installed, it is a simple case of running multipass ls and multipass version.</p>
<pre><code class="language-yaml">multipass ls
No instances found.

multipass version
multipass  1.5.0
multipassd 1.5.0
</code></pre>
<p>To launch a basic machine we use the following.</p>
<pre><code class="language-yaml">multipass launch -n my-test-vm
Launched: my-test-vm

multipass ls                
Name                    State             IPv4             Image
my-test-vm              Running           10.205.150.176   Ubuntu 20.04 LTS

multipass info my-test-vm 
Name:           my-test-vm
State:          Running
IPv4:           10.205.150.176
Release:        Ubuntu 20.04.1 LTS
Image hash:     bb0a97102288 (Ubuntu 20.04 LTS)
Load:           1.76 0.58 0.20
Disk usage:     1.2G out of 4.7G
Memory usage:   140.5M out of 981.2M

</code></pre>
<p>This creates a basic Virtual Machine, based upon the latest 20.04 LTS, with a default disk size of 5G, 1G of memory and 1 CPU.</p>
<p>It is simple to either run a command within the newly created machine or log on via a shell session.</p>
<p>Commands can be run using</p>
<pre><code class="language-yaml">multipass exec my-test-vm -- free -h
</code></pre>
<p>An ssh session to the Virtual Machine is as easy as running</p>
<pre><code class="language-yaml">multipass shell my-test-vm
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/11/Multipass_Command.png" alt="Multipass_Command"></p>
<p>This makes it very easy to create test machines which can just as easily be deleted and purged from the Host.</p>
<pre><code class="language-yaml">multipass delete my-test-vm
multipass purge
</code></pre>
<p>This removes any trace of the created Virtual Machines.</p>
<p>It is also possible to add arguments to define disk size, memory, CPU and the base image to be used.</p>
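<p>As a sketch, a machine with custom resources based on a specific image release could be launched as follows (the machine name and the sizes here are illustrative values, not from the lab).</p>
<pre><code class="language-yaml"># list the base images that are available to launch from
multipass find

# launch a named machine with custom CPU, memory, disk and image (example values)
multipass launch 20.04 --name big-vm --cpus 2 --mem 2G --disk 10G
</code></pre>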
<p>It's also possible to use a cloud-init file to further modify the base image, and this is what we will use to create our Virtual Machine with extra users, uploaded keys and Nginx installed.</p>
<pre><code class="language-yaml">#cloud-config                                                                                                                                 
hostname: nginx-node

users:
  - name: test
    groups: sudo 
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC1fGw9YMWDZQwOzqcSEsGDgxWJYN0sNhFzn4R2i4OmFxW0+GGV1qKM9UdE/N1PzN3bBDXPWOvpQMRMFH3rkRdu8K2+wY6BFSOTkMqKkG9Q5ityO5uxQ8ReOaQeVww8+64ye1UWIR7/eGt2D/cA1Ah6tttT75ZMBXasiH/9Phsg8vIff8Oc4O39IEdFKgj8a60dE8SfbC7ACvWVokOB9BTc6kCoVoPz1H/FJFdl92ZsPGq0ELHpH5bZmOLNcmU/To1WG5nEGXsPi/A2bYcUKbtT/j0UfxtBD9qmAPYh5cOgyiNTImd0aRs8gWYWG4aSsIbMlRgrkYmIna0sp7rk72NDJmk5iUD/aD4jBTkmDLhpt/0UJvzNWHsrKhe9arYnkO1d3IsH7GcbajFjAlJP1CEXOnUjGmhIjNjufZ5Eva0V9Yap6tpnZlsV9ZO3Is1F/uin89dRvJK7tzLwQv0OCf32BDO1MkRPSrzMUTSfqG+M7tt+rAetNubEzBQ7/tkbcYQSDfTu1ptPuktN2L127gT51mM7I5Ciub5HYsOHuBeKQR/Vn7HUIPY5tgzIktXx6XqvTsrOWDa0LaFWkKhFr4OaE8yBGY32xVgBCN7ovKY/OndZ2eJ++xKZQ6IOrkPweNxtjm8rPezjfaDIxl4QFAGcFg2IIvZTMt99+IPN7zxLaQ== test@test.com

  - name: ansible
    groups: sudo
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDL9WIfSsSP+4vKCCP+al+9dSgfJjXtDUKhAhG6r/Rh1pMevrZRz4FH7xWuJs5B5GLnRBkp9cfrPHHzqnR6KtnmAhV124M/9hiNdWnr2J+p2vBHk1BehO1ZCO6TVom8AXS1zCIsdN9JcvnqbaUuZEEiRXUWIdrH3EZyVzwBpvBCJ9k0hZ/nucLMdq04c9pVA49SnYh4UQv74QowjHk9NG5lgntzuzw3HoU5iJw610taTNGirX8ovCvA/C3cYpKYWPXVJrykFyMxARqbDbqokyeozIHTiHN/tg8ZDKd5rPD78UV3s4TKoWaZEeH6FkbsQcepJCkRqpl7JsnqtCXpKtd20yMA4UnNdwxOfIUL+O+VPWzCpe6VqOsnr7iEoh1jKlF1Y5e7YK7L6yk8Flmw9P60UBoJqICshcH0mKVpS7J3G4zMWM3p6Rk2EwY8KVcDzydy+7mNqeXUUQQg9jMItMfpfTCZz2nRdoVCHctn/4LFDyTLPFk2WC2M/+3+cc6QW1PuqQavJGBr9jszPmPY8+3VJEATlwy0UOfx/A+LQeosk2xPZeff5esZw/Tl17qWuJxVMZjLzlqh8SB/wImaGrDTxvFsU3LKEc0+Gm3crdxcU0/vltLMOIODbsRxc8KBuFtLql13fHCI+OpJ/YlNjKUamUoSfPkl6ZLuHkEvIeWOPw== ansible user
         
package_update: true
package_upgrade: true
packages:
  - nginx
  - curl
  - htop
runcmd:
  - systemctl enable nginx
  - systemctl start nginx

final_message: &quot;Complete&quot;    
</code></pre>
<p>The YAML file has all the details needed to create two new users (test and ansible), add them as sudo users with no password and upload a public key when the Virtual Machine is first created.</p>
<p>It will also update and upgrade all packages as well as installing nginx, htop and curl.</p>
<p>The final lines enable and start the Nginx service.</p>
<h2 id="creatingourmachine">Creating our Machine</h2>
<p>The cloud-init file is used by adding the appropriate --cloud-init argument to the launch command.</p>
<pre><code class="language-yaml">multipass launch --cpus 1 --disk 20G --mem 1g --name nginx-node --cloud-init ./nginx-cloud-init.yaml 
</code></pre>
<p>This creates a Virtual Machine called nginx-node.</p>
<pre><code class="language-yaml">multipass ls
Name                    State             IPv4             Image
nginx-node              Running           10.205.150.231   Ubuntu 20.04 LTS

multipass info nginx-node 
Name:           nginx-node
State:          Running
IPv4:           10.205.150.231
Release:        Ubuntu 20.04.1 LTS
Image hash:     bb0a97102288 (Ubuntu 20.04 LTS)
Load:           0.05 0.41 0.27
Disk usage:     1.4G out of 19.2G
Memory usage:   147.6M out of 981.2M
</code></pre>
<p>We can confirm the test user has been created by logging onto the machine using the appropriate private key from the Host machine.</p>
<pre><code class="language-yaml">ssh test@10.205.150.231 -i nginx_rsa
The authenticity of host '10.205.150.231 (10.205.150.231)' can't be established.
ECDSA key fingerprint is SHA256:9BMIZHBH76VHJzhjYHlC7BbQl/58eCxGvUI26aUWn/I.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/salterje/.ssh/known_hosts).

Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-54-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sun Nov 29 17:09:21 GMT 2020

  System load:  0.0               Processes:             104
  Usage of /:   7.3% of 19.21GB   Users logged in:       0
  Memory usage: 21%               IPv4 address for ens4: 10.205.150.231
  Swap usage:   0%


0 updates can be installed immediately.
0 of these updates are security updates.



The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user &quot;root&quot;), use &quot;sudo &lt;command&gt;&quot;.
See &quot;man sudo_root&quot; for details.

test@nginx-node:~$

</code></pre>
<p>This confirms the creation of one of the extra users, and browsing to the machine's IP address from the Host shows the default Nginx test page.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/11/NginxTestPage.png" alt="NginxTestPage"></p>
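<p>The same check can also be made from the Host's command line, for example with curl against the instance's IP address.</p>
<pre><code class="language-yaml"># fetch just the response headers from the new instance
curl -I http://10.205.150.231
</code></pre>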
<h2 id="checkingconnectionusingansible">Checking Connection Using Ansible</h2>
<p>Our last checks will confirm that the ansible user has been created, the appropriate key uploaded, and sudo rights established, allowing the newly created machine to be monitored and controlled via Ansible running on the Host.</p>
<p>A simple inventory file on the Host uses the IP address of our newly created Virtual Machine and points at the appropriate private key matching the uploaded Public Key for the ansible user.</p>
<pre><code class="language-yaml">[nginx]                                                                                                                                       
10.205.150.231

[multi:children]
nginx

[multi:vars]
ansible_user=ansible
ansible_ssh_private_key_file=/home/salterje/Documents/M/Multipass/NGINX/ansible
</code></pre>
<p>To test the inventory file as well as the ssh permissions a simple ping can be sent using Ansible.</p>
<pre><code class="language-yaml">ansible -i test-ansible.yaml nginx -m ping
10.205.150.231 | SUCCESS =&gt; {
    &quot;ansible_facts&quot;: {
        &quot;discovered_interpreter_python&quot;: &quot;/usr/bin/python3&quot;
    }, 
    &quot;changed&quot;: false, 
    &quot;ping&quot;: &quot;pong&quot;
}
</code></pre>
<p>This proves that Ansible has the correct ssh access and that the Virtual Machine is active.</p>
<p>We'll now run a simple playbook to ensure the Nginx service is running, using our test inventory file.</p>
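<p>The playbook itself isn't shown above; a minimal sketch consistent with the task name in the run output might look like this, using Ansible's service module to ensure Nginx is started.</p>
<pre><code class="language-yaml">---
- hosts: all
  become: true
  tasks:
    - name: start nginx
      service:
        name: nginx
        state: started
</code></pre>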
<pre><code class="language-yaml">ansible-playbook nginx-playbook.yaml -i test-ansible.yaml 

PLAY [all] ***********************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [10.205.150.231]

TASK [start nginx] ***************************************************************************************************************************
ok: [10.205.150.231]

PLAY RECAP ***********************************************************************************************************************************
10.205.150.231             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
</code></pre>
<p>This proves the Nginx service is running; the playbook would also restart the service if it were found to be stopped. This can be proven by stopping the service and re-running the playbook.</p>
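<p>One way to stop the service from the Host, assuming the instance name used earlier, is via multipass exec.</p>
<pre><code class="language-yaml"># stop nginx inside the VM before re-running the playbook
multipass exec nginx-node -- sudo systemctl stop nginx
</code></pre>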
<pre><code class="language-yaml">ansible-playbook nginx-playbook.yaml -i test-ansible.yaml 

PLAY [all] ***********************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [10.205.150.231]

TASK [start nginx] ***************************************************************************************************************************
changed: [10.205.150.231]

PLAY RECAP ***********************************************************************************************************************************
10.205.150.231             : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

salterje@Lenovo-Z500:~/Documents/M/Multipass/NGINX$ ansible-playbook nginx-playbook.yaml -i test-ansible.yaml 

PLAY [all] ***********************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************
ok: [10.205.150.231]

TASK [start nginx] ***************************************************************************************************************************
ok: [10.205.150.231]

PLAY RECAP ***********************************************************************************************************************************
10.205.150.231             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  

</code></pre>
<h2 id="conclusions">Conclusions</h2>
<p>Multipass is a very useful tool to easily spin up Development machines on a Host with a couple of simple commands.</p>
<p>It can be combined with cloud-init to modify the base images to install packages, create users and upload keys. The same cloud-init files can then be used for the creation of cloud instances, meaning that testing and verification can be done locally.</p>
<p>In many ways Multipass performs a similar function to Vagrant but is probably easier to use for simple tasks. For more complicated tasks other provisioners such as Ansible can be used in combination.</p>
<p>I have started using Multipass where previously I have used Vagrant combined with Virtual Box and it has proven a quick and easy way of building combinations of machines for a range of testing scenarios.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Busybox as a web CGI Server within Kubernetes]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>It is always useful to have basic tools for testing when working with Pods and services within Kubernetes. In this post I'll be taking a look at using Busybox, the 'Swiss Army knife' of embedded Linux, to run a lightweight HTTP server.</p>
<p>The aim of this lab is to</p>]]></description><link>https://myblog.salterje.com/busybox-as-an-http-server/</link><guid isPermaLink="false">5fb012435064d9031b59329d</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Sun, 15 Nov 2020 17:08:18 GMT</pubDate><media:content url="https://myblog.salterje.com/content/images/2020/11/BusyboxLab-1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://myblog.salterje.com/content/images/2020/11/BusyboxLab-1.png" alt="Busybox as a web CGI Server within Kubernetes"><p>It is always useful to have basic tools for testing when working with Pods and services within Kubernetes. In this post I'll be taking a look at using Busybox, the 'Swiss Army knife' of embedded Linux, to run a lightweight HTTP server.</p>
<p>The aim of this lab is to build a simple Pod that will display basic information such as IP address, hostname and environmental values that are sent to it. This can be used to confirm the functionality of an attached service load balancing between the Pods within the Deployment.</p>
<p>The information that is displayed will be generated by a simple bash script that displays the hostname and IP address of the running container as well as the current date and an environmental value passed to it.</p>
<p>As well as this there is a simple CSS file that will be used to add a little colour. These files will be mounted into the Busybox container using ConfigMaps to keep things simple.</p>
<h2 id="creatingtheconfigmaps">Creating the ConfigMaps</h2>
<p>Before the Deployment can be created, the two ConfigMaps must exist. There are various ways of creating the two necessary objects; for this lab, suitable files were generated in a directory and used to create the configmaps directly.</p>
<p>The following files were used:</p>
<h3 id="cgitablesh">cgi-table.sh</h3>
<pre><code>#!/bin/sh
My_date=$(date)
My_host=$(hostname)
My_env=$(env)
My_var=$(echo &quot;$MY_VAR&quot;)
My_IP=$(ip add show eth0 | grep inet)

echo &quot;Content-type: text/html&quot; # Tells the browser what type of content
echo &quot;&quot; # Empty Line

echo '&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/css/mystyle.css&quot;&gt;' # single quotes keep the attribute quotes intact
echo ''
echo '&lt;table style=&quot;width:100%&quot; border=&quot;1&quot;&gt;'
echo ''
echo &quot;&lt;tr&gt;&quot;
echo &quot;  &lt;th&gt;Information&lt;/th&gt;&quot;
echo &quot;  &lt;th&gt;Value&lt;/th&gt;&quot;
echo &quot;&lt;/tr&gt;&quot;
echo &quot;&lt;tr&gt;&quot;
echo &quot;  &lt;td&gt;Hostname&lt;/td&gt;&quot;
echo &quot;  &lt;td&gt;$My_host&lt;/td&gt;&quot;
echo &quot;&lt;/tr&gt;&quot;
echo &quot;&lt;tr&gt;&quot;
echo &quot;  &lt;td&gt;Date&lt;/td&gt;&quot;
echo &quot;  &lt;td&gt;$My_date&lt;/td&gt;&quot;
echo &quot;&lt;/tr&gt;&quot;
echo &quot;&lt;tr&gt;&quot;
echo &quot;  &lt;td&gt;IP Address&lt;/td&gt;&quot;
echo &quot;  &lt;td&gt;$My_IP&lt;/td&gt;&quot;
echo &quot;&lt;/tr&gt;&quot;
echo &quot;&lt;tr&gt;&quot;
echo &quot;  &lt;td&gt;My Var&lt;/td&gt;&quot;
echo &quot;  &lt;td&gt;$My_var&lt;/td&gt;&quot;
echo &quot;&lt;/tr&gt;&quot;
echo &quot;&lt;/table&gt;&quot;
</code></pre>
<h3 id="mystylecss">mystyle.css</h3>
<pre><code>body {                                                                                                                                        
  background-color: lightblue;
}

h1 {
  color: navy;
  margin-left: 20px;
}

p {
  color: blue;
}

table,th,td {
  border: 1px solid black;
  width: 75%;
  text-align: left;
  padding: 15px;
}
</code></pre>
<p>Both of the configmaps were built by running the following commands:</p>
<pre><code class="language-yaml">kubectl create configmap cgi-table.sh --from-file=cgi-table.sh
kubectl create configmap mystyle.css --from-file=mystyle.css
</code></pre>
<p>Once this is done, the existence of the configmaps can be confirmed.</p>
<pre><code class="language-yaml">vagrant@k8s-master:~/httpd$ kubectl get configmaps 
NAME           DATA   AGE
cgi-table.sh   1      94m
mystyle.css    1      151m
</code></pre>
<p>The configmaps must be created before the Pods are deployed, or the Pods will remain stuck until the missing volumes can be mounted.</p>
<h2 id="addingthedeployment">Adding the Deployment</h2>
<p>The Deployment being used is based upon a Busybox image and will mount the configMaps to add the necessary script and CSS file. It also has a command argument to run the httpd service to listen on port 8080.</p>
<p>An environmental variable is also being added to the container and will be displayed by the httpd server.</p>
<p>The manifest file is</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-busybox-httpd
  name: my-busybox-httpd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-busybox-httpd
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-busybox-httpd
    spec:
      containers:
      - image: busybox:1.28
        name: busybox
        command: [&quot;/bin/sh&quot;,&quot;-c&quot;]
        args:
        - httpd -p 8080 -f -v -h /var/www/;
        volumeMounts:
        - name: cgi-script
          mountPath: /var/www/cgi-bin
        - name: mystyle-css
          mountPath: /var/www/css
        env:
        - name: MY_VAR
          value: &quot;This is an environmental variable&quot;
        resources: {}
      volumes:
        - name: cgi-script
          configMap:
              name: cgi-table.sh
              defaultMode: 0777
        - name: mystyle-css
          configMap:
             name: mystyle.css
status: {}
</code></pre>
<p>The Deployment is created using the manifest file.</p>
<pre><code class="language-yaml">kubectl create -f http-busybox.yaml
</code></pre>
<p>As the cgi-script must be executable, the permissions of the mounted volume must be set accordingly, which is the purpose of the defaultMode line within the manifest.</p>
<h2 id="exposingthedeployment">Exposing the Deployment</h2>
<p>After the Deployment has been created it can be exposed using a simple Nodeport to allow access from outside the cluster.</p>
<pre><code class="language-yaml">kubectl expose deployment my-busybox-httpd --port=8080 --type=NodePort
</code></pre>
<p>It is important to ensure that the Nodeport connects to the correct port that has been exposed within the container, which in this case has been set as 8080.</p>
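<p>The expose command is roughly equivalent to the following Service manifest (a sketch; the external nodePort itself is allocated automatically from the cluster's NodePort range unless explicitly set).</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: my-busybox-httpd
spec:
  type: NodePort
  selector:
    app: my-busybox-httpd
  ports:
  - port: 8080
    targetPort: 8080
</code></pre>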
<h2 id="scalingthedeploymentto3replicas">Scaling the Deployment to 3 Replicas</h2>
<p>The manifest only creates a single replica of the Pod, so the final stage is to scale this up to 3 replicas, which will allow verification of the load balancing.</p>
<pre><code class="language-yaml">kubectl scale deployment my-busybox-httpd --replicas=3 --record
</code></pre>
<h2 id="confirmingeverythingisrunning">Confirming Everything is Running</h2>
<p>The next step is to confirm that everything is running</p>
<p><img src="https://myblog.salterje.com/content/images/2020/11/Busyboxhttpd.png" alt="Busybox as a web CGI Server within Kubernetes"></p>
<p>Once this is done the created cgi-script can be viewed in a browser from outside the cluster using the relevant port that has been exposed by the Nodeport service.</p>
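<p>Busybox httpd executes scripts found under the cgi-bin directory of its home directory (/var/www here), so the page is reached with a URL of the following form, where the node IP and allocated NodePort are placeholders.</p>
<pre><code class="language-yaml">http://&lt;node-ip&gt;:&lt;node-port&gt;/cgi-bin/cgi-table.sh
</code></pre>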
<p><img src="https://myblog.salterje.com/content/images/2020/11/browser1.png" alt="Busybox as a web CGI Server within Kubernetes"></p>
<p>It is important that the correct URL is used to display the output of the script. The browser can be refreshed to prove that the traffic is coming from different Pods via the Nodeport service.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/11/browser2.png" alt="Busybox as a web CGI Server within Kubernetes"></p>
<p>It can be seen from the hostname being displayed, as well as the IP address, that the traffic is being served by different containers. It's also being served by containers running on different nodes within the cluster (as can be determined from the IP address of the container).</p>
<p>The environmental variable has also been sent to all the relevant Pods and remains the same.</p>
<h2 id="conclusions">Conclusions</h2>
<p>This lab has shown that it is possible to serve simple dynamic web pages with a simple Busybox image running httpd. The page being served is actually a cgi script running within the container, which returns its output over http to the end browser.</p>
<p>The necessary script and css file have been mounted into the container using configmaps for ease of use. However, the script generating the page had to have its permissions set appropriately within the container to allow it to execute.</p>
<p>The Deployment was created initially with a single replica and was then dynamically scaled up. This allowed confirmation that the traffic was indeed being load balanced by the attached Nodeport service.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Creating Dynamic Routers in Linux]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this post we will create two Virtual Machines that will act as Routers running OSPF using FRRouting which is a fork of the older Quagga package. The Routers will have one NIC acting as a gateway for PC1 and PC2 and another that connects the two together.</p>
<p>FRRouting (FRR)</p>]]></description><link>https://myblog.salterje.com/creating-dynamic-routers-in-linux/</link><guid isPermaLink="false">5f9d58b65064d9031b5930f0</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Mon, 02 Nov 2020 14:15:50 GMT</pubDate><media:content url="https://myblog.salterje.com/content/images/2020/11/OSPFLab.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://myblog.salterje.com/content/images/2020/11/OSPFLab.png" alt="Creating Dynamic Routers in Linux"><p>In this post we will create two Virtual Machines that will act as Routers running OSPF using FRRouting which is a fork of the older Quagga package. The Routers will have one NIC acting as a gateway for PC1 and PC2 and another that connects the two together.</p>
<p>FRRouting (FRR) is an IP routing protocol suite for Linux and Unix platforms which includes protocol daemons for BGP, IS-IS, LDP, OSPF, PIM, and RIP.</p>
<p>In this Lab the software will be installed on a pair of Virtual Machines acting as Routers. They will be set up as OSPF neighbors and will distribute the networks that the two other Virtual Machines, PC1 and PC2, sit on within a single OSPF area.</p>
<h2 id="settingstaticroutes">Setting Static Routes</h2>
<p>As the Virtual Machines have been created using Vagrant, they have another interface for management, which serves as a NAT interface to the wider internet and for ssh management from the Host. For this reason a static route will be created on both to send all traffic for the 192.168.0.0/16 range of addresses via the Host-only interface.</p>
<p>The static route is created as a new entry in the netplan configuration within /etc/netplan/50-vagrant.yaml</p>
<pre><code class="language-yaml">vagrant@PC2:~$ cat /etc/netplan/50-vagrant.yaml 
---
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s8:
      addresses:
      - 192.168.201.1/24
      routes:
      - to: 192.168.0.0/16
        via: 192.168.201.254
        
        
vagrant@PC1:/etc/netplan$ cat /etc/netplan/50-vagrant.yaml 
---
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s8:
      addresses:
      - 192.168.200.1/24
      routes:
      - to: 192.168.0.0/16
        via: 192.168.200.254

</code></pre>
<p>The route is slightly different on each PC, but the result is that all traffic destined for any address in the 192.168.0.0/16 range is sent via the enp0s8 interface, using the enp0s9 interface on the Router Virtual Machines as the gateway.</p>
<pre><code>vagrant@PC1:$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.0.0/16 via 192.168.200.254 dev enp0s8 proto static 
192.168.200.0/24 dev enp0s8 proto kernel scope link src 192.168.200.1 


vagrant@PC2:~$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.0.0/16 via 192.168.201.254 dev enp0s8 proto static 
192.168.201.0/24 dev enp0s8 proto kernel scope link src 192.168.201.1 

</code></pre>
<p>The routing tables on the two Virtual Machines confirm that the static routes are in place. It can also be seen that the two machines can no longer communicate, because they sit on different networks with nothing yet routing between them.</p>
<p>This can be confirmed by running tracepath, which shows the next-hop address and the lack of any connection beyond it.</p>
<pre><code>vagrant@PC1:~$ tracepath -n 192.168.201.1
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.200.254                                       0.667ms 
 1:  192.168.200.254                                       0.613ms 
 2:  no reply
 3:  no reply
 4:  no reply

</code></pre>
<p>This is as expected, and we must now configure the other two Virtual Machines to route between the 192.168.200.0/24 and 192.168.201.0/24 networks, allowing communication between the two end Virtual Machines.</p>
<h2 id="installationoffrrrouting">Installation of FRR Routing</h2>
<p>There are numerous ways of installing FRRouting but in the case of this Lab we will install the snap version of the application. This is done using a simple snap command on each of the Virtual Machines that will be acting as our routers.</p>
<pre><code>sudo snap install frr
</code></pre>
<p>Once the snap has been installed, it is necessary to run the following to connect the snap's network-control interface, giving it the necessary control over networking.</p>
<pre><code>sudo snap connect frr:network-control core:network-control
</code></pre>
<p>Once this is done, we also need to enable the forwarding of packets within the kernel.</p>
<p>This is done by un-commenting the following lines in /etc/sysctl.conf</p>
<pre><code># Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

# Uncomment the next line to enable packet forwarding for IPv6
#  Enabling this option disables Stateless Address Autoconfiguration
#  based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1

</code></pre>
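<p>The change can be applied without a reboot by reloading the sysctl settings and confirming the value; a sketch, to be run on both Routers.</p>
<pre><code class="language-yaml"># reload /etc/sysctl.conf and verify forwarding is on
sudo sysctl -p
sysctl net.ipv4.ip_forward
</code></pre>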
<p>The configuration of FRRouting is very similar to that of a Cisco IOS Router, and a vty session can be started by running</p>
<pre><code>vagrant@R1:~$ sudo frr.vtysh 

Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

R1# 
R1# 
R1# 
R1# configure terminal 
R1(config)# router ospf
R1(config-router)# network 192.168.100.0/24 area 0
R1(config-router)# network 192.168.200.0/24 area 0
R1(config-router)# 

</code></pre>
<pre><code>vagrant@R2:~$ sudo frr.vtysh 

Hello, this is FRRouting (version 7.2.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.

R2# configure terminal 
R2(config)# router ospf
R2(config-router)# network 192.168.100.0/24 area 0
R2(config-router)# network 192.168.201.0/24 area 0
</code></pre>
<p>The commands above enable OSPF and place both connected networks into OSPF area 0.</p>
<p>Once this has been done on both Routers, the creation of an adjacency can be confirmed.</p>
<pre><code>R2# show ip ospf neighbor 

Neighbor ID     Pri State           Dead Time Address         Interface            RXmtL RqstL DBsmL
1.1.1.1           1 Full/DR           36.825s 192.168.100.1   enp0s8:192.168.100.2     0     0     0
</code></pre>
<p>We can also check the routing table within FRRouting.</p>
<pre><code>R2# show ip route 
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       &gt; - selected route, * - FIB route, q - queued route, r - rejected route

K&gt;* 0.0.0.0/0 [0/100] via 10.0.2.2, enp0s3, src 10.0.2.15, 09:16:02
C&gt;* 10.0.2.0/24 is directly connected, enp0s3, 09:16:02
K&gt;* 10.0.2.2/32 [0/100] is directly connected, enp0s3, 09:16:02
O   192.168.100.0/24 [110/100] is directly connected, enp0s8, 00:09:44
C&gt;* 192.168.100.0/24 is directly connected, enp0s8, 09:16:06
O&gt;* 192.168.200.0/24 [110/200] via 192.168.100.1, enp0s8, 00:09:34
O   192.168.201.0/24 [110/100] is directly connected, enp0s9, 09:15:55
C&gt;* 192.168.201.0/24 is directly connected, enp0s9, 09:16:06
</code></pre>
<p>As with Cisco IOS, we can determine that the Router has had routes for the remote networks injected into its routing table by OSPF, via the correct next-hop address.</p>
<p>We can also confirm the routing table directly running on the Virtual Machine.</p>
<pre><code>vagrant@R2:~$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.100.0/24 dev enp0s8 proto kernel scope link src 192.168.100.2 
192.168.200.0/24 via 192.168.100.1 dev enp0s8 proto 188 metric 20 
192.168.201.0/24 dev enp0s9 proto kernel scope link src 192.168.201.254 
</code></pre>
<p>This corresponds to the information displayed within FRRouting.</p>
<h2 id="endtoendtesting">End to End Testing</h2>
<p>We can now confirm the End to End connectivity via a combination of checking routing tables on PC1 and PC2 as well as running the tracepath command to confirm the exact path the traffic takes.</p>
<pre><code>vagrant@PC1:~$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.0.0/16 via 192.168.200.254 dev enp0s8 proto static 
192.168.200.0/24 dev enp0s8 proto kernel scope link src 192.168.200.1 
</code></pre>
<pre><code>vagrant@PC2:~$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.0.0/16 via 192.168.201.254 dev enp0s8 proto static 
192.168.201.0/24 dev enp0s8 proto kernel scope link src 192.168.201.1
</code></pre>
<p>Tracepath can also be run on PC1 and PC2.</p>
<pre><code>vagrant@PC1:~$ tracepath -n 192.168.201.1
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.200.254                                       0.500ms 
 1:  192.168.200.254                                       0.385ms 
 2:  192.168.100.2                                         0.945ms 
 3:  192.168.201.1                                         1.313ms reached
     Resume: pmtu 1500 hops 3 back 3 
</code></pre>
<pre><code>vagrant@PC2:~$ tracepath -n 192.168.200.1
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.201.254                                       0.543ms 
 1:  192.168.201.254                                       0.442ms 
 2:  192.168.100.1                                         0.850ms 
 3:  192.168.200.1                                         1.262ms reached
     Resume: pmtu 1500 hops 3 back 3 
</code></pre>
<p>This confirms PC1 and PC2 have full connectivity via R1 and R2, which are acting as Routers by forwarding the packets to the correct interfaces.</p>
<h2 id="removingospfadjacency">Removing OSPF Adjacency</h2>
<p>The final test to prove that R1 and R2 have the correct routes injected into their routing tables via OSPF is to remove the adjacency. This is easily done by removing one of the interfaces from OSPF on one of the Routers, causing the adjacency to be dropped.</p>
<p>This will be done on R1 by a suitable config change with FRRouting.</p>
<pre><code>R1# configure terminal 
R1(config)# router ospf
R1(config-router)# no network 192.168.100.0/24 area 0
R1(config-router)# exit
R1(config)# exit
R1# show ip ospf neighbor 

Neighbor ID     Pri State           Dead Time Address         Interface            RXmtL RqstL DBsmL

R1# show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, D - SHARP,
       F - PBR, f - OpenFabric,
       &gt; - selected route, * - FIB route, q - queued route, r - rejected route

K&gt;* 0.0.0.0/0 [0/100] via 10.0.2.2, enp0s3, src 10.0.2.15, 00:31:38
C&gt;* 10.0.2.0/24 is directly connected, enp0s3, 00:31:38
K&gt;* 10.0.2.2/32 [0/100] is directly connected, enp0s3, 00:31:38
C&gt;* 192.168.100.0/24 is directly connected, enp0s8, 00:31:43
O   192.168.200.0/24 [110/100] is directly connected, enp0s9, 00:31:43
C&gt;* 192.168.200.0/24 is directly connected, enp0s9, 00:31:43
</code></pre>
<p>It can be seen that removing the enp0s8 interface from OSPF causes the adjacency to drop and the route to the remote network to be withdrawn.</p>
<p>This can also be seen as a change in the Routing Table within the Virtual Machine itself.</p>
<pre><code>vagrant@R1:~$ ip route
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100 
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100 
192.168.100.0/24 dev enp0s8 proto kernel scope link src 192.168.100.1 
192.168.200.0/24 dev enp0s9 proto kernel scope link src 192.168.200.254 
</code></pre>
<p>We can also confirm the loss of the End to End connection on PC1 and PC2.</p>
<pre><code>vagrant@PC1:~$ tracepath -n 192.168.201.1
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.200.254                                       0.679ms 
 1:  192.168.200.254                                       0.643ms 
 2:  no reply
 3:  no reply
 4:  no reply
 5:  no reply
</code></pre>
<pre><code>vagrant@PC2:~$ tracepath -n 192.168.200.1
 1?: [LOCALHOST]                      pmtu 1500
 1:  192.168.201.254                                       0.637ms 
 1:  192.168.201.254                                       0.630ms 
 2:  no reply
 3:  no reply
 4:  no reply
</code></pre>
<p>This shows there is no longer a route over the 192.168.100.0/24 network linking R1 and R2.</p>
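<p>Restoring connectivity is simply the reverse of the change made above; a sketch of re-adding the interface network to OSPF on R1 within FRRouting:</p>
<pre><code>R1# configure terminal
R1(config)# router ospf
R1(config-router)# network 192.168.100.0/24 area 0
R1(config-router)# end
</code></pre>
<p>Once the adjacency re-forms the OSPF routes should be re-injected and the tracepath tests should succeed again.</p>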
<h2 id="conclusions">Conclusions</h2>
<p>In this Lab we have set up a pair of Virtual Machines with multiple NICs, running Ubuntu 18.04 with FRRouting installed. These machines have IP forwarding configured, allowing them to act as Routers forwarding IP packets between different IP networks.</p>
<p>OSPF was run as a dynamic routing protocol, which allowed the 192.168.200.0/24 and 192.168.201.0/24 networks to be discovered on both Routers. This allowed the two end machines PC1 and PC2 to communicate via their enp0s8 interfaces.</p>
<p>By removing the adjacency between R1 and R2 it was proven that these routes were indeed exchanged via OSPF.</p>
<p>The actual configuration of FRRouting is very similar to standard Cisco IOS and is fairly easy to set up, once IP forwarding has been configured within the Linux kernel on these machines.</p>
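<p>The IP forwarding mentioned above is a single kernel setting; a minimal sketch of enabling it on Ubuntu 18.04 (assuming sudo access):</p>
<pre><code># Enable forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
# Make the setting persistent across reboots
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
</code></pre>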
<p>While this is a simple example that does not fully exploit what a dynamic routing protocol offers, there is no reason alternative routes for the traffic could not be implemented.</p>
<p>The lab shows the flexibility that open source and Linux can bring, as a machine with multiple NICs can readily be made to act as a router.</p>
<!--kg-card-end: markdown--><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Adding a New Node to a Cluster using Kubeadm]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In this post we are going to use kubeadm to add a newly created Node to an existing cluster.  The Lab consists of a cluster of Virtual Machines running on a Host under Virtualbox which has been built using Vagrant.</p>
<p>This means that each Virtual Machine has two NICs, the</p>]]></description><link>https://myblog.salterje.com/adding-a-new-node-to-a-cluster-using-kubeadm/</link><guid isPermaLink="false">5f9444935064d9031b592ff0</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Sun, 25 Oct 2020 19:29:49 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In this post we are going to use kubeadm to add a newly created Node to an existing cluster.  The Lab consists of a cluster of Virtual Machines running on a Host under Virtualbox which has been built using Vagrant.</p>
<p>This means that each Virtual Machine has two NICs, the first one is used by Vagrant for management and the second is used for inter-host connectivity upon which the CNI overlay is created.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/ClusterSetup-1.png" alt="ClusterSetup-1"></p>
<p>The use of the two NICs within the Virtual Machines means there are several things that need to be done to allow the cluster to be built in the first place using kubeadm, mostly related to ensuring that the api-server connection is via the correct interface.</p>
<p>By default the first interface that is created and used by Vagrant also acts as the default gateway, which is fine as it allows ssh connection from the Host and serves as a NAT interface for wider connection to the internet.</p>
<p>Looking at the IP addresses of the master node, after Docker and all the necessary Kubernetes components have been installed, shows that the enp0s8 interface must be used for the inter-node communications.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/GettingIPAddress.png" alt="GettingIPAddress"></p>
<p>We can also see that the default route is set to the enp0s3 interface.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/IPRoute_initial.png" alt="IPRoute_initial"></p>
<p>By default, when building a cluster using kubeadm, the connections made for the cluster will go via the default gateway, which in our case is the wrong interface. It is therefore necessary to explicitly set the correct IP address that will be used for the api server.</p>
<p>These settings are done when bringing up the master by setting --apiserver-advertise-address using the enp0s8 interface and --control-plane-endpoint using the DNS entry for the master.</p>
<pre><code class="language-bash">kubeadm init --apiserver-advertise-address 192.168.200.20  --control-plane-endpoint k8s-master-1 --upload-certs | tee kubeadm-init.out
</code></pre>
<p>The output of kubeadm init has also been piped to a file to help with fault finding if there are any issues.</p>
<p>Once the kubeadm init command has been run the relevant settings can be seen below.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/kubeadm_output.png" alt="kubeadm_output"></p>
<p>The first thing to do is to create the necessary kubeconfig file in the local user's home directory, which allows kubectl to be run. This is done by copying the necessary files as detailed in the kubeadm output.</p>
<p>Once this is done we can inspect the Pods that have been created and see that the CoreDNS Pods are pending and we can also see that the master Node is not ready.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/kubectl_get_pods.png" alt="kubectl_get_pods"></p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/kubectl_get_nodes.png" alt="kubectl_get_nodes"></p>
<p>These issues can be resolved by installing a suitable CNI network, which we will do by installing weavenet.</p>
<pre><code class="language-yaml">kubectl apply -f &quot;https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&quot;
</code></pre>
<p>Once this is done the CoreDNS Pods will become active and the suitable weavenet Pods will also be created (along with the other necessary objects that weavenet requires).</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/GetDetailsAfterCNI-1.png" alt="GetDetailsAfterCNI-1"></p>
<p>The worker Nodes can now join the cluster using the token that was generated by kubeadm. This is done by connecting to each worker Node, adding a static route to ensure it reaches the kubernetes cluster IP address via the correct interface, and then running the necessary join command.</p>
<p>The api server is run as a service that must be reachable by all the Nodes within the cluster. To confirm the IP addresses of all the running services, the following kubectl command can be run on the master.</p>
<pre><code class="language-bash">kubectl get service --all-namespaces
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/10/api-server-service.png" alt="api-server-service"></p>
<p>This confirms the IP address that needs to be reachable from the worker Nodes.</p>
<pre><code class="language-bash">sudo ip route add 10.96.0.1/32 dev enp0s8
</code></pre>
<p>The static route is very important as without it the worker Node will send its traffic via the default route (which is out of enp0s3), meaning it will never join the cluster.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/JoinCommand.png" alt="JoinCommand"></p>
<p>Once this is done the first worker Node can be confirmed on the master.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/FirstNodeJoined.png" alt="FirstNodeJoined"></p>
<p>The same thing is done for the second Node to allow it to join the cluster.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/2ndNodeJoin.png" alt="2ndNodeJoin"></p>
<p>The initial token that is used to join the Nodes is valid for 24 hours, after which another token must be created to allow a further Node to join the cluster, for example during maintenance or expansion of the cluster.</p>
<h2 id="generateanewtoken">Generate a New Token</h2>
<p>We can easily view the list of tokens on the master by running</p>
<pre><code class="language-bash">kubeadm token list
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/10/TokenList-1.png" alt="TokenList-1"></p>
<p>We must first generate a token and then print the join command associated with the token.</p>
<pre><code class="language-bash">kubeadm token generate
kubeadm token create 1uth3n.fkhct68lnscgdqsr --print-join-command
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/10/CreateToken.png" alt="CreateToken"></p>
<p>The generated command can then be run on the worker Node in the normal way.</p>
<pre><code class="language-bash">sudo kubeadm join k8s-master-1:6443 --token 1uth3n.fkhct68lnscgdqsr     --discovery-token-ca-cert-hash sha256:ed021289ab0fa3cbec56499fc87567d524eeffdcd36b73c5d832f0192664e776
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/10/3RDNodeJoin.png" alt="3RDNodeJoin"></p>
<p>Once this is done the new token and the number of Nodes can be confirmed on the master.</p>
<pre><code class="language-bash">kubeadm token list
kubectl get nodes
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/10/AllNodesAdded.png" alt="AllNodesAdded"></p>
<p>It can be seen that there is another token that has been created and the extra Node has joined the cluster.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Kubeadm is a tool that allows the easy creation of a Kubernetes cluster and will automatically generate all the necessary certificates and keys that are used to bootstrap the cluster.</p>
<p>When the command is first run it will automatically generate the necessary token that can be run on the worker Nodes that will allow them to join the cluster.</p>
<p>The token that is first generated will last for 24 hours before it expires. If there is a need to generate further tokens then kubeadm can be run again and can be used to generate the necessary join command that is run on a newly built Node.</p>
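<p>If a longer-lived token is needed, the TTL can be set explicitly when creating it; a sketch (a TTL of 0 produces a token that never expires, which is best avoided outside a lab):</p>
<pre><code class="language-bash">kubeadm token create --ttl 48h --print-join-command
</code></pre>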
<p>When using multiple interfaces on the Hosts, as happens automatically when using Vagrant, care must be taken to ensure the inter-node communication goes via the correct interface. If this is not done then the Nodes may not join the cluster.</p>
<p>It is also necessary to install a suitable CNI to allow communication within the cluster to work. If this is not done then the Nodes will not become ready and the CoreDNS Pods will remain pending on the master.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Overview of Ingress Resources in Kubernetes]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>By default most inter-component communication within Kubernetes is internal to the cluster without any form of NAT. Pods within the cluster can be created and destroyed at any time meaning inter-pod communication is done via services allowing a static IP address to be created that can be used to reach</p>]]></description><link>https://myblog.salterje.com/overview-of-ingress-resources-in-kubernetes/</link><guid isPermaLink="false">5f81e6ca5064d9031b592e7e</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Jason Salter]]></dc:creator><pubDate>Mon, 12 Oct 2020 16:45:45 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>By default most inter-component communication within Kubernetes is internal to the cluster without any form of NAT. Pods within the cluster can be created and destroyed at any time meaning inter-pod communication is done via services allowing a static IP address to be created that can be used to reach the desired Pods (normally done by the use of suitable selectors).</p>
<p>The use of Ingress Controllers within Kubernetes exposes http and https routes from outside the cluster to services within it. Traffic routing is controlled by rules that are defined on the Ingress resource.</p>
<p>In this Lab we are going to create some Pods and services within our cluster that are going to serve the following domains:</p>
<p><a href="http://website1.example.com">http://website1.example.com</a><br>
<a href="http://website2.example.com">http://website2.example.com</a><br>
<a href="http://whoami.example.com">http://whoami.example.com</a></p>
<p>We will install ingress controllers whose rules, provided by an ingress resource, will route the traffic to the correct internal services. From these services the traffic will reach the desired NGINX Pods serving the relevant website.</p>
<p>The external requests will go via an HA-Proxy load balancer which will send the traffic to the two worker nodes within the cluster.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/IngressClusterSetup.png" alt="IngressClusterSetup"></p>
<p>The Lab consists of 4 Virtual Machines running Ubuntu 18.04 in Virtual Box that are connected by an internal Host network on 192.168.200.0/24.</p>
<p>Once the Lab has been setup the websites will be checked by looking at them in a browser running on the Host running the Virtual Machines.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/IngressOverview-1.png" alt="IngressOverview-1"></p>
<h2 id="ingresscontrollers">Ingress Controllers</h2>
<p>Ingress within Kubernetes is controlled by rules defined in an ingress resource. These rules link incoming http and https requests to services running within the cluster.</p>
<p>The services that are linked will then load balance to the associated endpoints of the Pods. In order to work, the ingress resource must also be supported by ingress controllers.</p>
<p>The Ingress controller is an application that configures an http load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware/cloud load balancer running externally. Different load balancers require different Ingress controller implementations.</p>
<p>There are a number of ingress controllers available with details available at <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/">Kubernetes.io</a>. The ingress controllers are not part of the standard Kubernetes build and actual installation will vary.</p>
<h2 id="installationofnginxingresscontrollers">Installation of NGINX Ingress Controllers</h2>
<p>A well known and popular Ingress Controller is provided by NGINX and this is the version that will be used in the lab. The installation will be done by cloning the relevant github software and running the appropriate manifests.</p>
<p>The detailed instructions can be found at <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/">NGINX Ingress Controller Site</a></p>
<p>The ingress controllers are created as Pods running within the cluster in their own namespace. The various manifest files that are run create the ingress controller and generate the necessary service accounts.</p>
<p>The process is carried out on the master node within the cluster.</p>
<pre><code class="language-bash">$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v1.8.1
</code></pre>
<p>The NGINX Ingress controller is actually deployed in its own namespace, which means that all the components can be easily removed by simply deleting the namespace.</p>
<p>The first step is to run the manifest to create the namespace and service account for the controller.</p>
<pre><code class="language-bash">kubectl apply -f common/ns-and-sa.yaml
</code></pre>
<p>This is followed by creating a cluster role and role-binding.</p>
<pre><code class="language-bash">kubectl apply -f rbac/rbac.yaml
</code></pre>
<p>A TLS certificate and key are created. The default settings already include a certificate and key, which is fine as this is a test environment. In a production environment a new certificate and key should be generated.</p>
<pre><code class="language-bash">kubectl apply -f common/default-server-secret.yaml
</code></pre>
<p>Then a configmap is created that configures the controller. In our case we will simply use the default settings.</p>
<pre><code class="language-bash">kubectl apply -f common/nginx-config.yaml
</code></pre>
<p>There are two ways of deploying the actual controllers and this will depend on the number of nodes and size of the cluster.</p>
<p>They can either be created with a Deployment, in which case the number of controllers that are deployed can be chosen, or a DaemonSet, in which case a controller is created on each worker node.</p>
<p>In our case as there are only two nodes in the cluster we will deploy a DaemonSet.</p>
<pre><code class="language-bash">kubectl apply -f daemon-set/nginx-ingress.yaml
</code></pre>
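<p>Once the DaemonSet is applied, the controller Pods can be confirmed; a sketch (the nginx-ingress namespace is the one created earlier by ns-and-sa.yaml):</p>
<pre><code class="language-bash">kubectl get pods -n nginx-ingress -o wide
</code></pre>
<p>One controller Pod should be listed per worker node.</p>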
<h2 id="ingressresource">Ingress Resource</h2>
<p>We now have an ingress controller running on both of the worker nodes and all traffic into the cluster is sent via the separate HA-Proxy Load Balancer running on a Virtual Machine. The Ingress Resource is written to match incoming requests and send them to the appropriate backend services.</p>
<p>The manifest file for our Ingress Resource is</p>
<pre><code class="language-YAML">apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-resource-1
spec:
  rules:
  - host: website1.example.com
    http:
      paths:
      - backend:
          serviceName: website1
          servicePort: 80
  - host: website2.example.com
    http:
      paths:
      - backend:
          serviceName: website2
          servicePort: 80
  - host: whoami.example.com
    http:
      paths:
      - backend:
          serviceName: whoami
          servicePort: 80
</code></pre>
<p>The manifest is run by</p>
<pre><code class="language-bash">kubectl create -f ingress-resource-1.yaml
</code></pre>
<h2 id="creationofdeploymentsandexposureasservices">Creation of Deployments and Exposure as Services</h2>
<p>The backend Pods will be created as basic NGINX Deployments with some simple configmaps to change the default index page allowing the easy identification of the page.</p>
<p>The 3rd Deployment is a simple application that displays the details of the Host and IP address that is servicing the request.</p>
<p>Once the Deployments are created they are exposed as ClusterIP services which will serve as the permanent IP addresses for the Pods within.</p>
<p>It is these backend services that the rules created by the ingress resource connect to. As the traffic hits the services it is load balanced to the Pods that are within the Deployments.</p>
<p>The website1.example.com Deployment is the following:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: website1
  name: website1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: website1
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: index-volume
          mountPath: /usr/share/nginx/html
      volumes:
        - name: index-volume
          configMap:
            name: website1
</code></pre>
<p>The website2.example.com Deployment is the following:</p>
<pre><code class="language-YAML">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: website2
  name: website2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website2
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: website2
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: index-volume
          mountPath: /usr/share/nginx/html
      volumes:
        - name: index-volume
          configMap:
            name: website2
</code></pre>
<p>The whoami.example.com Deployment is the following:</p>
<pre><code class="language-YAML">apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: whoami
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: whoami
    spec:
      containers:
      - image: containous/whoami:latest
        name: whoami
        resources: {}
</code></pre>
<p>The following configmaps are also created which are mounted into the NGINX Pods acting as the webservers:</p>
<pre><code class="language-YAML">vagrant@k8s-master:~/web-sites$ kubectl get configmaps 
NAME       DATA   AGE
website1   1      25h
website2   1      25h
vagrant@k8s-master:~/web-sites$ kubectl describe configmaps 
Name:         website1
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
index.html:
----

&lt;!DOCTYPE html&gt;
&lt;html&gt;

&lt;body style=&quot;background-color:powderblue;&quot;&gt;
&lt;h1&gt;This is website1&lt;/h1&gt;

&lt;/body&gt;
&lt;html&gt;


Events:  &lt;none&gt;


Name:         website2
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
index.html:
----

&lt;!DOCTYPE html&gt;
&lt;html&gt;

&lt;body style=&quot;background-color:red;&quot;&gt;
&lt;h1&gt;This is website2&lt;/h1&gt;

&lt;/body&gt;
&lt;html&gt;


Events:  &lt;none&gt;
</code></pre>
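<p>The creation of these configmaps is not shown, but given the index.html data key they were most likely built from local files with something along these lines (a sketch; the per-site directory layout is an assumption):</p>
<pre><code class="language-bash"># Hypothetical paths; each directory holds that site's index.html
kubectl create configmap website1 --from-file=website1/index.html
kubectl create configmap website2 --from-file=website2/index.html
</code></pre>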
<p>The Deployments can then be exposed which will create our necessary services</p>
<pre><code class="language-bash">kubectl expose deployment website1 --port=80
kubectl expose deployment website2 --port=80
kubectl expose deployment whoami --port=80

</code></pre>
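<p>Each expose command generates a ClusterIP service that selects the Pods by their app label; the website1 service is roughly equivalent to this manifest (a sketch of what kubectl expose produces):</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: website1
spec:
  type: ClusterIP
  selector:
    app: website1
  ports:
  - port: 80
    targetPort: 80
</code></pre>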
<p>We can then check everything is running in the default namespace</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/kubectl-get-all.png" alt="kubectl-get-all"></p>
<p>We can see the 3 Deployments with the whoami one having 3 replicas within it and each is exposed as a ClusterIP service, normally only reachable from within the cluster.</p>
<p>The ingress controllers have been configured with the rules contained within the ingress resource. This can be checked by running the following:</p>
<pre><code class="language-bash">kubectl get ingress
kubectl describe ingress
</code></pre>
<p><img src="https://myblog.salterje.com/content/images/2020/10/kubectl-get-ingress-2.png" alt="kubectl-get-ingress-2"></p>
<p>It can be seen that the ingress is linking to the services that were set within the rules, and the services link ultimately to the endpoints of the Pods within the Deployments.</p>
<h2 id="checkingtheconnectionfromoutsidethecluster">Checking the Connection from outside the Cluster</h2>
<p>The only thing that remains is to prove that each of the websites can be reached from outside the cluster via the HA-Proxy load balancer that sends the traffic to the nodes.</p>
<p>This requires a modification to the /etc/hosts file on the Host machine to map the URLs to the HA-Proxy which in our lab sits at 192.168.200.100</p>
<pre><code class="language-bash">sudo cat /etc/hosts
[sudo] password for salterje:
127.0.0.1       localhost
127.0.1.1       salterje-PC-X008778

192.168.200.10  k8s-master
192.168.200.100 nginx.example.com
192.168.200.100 website1.example.com
192.168.200.100 website2.example.com
192.168.200.100 website3.example.com
192.168.200.100 whoami.example.com

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</code></pre>
<p>The connections are checked using a browser running on the Host machine which is able to resolve the URLs via the modification of the local hosts file.</p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/website-1-1.png" alt="website-1-1"></p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/website-2.png" alt="website-2"></p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/whoami1-2.png" alt="whoami1"></p>
<p><img src="https://myblog.salterje.com/content/images/2020/10/whoami2-2.png" alt="whoami2-2"></p>
<p>It can be seen from the output that requests to the whoami website are being load balanced between Pods within the cluster by the whoami service, and that the traffic is hitting the ingress controller Pods running on both nodes.</p>
<h2 id="conclusions">Conclusions</h2>
<p>This lab has given an overview of the use of http ingress into a Kubernetes cluster. The main components used are the ingress controllers that can run within the cluster, providing a means of linking external ingress to the internal services running.</p>
<p>The ingress controllers must have an associated ingress resource that sets the rules that link incoming traffic to the services.</p>
<p>The ingress controllers are not part of the standard build of a cluster and there are a large number of solutions available to choose from. In this lab the ingress controllers were from NGINX and were set up by cloning the necessary software and running the included manifest files, allowing the applications to run as Pods within the cluster.</p>
<p>The Lab has been set up using an external HA-Proxy Load balancer running in its own Virtual Machine that forwards http traffic to the two worker nodes. In the case of a cloud based solution this load balancer is often available from the provider.</p>
<p>By looking at the ingress it can be seen which services are linked to the incoming http and from these services the actual Pod endpoints can be determined.</p>
<p>The purpose of the services is to provide a permanent IP address within the cluster that the ingress resource can route traffic to. This means that Pods can come and go but will always be reachable via the service.</p>
<p>The use of ingress controllers and ingress resources allows the routing of incoming requests using shared components, rather than having a dedicated load balancer for each.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>