Friday, 20 September 2019

Setting up a simple web server on OCI

Did you miss the news? Oracle has announced a free tier for OCI which includes 2 compute instances and 2 autonomous databases, among a set of other free resources up to a certain limit. This tier isn't going to be suitable for high performance workloads, but hey, it's a pretty good deal I think.

If you've been following my activity, you will have noticed I've been doing a bit more with OCI lately - so for me, what better time to have an actual play around.

In this post, starting from a completely clean slate (no virtual networks, no compute instances, etc.), I wanted to see how to go about setting up a publicly accessible web server. I opted for Ubuntu, since that is my daily driver, so I'll just be consistent.

So, head over to the console and navigate to the compute section:



Once there, click the Create Instance button. You will see that it has by default selected Oracle Linux. So, let's see what else is available by clicking the Change Image Source button.


So, on this dialog - I am going to opt for Canonical Ubuntu 18.04 Minimal. Everything else I am going to leave as default. Before creating the instance, you will want to upload your public key in order to be able to connect to the server over SSH - either by pointing to the file on your system or by pasting it in.

One other piece to notice is that because I don't have a network, OCI is going to create one for us.



Now, click the create button.

For me, the provisioning took under a couple of minutes.

Now that it's complete, the summary details page reports the private and public IP address information. So, naturally, our next step is to SSH in to the server. I had read that instances come with the user opc, but in the case of Ubuntu, the username is ubuntu.
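Connecting looks like the following - the IP here is just a placeholder, so substitute the public IP reported on your instance's summary page (this assumes the private key matching the public key you uploaded is in its default location):

```shell
ssh ubuntu@<public-ip>
```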

First, what you will want to do is update the apt cache and upgrade any out of date packages.

sudo apt update
sudo apt upgrade

Then, I will install nginx.

sudo apt install nginx-light

Once that process completes you can verify it's working by checking on the status and also calling wget on localhost - you should get an index.html downloaded to your current working directory.
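A sketch of those two checks (assuming systemd, as Ubuntu 18.04 uses):

```shell
# Check the service is running
systemctl status nginx

# Fetch the default page - this saves index.html
# into the current working directory
wget http://localhost/
```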


So far so good. Now, if you head back to your local system and try to access the server via its public IP address, you will not get the page you expect.

Further, if you run nmap against the server, you will only see port 22.
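The scan was along these lines - again, the IP is a placeholder for your instance's public address:

```shell
nmap <public-ip>
# Only 22/tcp shows as open at this point
```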


So we need to perform 2 more steps before our server can be accessible to the internet.

Firstly, we need to modify our security list to accept connections on port 80.
So back in OCI, navigate to your virtual networks (Networking -> Virtual Cloud Networks).






On that page, you will see the newly created network. So open that, and navigate to Security Lists.





On that page, we will want to add a new ingress rule to accept connections on port 80. In this basic example, I'm just opening it up the same way the SSH rule is. In a real world scenario, the architecture would likely be different.

My rule list looks like this:




After adding that rule, you will notice it's still not accessible. The next part is the firewall at the OS level. So, I'm just going to flush my ruleset on the server by running the following (note this removes all firewall rules - fine for a throwaway test box, but you'd want something more considered in production):

sudo iptables -P INPUT ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -F

Source:

- https://serverfault.com/questions/129086/how-to-start-stop-iptables-on-ubuntu
- https://stackoverflow.com/questions/54794217/opening-port-80-on-oracle-cloud-infrastructure-compute-node

 After that, we can finally access the server in our web browser over the internet. Yay!

Tuesday, 10 September 2019

Saving git credentials when pushing to a remote

When you want to push changes upstream, git will prompt for your login details. To ease pushing changes, you may want to avoid doing this each and every time.

Last time I set this up I followed the steps detailed on AskUbuntu. That answer basically details the steps to:

1. Install the package libgnome-keyring-dev
2. Compile some files that git provides
3. Update your Git config to use this compiled code

Just reviewing that to set up a new instance, this method is actually deprecated now, since its steps are specific to GNOME. Actually, the steps are very much the same, with one underlying change - the package you install in the first step.

More detail on StackOverflow, but basically the steps to set this up:

sudo apt install libsecret-1-0 libsecret-1-dev
cd /usr/share/doc/git/contrib/credential/libsecret
sudo make
git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret

That's all there is to it! Now when you push, you won't be prompted for your password each time (after an initial push where you enter your credentials).

Monday, 9 September 2019

Correctly classifying PL/SQL source code on GitHub

GitHub provides an engine that classifies source code. It takes various factors into account, so it may not always get it right. When it comes to relational database development, a common file extension is .sql. However, with so many different relational databases, it can be hard to determine which RDBMS the code relates to.

A case in point is a repository I came across, which has the following classifications:




However, I happen to know that in this scenario all the source code directly relates to an Oracle database, and as such I believe it should all be classified as PL/SQL.

So, how can we solve this dilemma for accurate reporting?

The engine for determining the language is a package called linguist. Within that repository there is a section, Override, which explains how you can override the chosen language very easily.

As it explains, create a file named .gitattributes in the root of your repository (if you don't already have one), and set the linguist-language property for any file extensions that are being miscategorised.

So, within that file, to classify all sql files as PL/SQL code, add a line that looks like this:

*.sql linguist-language=PLSQL
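The same mechanism works at a finer grain, since .gitattributes accepts gitignore-style path patterns - so if, say, only one folder held the Oracle code (a hypothetical path here), you could scope the override to it:

```
oracle/**/*.sql linguist-language=PLSQL
```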


After this change, this repository will start reporting the correct language:



Not only is this good for showing useful file stats within the repository, but the project will now have that source type as its primary language. So if I'm searching for some code, I can specify the language and my project will be returned (before, it was classified as TSQL, so it wasn't being returned in this search).






Sunday, 8 September 2019

Get OCI compartment ID by name from bash

If you work with Oracle Cloud, it stands to reason you probably want some tooling around it to simplify your regular tasks. You have the option of using a client library with your programming language of choice, or you can use the command line client and have some bash scripts for your regular tasks.

One common argument you will need in performing some tasks is the compartment ID. For this we can run the command: "oci iam compartment list --all".

This will give us a JSON list of all the compartments:

{
  "data": [
    {
        "compartment-id": "ocid1.tenancy.oc1..xxxxx",
        "defined-tags": {},
        "description": "Compartment for Foo",
        "freeform-tags": {},
        "id": "ocid1.compartment.oc1..xxxxx",
        "inactive-status": null,
        "is-accessible": null,
        "lifecycle-state": "ACTIVE",
        "name": "Foo",
        "time-created": "2019-01-22T13:16:26.592000+00:00"
    }
  ]
}

So - what's a good way we could filter out to get the compartment by name?

There is a handy command line tool called jq, which allows you to query a JSON document easily. So, if we take the above sample and paste it into jqplay.org we can develop the syntax for our selector. We come up with the following filter:

.data[] | select(.name == "Foo") | .id
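To see the filter in action without calling OCI at all, you can pipe a sample document straight into jq (a sketch using a trimmed-down version of the JSON above; -r strips the surrounding quotes from the output):

```shell
echo '{"data":[{"name":"Foo","id":"ocid1.compartment.oc1..xxxxx"},{"name":"Bar","id":"ocid1.compartment.oc1..yyyyy"}]}' |
  jq -r '.data[] | select(.name == "Foo") | .id'
# -> ocid1.compartment.oc1..xxxxx
```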



So we can make this simple bash function:


function getCompartmentId {
   local compartmentName=$1

   oci iam compartment list --all | jq -r ".data[] | select(.name == \"${compartmentName}\") | .id"
}

That way, in our script, we can reference this function to perform some action on that specific compartment.

compartmentId=$(getCompartmentId Foo)
printf "Compartment ID for Foo is \"%s\"\n" "$compartmentId"
 
 

Wednesday, 4 September 2019

Installing Oracle Instant Client on Ubuntu

Now that Oracle has enabled us to download instant client without any click through for accepting the license, I wanted to revisit a seamless install of the instant client on a new set up.

Ubuntu has the documentation about installing the instant client here: https://help.ubuntu.com/community/Oracle%20Instant%20Client.

First - because Oracle provides their releases in RPM format (or a tarball), in order to use the system package manager you need to convert them to a DEB archive. There is a package in the Ubuntu archives, alien, which aids this process.

This gives the start of the script:

#!/bin/bash
# Install dependencies
sudo apt install alien


The 3 packages the Ubuntu documentation tells us to retrieve are:

- devel
- basic (I opt for basiclite instead)
- sqlplus

So, over at the downloads page: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html we can grab the link.

# Download files. Example specific to 19.3 
# Some links were not correct on the downloads page
# (still pointing to a license page), but easy enough to
# figure out from working ones 
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-devel-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-sqlplus-19.3.0.0.0-1.x86_64.rpm

Next, install the RPMs using alien:

sudo alien -i oracle-instantclient19.3-*.rpm

sqlplus will more than likely require the libaio package, so install that dependency:

sudo apt install libaio1

Set the environment up:

# Create Oracle environment script
sudo -s

printf "\n\n# Oracle Client environment\n\
export LD_LIBRARY_PATH=/usr/lib/oracle/19.3/client64/lib\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH}\n\
export ORACLE_HOME=/usr/lib/oracle/19.3/client64\n" > /etc/profile.d/oracle-env.sh

exit
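The `${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}` part is the `:+` form of bash parameter expansion - it produces `:<old value>` only when the variable is already set and non-empty, so you don't end up with a stray colon in the path. A quick self-contained sketch of the behaviour:

```shell
#!/bin/bash
lib=/usr/lib/oracle/19.3/client64/lib

# Variable unset: the :+ expansion contributes nothing
unset LD_LIBRARY_PATH
echo "${lib}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# -> /usr/lib/oracle/19.3/client64/lib

# Variable set: the old value is appended after a colon
LD_LIBRARY_PATH=/opt/lib
echo "${lib}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# -> /usr/lib/oracle/19.3/client64/lib:/opt/lib
```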

So, just to have that in the full script, it should look like:

#!/bin/bash
printf "Automated installer of the Oracle client for Ubuntu\n"
# Install dependencies
sudo apt update
sudo apt install -y alien

# Download files. Example specific to 19.3 
# Some links were not correct on the downloads page
# (still pointing to a license page), but easy enough to
# figure out from working ones 
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-devel-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-sqlplus-19.3.0.0.0-1.x86_64.rpm 

# Install all 3 RPM's downloaded 
sudo alien -i oracle-instantclient19.3-*.rpm

# Install SQL*Plus dependency  
sudo apt install -y libaio1

# Create Oracle environment script
printf "\n\n# Oracle Client environment\n\
export LD_LIBRARY_PATH=/usr/lib/oracle/19.3/client64/lib\${LD_LIBRARY_PATH:+:\$LD_LIBRARY_PATH}\n\
export ORACLE_HOME=/usr/lib/oracle/19.3/client64\n" | sudo tee /etc/profile.d/oracle-env.sh > /dev/null

. /etc/profile.d/oracle-env.sh

printf "Install complete. Please verify.\n"

Finally, we verify we're all set up by launching sqlplus:

sqlplus /nolog

Sunday, 1 September 2019

Testing software targeting Ubuntu with multipass


If you're a regular reader of my blog, you're probably across the fact that my primary system is Ubuntu (Linux). When developing, we test our software to make sure everything is working as expected - especially if we want other users to install it. You may have software installed on your system that other users don't have, so it's a good idea to test in a "clean" environment.


One strategy is with Docker - you can just boot up a docker container and run your scripts:

docker run -it ubuntu:latest


This will put you in a new shell where you can begin trying out your software, or run through your install instructions to make sure there's nothing missing.


Docker however is not the purpose of this article - so, on to multipass. Multipass is a tool to fire up instances of Ubuntu - it's the technology that snapcraft uses when building snaps. The GitHub repo for the project describes it best:

Multipass is a lightweight VM manager for Linux, Windows and macOS. It's designed for developers who want a fresh Ubuntu environment with a single command. It uses KVM on Linux, Hyper-V on Windows and HyperKit on macOS to run the VM with minimal overhead. It can also use VirtualBox on Windows and macOS. Multipass will fetch images for you and keep them up to date.

So, first we need to install it. This is done with:

sudo snap install multipass --beta --classic


If you are on another system such as Windows or Mac, you can download the installer directly from the product's website.

The first thing you will want to do is decide which version of Ubuntu you would like to run. You can get a list of available versions with the command:

multipass find




Once you settle on a version you'd like to target, you would run:

multipass launch <imagename>


If you notice the help output for launch, you will see that you can restrict the resources allocated to the VM - disk, CPU, memory.

If you reference "ubuntu" for imagename - or omit it entirely - you will get the current LTS.

Assuming you haven't previously pulled the image, this could take some time (at least in my experience - it took over 30 minutes to pull the current LTS).

Once that is complete, you will have a running instance in the background. When you launch an instance, it is allocated a name - in my first case, beaming-gnatcatcher - so that I can control it by an easily identifiable name. You could also have allocated your own name with the -n|--name argument.


So now, to actually run commands on the system, you can either connect to the machine's console, or pass commands in one by one.

You connect in with the shell command. You will notice that the user is multipass, and thus $HOME=/home/multipass.




Or, if you want to build a script that just runs commands one by one, you would use exec - everything after the double dash is run inside the instance:

multipass exec <name> -- <command>



One other thing you will probably want to do is copy files across to the instance. This is easily done with the transfer command. Depending on the direction you are transferring the file, you prefix the file path with the name of the instance, like so:

multipass transfer daemon.json beaming-gnatcatcher:/home/multipass/daemon.json
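Pulling a file back out works the same way, with the instance-prefixed path as the source (a hypothetical file name here):

```shell
multipass transfer beaming-gnatcatcher:/home/multipass/results.txt .
```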


And of course, the other option is just to mount a path from your host into the VM. Here we use the mount command.

multipass mount /home/trent beaming-gnatcatcher:/home/trent


Finally, to wrap up - you will want to delete, or at least stop, your instances to save resources. Do this with the stop, delete and purge commands.
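For example, using the instance name from my run above (substitute your own):

```shell
# Stop the instance to free CPU/memory while keeping its disk
multipass stop beaming-gnatcatcher

# Or remove it entirely: delete marks it for removal,
# and purge then permanently reclaims the space
multipass delete beaming-gnatcatcher
multipass purge
```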