Wednesday, 16 October 2019

Attaching a second VNIC card to compute in OCI

Well, just under 30 days ago, Oracle announced a series of resources you can use in OCI for free. One thing that had stopped me signing up and trying out OCI in the past was that I wanted to make the best use of the free credits - knowing I wouldn't get a full chance to try things out within the 30 days, I didn't want to sign up prematurely. Now that they offer some free resources, I was prompted to sign up.

I am now at the end of the 30-day period in which I have some credits to use non-free resources. One final thing I wanted to try out was attaching multiple VNICs (virtual network interface cards) to a single compute instance. One use case for this is that you may want a machine accessible on 2 different networks.

It's not just a matter of attaching it in the OCI console - to bring the interface up, you have to perform a couple of extra steps. When I first tried this, I didn't read the docs and figured I would just have to edit the interface config script and bring it up - but no, that is not the correct method.

So first, create your instance. It's worth noting the free machine shape can only have 1 VNIC. Without upgrading your account, you will see you can allocate only 2 VNICs, but if you look at the documentation, it is certainly possible to have many more attached.

As a side note: at first, having missed the public IP address step during creation, I couldn't see the UI to assign a new one and thought I had to attach a new VNIC. Not the case - the setting is just buried deep!

On the instance page, there is an Edit VNIC link. However, this is not where you can enable a public IP address.



Instead, you have to go to the VNIC resource (go to the details page) and you will see a Resources section where you can update details about the IP address.



OK, back to the secondary VNIC. On the compute instance page, under Resources, click Attached VNICs and create a new VNIC. This will attach it to the server.

After you attach it, you will notice the new interface appears as one of your network devices, but without any IP address allocated.



Here, the interface we are interested in is "ens5".
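If you'd rather check from the terminal than a screenshot, listing the network devices will show the new interface without an inet address (device names vary by image and shape - ens5 is just what I saw):

ip addr
# ens3 should show an inet entry (the primary VNIC)
# ens5 should appear with no inet address until we configure it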

Now, this is where we need to turn to the documentation, which provides a script that you can run.

So, what we want to do is log in to the server as root, put a copy of that script on it, and run it.
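A minimal sketch of those steps (I've omitted the script's download URL - grab it from the documentation page linked above):

sudo -s
# copy secondary_vnic_all_configure.sh from the docs into /root, then:
chmod +x /root/secondary_vnic_all_configure.sh
/root/secondary_vnic_all_configure.sh -c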


Perfect - all looking good. At this point, if you reboot the server and check the IP information, you will notice it's not right - the interface configuration hasn't persisted across the reboot.

There are a number of ways you can configure this script to run at boot time, but for this example, I will leverage cron. It supports the schedule expression "@reboot", which you can use to run a script whenever the system boots.

So I would expect the crontab to have a line resembling:

@reboot /root/secondary_vnic_all_configure.sh -c

One thing you will also have to do is make sure /sbin is in your PATH, as the script calls a few commands in that directory, and by default cron only includes /usr/bin and /bin.
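So root's crontab might end up resembling the following (the PATH value here is an assumption - adjust for your system):

PATH=/sbin:/usr/sbin:/usr/bin:/bin
@reboot /root/secondary_vnic_all_configure.sh -c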

And that's a wrap. You can reboot to verify, but otherwise your newly minted VNIC is all set up and configured.

Wednesday, 2 October 2019

OCI: Logging Object Events with the Streaming Service

There are two services in OCI that we can leverage in order to support logging - the Events service and the Streaming service.

With the Events service, we define which events to match by specifying a service name and the corresponding event types. So for Object Storage, we log events based on create, update and delete:


The next part is to define the action type, with three possible options:

  1. Streaming
  2. Notifications
  3. Functions
For this article, we are looking into Streaming. So, the first step is to go ahead and make a stream. Nothing too complex here - just go to the Analytics, Streaming menu in the console and create a new stream. When you create it, you specify a retention policy, which defaults to 24 hours. So I will leave it at the default. Actually, I'm leaving everything at the default.
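As an aside, if you prefer the CLI to the console, I believe creating an equivalent stream looks something like this (the stream name and compartment variable are carried over from later in this post):

oci streaming admin stream create \
    --name ObjLog \
    --partitions 1 \
    --compartment-id $TS_COMPART_ID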

The next step is that we need to define an IAM policy so that cloud events can leverage this streaming functionality. So, head over to IAM and create a new policy with the text:

allow service cloudEvents to use streams in tenancy

You will want this policy in your root compartment.
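If you'd rather script that step too, a sketch with the CLI (the policy name and description are my own, and $TENANCY_OCID is assumed to hold your root compartment OCID):

oci iam policy create \
    --compartment-id $TENANCY_OCID \
    --name cloud-events-streaming \
    --description "Allow cloud events to publish to streams" \
    --statements '["allow service cloudEvents to use streams in tenancy"]'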

Now, we can go ahead and create our event logging. Back over at the Events service (Application Integration -> Events Service), create a new rule. I called mine "StreamObjectEvents".

In the action, you want to specify the action type as Streaming and the specific stream that events should go into. It should look like this:


 

With all that set up, go ahead and perform some operations on your bucket. Once done, head back over to your stream, refresh the events, and you should see new rows in there.


Now that all the pieces are in place, it's time to figure out how we'll consume this data. In this example, I'll be creating a bash script. It's a simple 3-part process:

Step 1 - We need to determine our stream OCID.

oci streaming admin stream list \
    --compartment-id $TS_COMPART_ID \
    --name ObjLog \
    | jq -r '.data[].id'


So here, I have my compartment ID set in an environment variable named "TS_COMPART_ID", and I want to get the stream with the name ObjLog.
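Since the next steps need this value, I capture it into a variable (the same name the cursor step below expects):

objLogStreamId=$(oci streaming admin stream list \
    --compartment-id $TS_COMPART_ID \
    --name ObjLog \
    | jq -r '.data[].id')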

Step 2 - Create a cursor

Streams have a concept of cursors. A cursor tells OCI what data to read from the stream, and a cursor survives for 5 minutes only. There are different kinds of cursors and the documentation kindly lists 5 types of cursors for us:

  • AFTER_OFFSET
  • AT_OFFSET
  • AT_TIME
  • LATEST
  • TRIM_HORIZON 
I found that AT_TIME returned logs after a given time, so I opted to use that type.

My code looks like this:

oci streaming stream cursor create-cursor \
    --stream-id $objLogStreamId \
    --type AT_TIME \
    --partition 0 \
    --time "$(date --date='-1 hour' --rfc-3339=seconds)" \
    | jq -r '.data.value'


Basically, I'm saying here that I want any events that occurred within the last hour.
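As with the stream OCID, the cursor value gets captured into the variable the read step below uses:

cursorId=$(oci streaming stream cursor create-cursor \
    --stream-id $objLogStreamId \
    --type AT_TIME \
    --partition 0 \
    --time "$(date --date='-1 hour' --rfc-3339=seconds)" \
    | jq -r '.data.value')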

Step 3 - Reading and reporting the data

Now that we have all the pieces, we can consume the data in our log. One note: from an auditing point of view, I think it would be better if this event data actually included the user performing the action. Maybe it will be added in the future.

Also note that the data is encoded in base64, so we first need to decode it, which returns JSON in a data structure that resembles the following:

{
    "eventType": "com.oraclecloud.objectstorage.updateobject",
    "cloudEventsVersion": "0.1",
    "eventTypeVersion": "2.0",
    "source": "ObjectStorage",
    "eventTime": "2019-10-02T01:35:32.985Z",
    "contentType": "application/json",
    "data": {
        "compartmentId": "ocid1.compartment.oc1..xxx",
        "compartmentName": "education",
        "resourceName": "README.md",
        "resourceId": "/n/xxx/b/bucket-20191002-1028/o/README.md",
        "availabilityDomain": "SYD-AD-1",
        "additionalDetails": {
            "bucketName": "bucket-20191002-1028",
            "archivalState": "Available",
            "namespace": "xxx",
            "bucketId": "ocid1.bucket.oc1.ap-sydney-1.xxx",
            "eTag": "bdef8e2e-fa20-4889-8cdc-fc1cb7ee5e3b"
        }
    },
    "eventID": "e8e5ef3b-1a98-4bf7-4e47-2827f517feae",
    "extensions": {
        "compartmentId": "ocid1.compartment.oc1..xxx"
    }
}

So, I iterate and output the data like so:

tabData="eventType\teventTime\tresourceName\b" 
for evtVal in $(oci streaming stream message get \
    --stream-id $objLogStreamId \
    --cursor $cursorId \
    | jq -r 'select(.data != null) | .data[].value' \
    )
do
    evtJson=$(echo $evtVal | base64 -d)

    evtType=$(echo $evtJson | jq -r '.eventType')
    evtTime=$(echo $evtJson | jq -r '.eventTime')
    resourceName=$(echo $evtJson | jq -r '.data.resourceName')

    line=$(printf "%s\t%s\t%s" "$evtType" "$evtTime" "$resourceName")
    tabData+="$line\n"

done
 
printf "$tabData" | column -t 

I placed this code on GitHub so you can see the complete script:

https://github.com/tschf/oci-scripts/blob/master/objlog.sh

Friday, 20 September 2019

Setting up a simple web server on OCI

Did you miss the news? Oracle has announced a free tier for OCI which includes 2 compute instances and 2 autonomous databases, among a set of other free resources up to certain limits. This tier isn't going to be suitable for high performance workloads, but hey, it's a pretty good deal I think.

If you've been following my activity, you will notice I've been starting to do a bit more with OCI, and for me, what better time to have an actual play around.

In this post, starting from a completely clean slate (no virtual networks, no compute instances, etc.), I wanted to see how to go about setting up an accessible web server. I opted to try Ubuntu, since that is my daily driver, so I'll just be consistent.

So, head over to the console and navigate to the compute section:



Once there, click the Create Instance button. You will see that it has by default selected Oracle Linux. So, let's see what else is available by clicking the Change Image Source button.


So, on this dialog, I am going to opt for Canonical Ubuntu 18.04 Minimal. Everything else I am going to leave as the default. Before creating the instance, you will want to upload your public key in order to be able to connect to the server over SSH - either by pointing to the file on your system or by pasting it in.

One other piece to notice is that because I don't have a network yet, OCI is going to create one for me.



Now, click the create button.

For me, the provisioning took under a couple of minutes.

Now that it's complete, on the summary details page you will see it reports the private and public IP address information. So, naturally, our next step would be to SSH in to the server. I had read that instances come with the user opc, but in the case of Ubuntu, the username is ubuntu.
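So connecting looks like this (the IP address here is a placeholder - use your instance's public IP):

ssh ubuntu@203.0.113.10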

First, what you will want to do is update the apt cache and upgrade any out-of-date packages.

sudo apt update
sudo apt upgrade

Then, I will install nginx.

sudo apt install nginx-light

Once that process completes, you can verify it's working by checking on the service status and also calling wget on localhost - you should get an index.html downloaded to your current working directory.
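For reference, the verification steps on the server look like:

systemctl status nginx
wget localhost
# index.html should now be in the current directory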


So far so good. Now, if you go to your local system and try to access the server via the public IP address, you will not get the page you expect.

Further, if you run nmap against the server, you will only see port 22 open.
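For example, from your local machine (placeholder IP again):

nmap 203.0.113.10
# PORT   STATE SERVICE
# 22/tcp open  ssh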


So we need to perform 2 more steps before our server can be accessible to the internet.

Firstly, we need to modify our security list to accept connections on port 80. So back in OCI, navigate to your virtual cloud networks (Networking -> Virtual Cloud Networks).






On that page, you will see the newly created network. So open it, and navigate to Security Lists.





On that page, we will want to add a new ingress rule to accept connections for port 80. In this basic example, I'm just opening it for the whole subnet - much like SSH is. In a real world scenario, the architecture would likely be different.

My rule list looks like this:




After adding that rule, you will notice it's still not accessible. The next part is the firewall at the OS level. So, I'm just going to flush the ruleset on the server by running the following:

sudo iptables -P INPUT ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -F

Source:

- https://serverfault.com/questions/129086/how-to-start-stop-iptables-on-ubuntu
- https://stackoverflow.com/questions/54794217/opening-port-80-on-oracle-cloud-infrastructure-compute-node

After that, we can finally access the server in our web browser over the internet. Yay!

Tuesday, 10 September 2019

Saving git credentials when pushing to a remote

When you want to push changes upstream, git will prompt for your login details. To ease pushing changes, you may want to avoid entering these each and every time.

Last time I set this up, I followed the steps detailed on AskUbuntu. That answer basically details the steps to:

1. Install the package libgnome-keyring-dev
2. Compile some files that git provides
3. Update your Git config to use this compiled code

Reviewing that now to set up a new machine, this method is actually deprecated, since its steps are specific to GNOME. Actually, the steps are very much the same, with one underlying change - the package you install in the first step.

There is more detail on StackOverflow, but basically the steps to set this up are:

sudo apt install libsecret-1-0 libsecret-1-dev
cd /usr/share/doc/git/contrib/credential/libsecret
sudo make
git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
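You can confirm the helper is registered by printing the config value back:

git config --global credential.helper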

That's all there is to it! Now when you push, you won't be prompted for your password each time (after an initial push where you enter your credentials).

Monday, 9 September 2019

Correctly classifying PL/SQL source code on GitHub

GitHub provides an engine that classifies source code. It takes various factors into account, so it may not always get it right. When it comes to relational database development, a common file extension would be .sql. However, with many different relational databases out there, it can be hard to determine which RDBMS the code relates to.

A case in point is a repository I came across, which has the following classifications:




However, I happen to know that in this scenario all the source code directly relates to an Oracle database, and as such I believe it should all be classified as PL/SQL.

So, how can we solve this dilemma for accurate reporting?

The engine for determining the language is the package linguist. Within that repository there is a section, Override, which explains how you can override the chosen language very easily.

As it explains, create a .gitattributes file in the root of your repository (if you don't already have one), and specify the linguist-language property for any file extensions that are being miscategorised.

So, within that file, to classify all sql files as PL/SQL code, create a line that looks like this:

*.sql linguist-language=PLSQL


After this change, this repository will start reporting the correct language:



Not only is this good for showing useful file stats within the repository, but the project will now have that source type as its primary language. So if I'm searching for some code, I can specify the language and my project will be returned (before, it was classified as TSQL, so it wasn't being returned in this search).
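For example, a search along these lines would now match the repository (the search term is illustrative):

language:PLSQL apex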






Sunday, 8 September 2019

Get OCI compartment ID by name from bash

If you work with Oracle Cloud, it stands to reason you probably want some tooling around it to simplify your regular tasks. You have the option of using a client library with your programming language of choice, or you can use the command line client and write some bash scripts for your regular tasks.

One common argument you will need when performing tasks is the compartment ID. For this, we can run the command: "oci iam compartment list --all".

This will give us a JSON list of all the compartments:

{
  "data": [
    {
        "compartment-id": "ocid1.tenancy.oc1..xxxxx",
        "defined-tags": {},
        "description": "Compartment for Foo",
        "freeform-tags": {},
        "id": "ocid1.compartment.oc1..xxxxx",
        "inactive-status": null,
        "is-accessible": null,
        "lifecycle-state": "ACTIVE",
        "name": "Foo",
        "time-created": "2019-01-22T13:16:26.592000+00:00"
    }
  ]
}

So - what's a good way we could filter out to get the compartment by name?

There is a handy command line tool called jq, which allows you to query a JSON document easily. So, if we take the above sample and add it into jqplay.org, we can develop the syntax for our selector. We come up with the following rule:

.data[] | select(.name == "Foo") | .id



So we can make this simple bash function:


function getCompartmentId {
   local compartmentName=$1

   oci iam compartment list --all | jq -r ".data[] | select(.name == \"${compartmentName}\") | .id"
}

That way in our script we can reference this function to perform some action on that specific compartment.

compartmentId=$(getCompartmentId Foo)
printf "Compartment ID for Foo is \"%s\"\n" $compartmentId 
 
 

Wednesday, 4 September 2019

Installing Oracle Instant Client on Ubuntu

Now that Oracle has enabled us to download the Instant Client without any click-through for accepting the license, I wanted to revisit a seamless install of the Instant Client on a new setup.

Ubuntu has the documentation about installing the instant client here: https://help.ubuntu.com/community/Oracle%20Instant%20Client.

First - because Oracle provides their releases in RPM archive format (or a tarball), in order to have an installer you need to create a DEB archive. There is a package in the archives, alien, which aids this process.

This gives the start of the script:

#!/bin/bash
# Install dependencies
sudo apt install alien


The 3 packages the Ubuntu documentation tells us to retrieve are:

- devel
- basic (I opt for basiclite instead)
- sqlplus

So, over at the downloads page: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html we can grab the links.

# Download files. Example specific to 19.3 
# Some links were not correct on the downloads page
# (still pointing to a license page), but easy enough to
# figure out from working ones 
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-devel-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-sqlplus-19.3.0.0.0-1.x86_64.rpm

Next, install the RPMs using alien:

sudo alien -i oracle-instantclient19.3-*.rpm

sqlplus will more than likely require the libaio package, so install that dependency:

sudo apt install libaio1

Set the environment up:

# Create Oracle environment script
sudo -s

printf "\n\n# Oracle Client environment\n \
export LD_LIBRARY_PATH=/usr/lib/oracle/19.3/client64/lib/${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export ORACLE_HOME=/usr/lib/oracle/19.3/client64\n" > /etc/profile.d/oracle-env.sh

exit

So, just to have that in the full script, it should look like:

#!/bin/bash
printf "Automated installer of oracle client for Ubuntu" 
# Install dependencies
sudo apt updatesudo apt install -y alien

# Download files. Example specific to 19.3 
# Some links were not correct on the downloads page
# (still pointing to a license page), but easy enough to
# figure out from working ones 
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-devel-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-sqlplus-19.3.0.0.0-1.x86_64.rpm 

# Install all 3 RPMs downloaded
sudo alien -i oracle-instantclient19.3-*.rpm

# Install SQL*Plus dependency  
sudo apt install -y libaio1

# Create Oracle environment script
printf "\n\n# Oracle Client environment\n \
export LD_LIBRARY_PATH=/usr/lib/oracle/19.3/client64/lib/${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export ORACLE_HOME=/usr/lib/oracle/19.3/client64\n" | sudo tee /etc/profile.d/oracle-env.sh > /dev/null

. /etc/profile.d/oracle-env.sh

printf "Install complete. Please verify"

Finally, we verify we're all set up by launching sqlplus:

sqlplus /nolog
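If you have a database reachable, you can go one step further from the prompt (the connect string is a placeholder):

SQL> connect scott@//db.example.com:1521/orclpdb1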