Tuesday, 25 February 2020

Iterating OCI CLI list data in bash

For automating tasks with OCI, you have a few options:

  • OCI CLI (bash)
  • Python SDK
  • Go SDK
  • Java SDK
  • REST API

I'm a bash-first kinda guy, so I would usually opt for the bash solution if the requirement is simple enough. Anything beyond that, and it's worth moving over to Python or Go.

When you run a command, you will typically get a JSON payload. But I'm in bash, so how do I interact with this data?

That's where the nice tool jq comes in: an interface to JSON data, where you can pass in a query path to get the data you want. When you're getting started, you will want to leverage the website jqplay.org, which provides a visual interface for the query paths you build, along with some common examples.
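As a quick taste (assuming jq is installed, and using a trimmed-down stand-in payload for illustration), extracting a single field looks like this:

```shell
# A sample payload in the shape the OCI CLI returns.
payload='{"data": [{"name": "education", "lifecycle-state": "ACTIVE"}]}'

# -r prints the raw string rather than a JSON-quoted one.
echo "$payload" | jq -r '.data[0].name'
# prints: education
```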

So, now over to iterating the data structure. I'm going to take the compartment list as an example.

When calling the command: oci iam compartment list, the data set in my tenancy looks like this:

{
  "data": [
    {
      "compartment-id": "ocid1.tenancy.oc1..xxx",
      "defined-tags": {},
      "description": "education",
      "freeform-tags": {},
      "id": "ocid1.compartment.oc1..xxx",
      "inactive-status": null,
      "is-accessible": null,
      "lifecycle-state": "ACTIVE",
      "name": "education",
      "time-created": "2019-09-20T01:06:31.731000+00:00"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..xxx",
      "defined-tags": {},
      "description": "idcs-xxx|22540605|foo@gmail.com-12345",
      "freeform-tags": {},
      "id": "ocid1.compartment.oc1..xxx",
      "inactive-status": null,
      "is-accessible": null,
      "lifecycle-state": "ACTIVE",
      "name": "ManagedCompartmentForPaaS",
      "time-created": "2019-09-17T02:56:55.916000+00:00"
    },
    {
      "compartment-id": "ocid1.tenancy.oc1..xxx",
      "defined-tags": {},
      "description": "Learning to use terraform",
      "freeform-tags": {},
      "id": "ocid1.compartment.oc1..xxx",
      "inactive-status": null,
      "is-accessible": null,
      "lifecycle-state": "DELETED",
      "name": "terraform",
      "time-created": "2019-09-25T12:29:43.421000+00:00"
    }
  ]
}

So, in my bash script, what I will normally do is get a list of indexes so I can look at these data sets one by one. To do this, you want to use the "keys" function, which will return an array of all the indexes. We want to remove the array brackets and end up with just a number on each line representing the index. So we end up with a jq path of: .data | keys | .[]
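To illustrate with a trimmed-down payload (a sketch, assuming jq is installed):

```shell
payload='{"data": [{"name": "a"}, {"name": "b"}, {"name": "c"}]}'

# "keys" on an array returns its indexes; .[] then unwraps the
# brackets so each index lands on its own line.
echo "$payload" | jq '.data | keys | .[]'
# prints:
# 0
# 1
# 2
```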


And so when we are looping over our data, we just reference the index, to get individual properties for that element.

So, with all this info, our list script looks like this:

#!/bin/bash
set -e

compartmentList=$(oci iam compartment list)

for i in $(echo "$compartmentList" | jq '.data | keys | .[]')
do
    ID=$(echo "$compartmentList" | jq -r ".data[$i].\"id\"")
    name=$(echo "$compartmentList" | jq -r ".data[$i].\"name\"")
    desc=$(echo "$compartmentList" | jq -r ".data[$i].\"description\"")
    lifecycleState=$(echo "$compartmentList" | jq -r ".data[$i].\"lifecycle-state\"")

    echo "ID: $ID"
    echo "Name: $name"
    echo "Desc: $desc"
    echo "State: $lifecycleState"
    echo "****"
done

(side note: with this simple example, there's probably a one-liner you could do with jq, but real-world examples are likely more complex and require some use of one or two properties)
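For the record, that one-liner might look something like the following (a sketch using a stand-in payload; jq's \(...) string interpolation builds each output line):

```shell
# Stand-in for: compartmentList=$(oci iam compartment list)
compartmentList='{"data":[{"id":"ocid1.compartment.oc1..xxx","name":"education","description":"education","lifecycle-state":"ACTIVE"}]}'

# One pass over the array, no bash loop required.
echo "$compartmentList" | jq -r '
  .data[]
  | "ID: \(.id)\nName: \(.name)\nDesc: \(.description)\nState: \(."lifecycle-state")\n****"'
```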

Compartments usually underpin other operations you may be analysing in your tenancy, and one thing I discovered the other day is that if a compartment gets removed, things start going haywire! So, what we'll want to do is restrict our list to only include ones with the lifecycle-state of ACTIVE (depending of course on your business requirements).

So, in our script, we could just add a condition:

if [[ "$lifecycleState" == "ACTIVE" ]]
then
    # TODO
fi
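Alternatively, the filtering can be pushed into jq itself with select, so DELETED compartments never make it into the loop at all (a sketch, assuming the same payload shape as above):

```shell
payload='{"data":[{"name":"education","lifecycle-state":"ACTIVE"},{"name":"terraform","lifecycle-state":"DELETED"}]}'

# select() keeps only the elements where the expression is true.
echo "$payload" | jq -r '.data[] | select(."lifecycle-state" == "ACTIVE") | .name'
# prints: education
```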

However, just to revisit one of my previous blog posts which discussed the query capabilities of the command line client, we can reduce the code and create a reusable component/query.

Go to your oci_cli_rc file and add a new query called active, that looks like this:

active=data[?"lifecycle-state" == `ACTIVE`]

If we use this query in our command, it's worth noting that the data is transformed slightly. Previously it contained an array under the property named "data". That property is removed when using this query, and we now end up with a raw array of objects.

So, if we want to use this query to filter out only active compartments, our script now looks like this:

#!/bin/bash
set -e

compartmentList=$(oci iam compartment list --query query://active)

for i in $(echo "$compartmentList" | jq 'keys | .[]')
do
    ID=$(echo "$compartmentList" | jq -r ".[$i].\"id\"")
    name=$(echo "$compartmentList" | jq -r ".[$i].\"name\"")
    desc=$(echo "$compartmentList" | jq -r ".[$i].\"description\"")
    lifecycleState=$(echo "$compartmentList" | jq -r ".[$i].\"lifecycle-state\"")

    echo "ID: $ID"
    echo "Name: $name"
    echo "Desc: $desc"
    echo "State: $lifecycleState"
    echo "****"
done

The parts that have changed are the jq paths: we just removed the references to .data. Otherwise it's pretty well the same.

Friday, 14 February 2020

Oracle Cloud Infrastructure Cloud Shell - Here's what I discovered

This week I spotted a new terminal-like icon at the top of my OCI tenancy. Upon clicking it, it launches a GCP-esque terminal emulator directly in the browser. I had been tweeting about my discoveries, but thought it'd be a good idea to collate them in one place in a more consumable form.

So, first I would say, it's possible you may not have it yet in your environment. At least for me, in my personal account, I do have it appearing. This was with a home region of Sydney.

Once you launch the shell, type help and you will see a link to the official documentation. You can find it here: https://docs.cloud.oracle.com/en-us//iaas/Content/API/Concepts/cloudshellintro.htm

Like in GCP, you get 5 GB of storage. This will persist for at least 8 months: you get 6 months before OCI will email your tenant administrator, and another 60 days after that before your data will be purged. A decent enough amount of time, I would think.

You can verify this with the command: df -h.
On a fresh connection, I seem to have about 100MB used. Nothing to frown at.

In my fresh shell, the .bash_history file was not present, which means your history won't persist between sessions. Easy fix: just touch that file, and then your history will persist between sessions.

In /home, there are two accounts:

1. Your own
2. oci

The OCI CLI client is installed to the oci user directory.
It uses a profile with whatever access you have in OCI. The region is the one you connected from, although you can easily switch to another region without launching a new shell after switching regions in the console. Just run the command:

export OCI_CLI_REGION=eu-zurich-1 # or whichever region you want

One change I'd suggest is adding something like the following to your `bashrc`:

export OCI_CLI_REGION=$OCI_CLI_PROFILE

(OCI_CLI_PROFILE is set to the region in which you are active in the console, and the bash prompt has the region hard coded into the prompt)

Then modify your PS1 variable to reference OCI_CLI_REGION. The reason for this is that if you do modify the region variable, your bash prompt would otherwise be misleading and could lead to some confusion.
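Putting those two suggestions together, the `bashrc` additions might look like this (a sketch; the PS1 shown is a generic example, not the shell's exact default prompt):

```shell
# Follow the console's active region by default; can still be
# overridden later in the session with a plain export.
export OCI_CLI_REGION=$OCI_CLI_PROFILE

# Reference the variable in the prompt so the displayed region
# always matches what the CLI will actually use.
PS1='[\u@\h $OCI_CLI_REGION \W]\$ '
```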

The documentation states the following are pre-installed:

  • Git
  • Java
  • Python (2 and 3)
  • SQL Plus
  • kubectl
  • helm
  • maven
  • gradle
  • terraform

Some useful information in relation to pre-installed software.

Git is version 1.8. That's fairly old - the 1.8 series dates back to 2012. Just worth noting in case there are some recent features you're expecting. The current stable version is 2.25.

Java version is 1.8

Python 2 includes the oci SDK; Python 3 does not.
If you try to install new packages, you will run into issues, so the best thing to do is create a virtual environment in your home directory and use that instead - especially if you wish to target Python 3, which you should be.
You can follow these steps:

cd $HOME
mkdir python3
python3 -m venv python3/
cd python3
bin/pip3 install oci



I would suggest then updating your path to point to this new $HOME/python3/bin folder so that then becomes the default python3 that your system uses.
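That PATH change is a one-liner in your `bashrc` (assuming the venv location from the steps above):

```shell
# Put the venv's bin directory first so its python3/pip3 take
# precedence over the system ones.
export PATH="$HOME/python3/bin:$PATH"
```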

SQL*Plus is the latest current release - 19.5.
SQLcl is omitted from the list. According to their social media account, this is something they're working to include - so keep your eyes out for that one!

Not mentioned, Golang is also installed. It is on version 1.13. Perfect!

Also not mentioned, Docker. It is also installed and is currently at 19.03

Minor software not mentioned, jq. This is a very useful tool for working with JSON on the command line. So goes hand-in-hand with the cli client.

There seems to be a sufficient amount of memory for most tasks:

              total        used        free      shared  buff/cache   available
Mem:           7.5G        650M        5.5G         16M        1.4G        6.6G
Swap:          8.0G          0B        8.0G


The documentation does mention that it times out after 20 minutes of inactivity. This seems slightly flaky in my experience; even though I was executing commands, I noticed I would still get disconnected.

Well, so far it's looking pretty nice. Kudos, Oracle!

Monday, 20 January 2020

OCI CLI Environment Variables

The OCI CLI client provides a number of arguments that are useful as you look up your components from the command line. As you build internal tooling to query either your own or your customers' resources, it can start to get "interesting" if you need to build a custom argument list.

Since version 2.6.9 of the client, a number of environment variables are supported that you can set rather than building a complex argument list. I think leveraging these variables makes for a cleaner code base in your bash scripts.

If you check the release notes for version 2.6.9, you will see they introduced the following variables:

  • OCI_CLI_PROFILE
  • OCI_CLI_REGION
  • OCI_CLI_USER
  • OCI_CLI_FINGERPRINT
  • OCI_CLI_KEY_FILE
  • OCI_CLI_TENANCY
  • OCI_CLI_ENDPOINT
  • OCI_CLI_CONFIG_FILE
  • OCI_CLI_RC_FILE
  • OCI_CLI_CERT_BUNDLE
  • OCI_CLI_AUTH
  • OCI_CLI_DELEGATION_TOKEN_FILE
  • OCI_CLI_SECURITY_TOKEN_FILE

For the most recent version of what's supported, you can review the code base, where they define all the available environment variables:

https://github.com/oracle/oci-cli/blob/master/src/oci_cli/cli_constants.py


So, what are some examples I use in leveraging these variables?

I take an argument in my script to receive the profile. If this is set, I set the variable OCI_CLI_PROFILE to whatever was passed in and unset the variable OCI_CLI_AUTH, which I have pre-set to instance_principal.

if [[ "$ociProfile" != "" ]]
then
    export OCI_CLI_PROFILE=$ociProfile
    unset OCI_CLI_AUTH
fi

Another example: I have some scripts that audit the whole tenancy, and whenever you run an OCI command it runs against a single region at a time. There is a command that lists the regions you have a subscription to, so we can pull that list, loop over it, and export the relevant region in each iteration.

regions=$(oci iam region-subscription list)

for regionIdx in $(echo "$regions" | jq '.data | keys | .[]'); do
    regionName=$(echo "$regions" | jq -r ".data[$regionIdx].\"region-name\"")
    regionRequiresNotification=false

    export OCI_CLI_REGION="$regionName"
    echo "exported region $OCI_CLI_REGION"
done

Well - that's just a couple of examples. The names of the variables are pretty self explanatory for what they relate to.

OCI: Enabling X11 Forwarding on an Oracle Linux instance

I was connecting to one of my work servers the other day, hoping to copy the contents of a file into my clipboard, which would require an X11-forwarded session. But as I connected to the server with X11 forwarding enabled, I was sad to see the following message:

X11 forwarding request failed on channel 0

This is on Oracle Linux.

Note, if I connect to an Ubuntu instance, I get the message:

/usr/bin/xauth:  file /home/ubuntu/.Xauthority does not exist


But this file is created, and X forwarding works from the get-go.

OK, back to Oracle Linux - how do we fix this?

Actually, the fix is a simple one.

SSH is already configured to enable X11Forwarding. The other change you need to make is to turn off the setting X11UseLocalhost. So part of your sshd configuration would likely look like this:

X11Forwarding yes
#X11DisplayOffset 10
X11UseLocalhost no

After that, you will want to reload the SSH daemon. Do this by running the command:

sudo systemctl reload sshd

Finally, you need to install xauth. This is enabled through the package xorg-x11-xauth.

sudo yum install xorg-x11-xauth

The next time you connect to the server, you should see the file .Xauthority in your home directory. And you will be able to run any X apps remotely.

You connect to the server with the -X flag:

ssh -X opc@server



For what it's worth, you can use the xclip package to copy the contents of files.

So, for example, to copy the sshd_config, you would run:

cat /etc/ssh/sshd_config  | xclip -selection c 


Friday, 17 January 2020

Trimming down on OCI CLI output with a query in the RC file

Oracle Cloud Infrastructure has a command-line client, which leverages their REST API. One thing you can do to streamline your usage is to make use of an RC file. The RC file that is automatically used is the one at the path `~/.oci/oci_cli_rc`, but you can also point to an alternative file path if you have named it something different.

At the most basic level, what you would typically want to do is provide some aliases for arguments. A common argument is `--compartment-id`, which we can alias to simply `-c` by specifying it in our RC file:

[OCI_CLI_PARAM_ALIASES]
-c = --compartment-id


This is all good, but what I think is neat is that you can provide queries to manipulate the output that is returned to the screen.

What's a use case for this? Well, one example: it's not uncommon to list the compartments so you can figure out a compartment ID, but by default there is a lot of information returned that I am almost never interested in. When your tenancy isn't very complex, it's not a problem, but as it grows more complex, the last thing you want to do is scroll through pages of compartment properties you aren't interested in.

You define queries in a configuration section named [OCI_CLI_CANNED_QUERIES]. First, let's look at the full output so we can decide which properties we want returned. What I want, in this order, is:

1. Parent compartment ID
2. Compartment ID
3. Name

So, I define my query like:

[OCI_CLI_CANNED_QUERIES]

simple_list=data[*].{"id": "id", "parent-id": "compartment-id", "name": "name"}


With that saved into my RC file, I can now run my list operation and point to my pre-defined query like so:

oci iam compartment list --query query://simple_list

And my output becomes:

[
  {
    "id": "ocid1.compartment.oc1..xxx",
    "name": "education",
    "parent-id": "ocid1.tenancy.oc1..xxx"
  },
  {
    "id": "ocid1.compartment.oc1..xxx",
    "name": "ManagedCompartmentForPaaS",
    "parent-id": "ocid1.tenancy.oc1..xxx"
  },
  {
    "id": "ocid1.compartment.oc1..xxx",
    "name": "terraform",
    "parent-id": "ocid1.tenancy.oc1..xxx"
  }
]


Much easier to consume, right?

As you can see, the display order doesn't match the way in which I defined the query fields - it looks to come out in alphabetical order, if that matters to you.
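If the ordering bothers you, a similar projection can be done client-side with jq instead, which preserves the key order you write (a sketch against a trimmed payload, assuming jq is installed):

```shell
payload='{"data":[{"id":"ocid1.compartment.oc1..xxx","compartment-id":"ocid1.tenancy.oc1..xxx","name":"education"}]}'

# Build an array of objects holding just the three fields of interest.
echo "$payload" | jq '[.data[] | {"parent-id": ."compartment-id", "id": .id, "name": .name}]'
```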

This is just one example of what you can do. Other examples in the documentation include:

  • Returning data as a simple comma-separated list of values
  • Applying a filter - check out Christoph's blog for some advanced examples
  • Restricting output to a number of records

This query leverages the JMESPath technology. 

You can also see the Oracle provided query examples here: https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/cliconfigure.htm#SpecifyingNamedQueries

Wednesday, 16 October 2019

Attaching a second VNIC card to compute in OCI

Well, just under 30 days ago, Oracle announced a series of resources you can use in OCI for free. One thing that had stopped me signing up and trying out OCI in the past was that I wanted to make the best use of the free credits, and knowing I wouldn't get a full chance to try things out in the 30 days, I didn't want to sign up prematurely. Now that they offer some free resources, this prompted me to sign up.

I am now at the end of the 30-day period where I have some credits to use non-free resources. One final thing I wanted to try out was attaching multiple VNICs (virtual network interface cards) to a single compute instance. One use case for this is that you may want a machine accessible in two different networks.

It's not just a matter of attaching it in the OCI console - to bring the interface up you have to perform a couple of extra steps. When I was first trying this, I didn't read the docs and figured I would just have to edit the interface config script and bring it up, but no, this is not the correct method.

So first, create your instance. It's worth noting the free machine shape can only have one VNIC. Without upgrading your account, you will see you can allocate only two VNICs, but if you look at the documentation, it is certainly possible to have many more attached.

As a side note: at first, having missed the public IP assignment step during creation, I couldn't see the UI to assign a new one and thought I had to attach a new VNIC. Not the case - the setting is just buried deep!

On the instance page, there is an Edit VNIC link. However this is not where you can enable a public IP Address.



Instead, you have to go to the VNIC resource (go to the details page) and you will see a Resources section where you can update details about the IP address.



OK, back to the secondary VNIC. Back on the compute instance, under Resources, click Attached VNICs and create a new VNIC. This will attach it to the server.

After you attach it, you will notice the new interface appear as one of your network devices, but without any IP address allocated.



Here, the interface we are interested in is "ens5".

Now, this is where we need to turn to the documentation. Here, they provide a script that you can run.

So, what we will want to do is log in to the server as root, place a copy of that script there, and run it.


Perfect - all looking good. At this point, if you reboot the server and check the IP information, you will notice it's not right - the interface configuration hasn't persisted across the reboot.

There are a number of ways you can configure this script to run at boot time, but for this example I will leverage cron. It has a frequency attribute of "@reboot" that you can use to run a script whenever the system boots.

So I would expect the crontab to have a line resembling:

@reboot /root/secondary_vnic_all_configure.sh -c

One thing you will also have to do is make sure /sbin is in the PATH, as the script calls a few commands in that directory and by default cron only includes /usr/bin and /bin.
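Putting those two pieces together, the root crontab might end up looking like this (a sketch; the script path is wherever you copied it):

```shell
# Edit with: sudo crontab -e

# cron's default PATH omits /sbin, which the script needs.
PATH=/sbin:/usr/sbin:/usr/bin:/bin

# Re-run the VNIC configuration on every boot.
@reboot /root/secondary_vnic_all_configure.sh -c
```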

And that's a wrap. You can reboot to verify, but otherwise your newly minted VNIC is all set up and configured.

Wednesday, 2 October 2019

OCI: Logging Object Events with the Streaming Service

There are two functionalities in OCI that we can leverage in order to support logging - the Streaming and Events services.

With the Events service, we define which events to match by specifying a service name and the corresponding event types. So for Object Storage, we log events based on create, update and delete:


The next part is that we can define the action type, with three possible options:

  1. Streaming
  2. Notifications
  3. Functions

For this article, we are looking into Streaming. So, the first step is to go ahead and create a stream. Nothing too complex here: just go to the Analytics > Streaming menu in the console and create a new stream. When you create it, you specify a retention policy, which defaults to 24 hours. I will leave it at the default - actually, I'm leaving everything at the defaults.

The next step is that we need to define an IAM policy so that cloud events can leverage this streaming functionality. So, head over to IAM and create a new policy with the text:

allow service cloudEvents to use streams in tenancy

You will want this policy in your root compartment.

Now, we can go ahead and create our event logging. Back over at Events Services (Application Integration -> Event Service), create a new rule. I called mine "StreamObjectEvents".

In the action, you want to specify the action type as Streaming, and the specific stream the events should go into. It should look like this:


 

With all that set up, go ahead and perform some operations on your bucket. Once done, head back over to your stream, and refresh the events, and you should see new rows in there.


Now that all the pieces are in place, it's time to figure out how we'll consume this data. In this example I'll be creating a bash script. It's a simple 3 part process:

Step 1 - We need to determine our stream OCID.

oci streaming admin stream list \
    --compartment-id $TS_COMPART_ID \
    --name ObjLog \
    | jq -r '.data[].id'


So here, I have my compartment ID set in an environment variable named "TS_COMPART_ID" and I want to get the stream with the name ObjLog.

Step 2 - Create a cursor

Streams have a concept of cursors. A cursor tells OCI what data to read from the stream, and a cursor survives for only 5 minutes. There are different kinds of cursors, and the documentation kindly lists 5 types for us:

  • AFTER_OFFSET
  • AT_OFFSET
  • AT_TIME
  • LATEST
  • TRIM_HORIZON 

I found that AT_TIME returned logs after a given time, so I opted to use that type.

My code looks like this:

oci streaming stream cursor create-cursor \
    --stream-id $objLogStreamId \
    --type AT_TIME \
    --partition 0 \
    --time "$(date --date='-1 hour' --rfc-3339=seconds)" \
    | jq -r '.data.value'


Basically, I'm saying here that I want any events that occurred within the last hour.

Step 3 - Reading and reporting the data

Now we have all the pieces, we can consume the data in our log. One note: from an auditing point of view, I think it would be better if this event data actually included the user performing the action. Maybe it will be added in the future.

Also note that the data is base64-encoded, so we first need to decode it, which returns JSON with a structure resembling the following:

{
    "eventType": "com.oraclecloud.objectstorage.updateobject",
    "cloudEventsVersion": "0.1",
    "eventTypeVersion": "2.0",
    "source": "ObjectStorage",
    "eventTime": "2019-10-02T01:35:32.985Z",
    "contentType": "application/json",
    "data": {
        "compartmentId": "ocid1.compartment.oc1..xxx",
        "compartmentName": "education",
        "resourceName": "README.md",
        "resourceId": "/n/xxx/b/bucket-20191002-1028/o/README.md",
        "availabilityDomain": "SYD-AD-1",
        "additionalDetails": {
            "bucketName": "bucket-20191002-1028",
            "archivalState": "Available",
            "namespace": "xxx",
            "bucketId": "ocid1.bucket.oc1.ap-sydney-1.xxx",
            "eTag": "bdef8e2e-fa20-4889-8cdc-fc1cb7ee5e3b"
        }
    },
    "eventID": "e8e5ef3b-1a98-4bf7-4e47-2827f517feae",
    "extensions": {
        "compartmentId": "ocid1.compartment.oc1..xxx"
    }
}

So, I iterate and output the data like so

tabData="eventType\teventTime\tresourceName\n"
for evtVal in $(oci streaming stream message get \
    --stream-id $objLogStreamId \
    --cursor $cursorId \
    | jq -r 'select(.data != null) | .data[].value' \
    )
do
    evtJson=$(echo $evtVal | base64 -d)

    evtType=$(echo $evtJson | jq -r '.eventType')
    evtTime=$(echo $evtJson | jq -r '.eventTime')
    resourceName=$(echo $evtJson | jq -r '.data.resourceName')

    line=$(printf "%s\t%s\t%s" "$evtType" "$evtTime" "$resourceName")
    tabData+="$line\n"

done
 
printf "$tabData" | column -t 

I placed this code on GitHub so you can see the complete script:

https://github.com/tschf/oci-scripts/blob/master/objlog.sh