Wednesday, 16 October 2019

Attaching a second VNIC card to compute in OCI

Well, just under 30 days ago, Oracle announced a series of resources you can use in OCI for free. One thing that had stopped me signing up and trying out OCI in the past was that I wanted to make the best use of the free credits, and knowing I wouldn't get a full chance to try things out in the 30 days, I didn't want to sign up prematurely. Now that they offer some free resources, this prompted me to sign up.

I am now at the end of the 30-day period where I have credits to use on non-free resources. One final thing I wanted to try out was attaching multiple VNICs (virtual network interface cards) to a single compute instance. One use case for this is that you may want a machine accessible in two different networks.

It's not just a matter of attaching it in the OCI console - to bring the interface up you have to perform a couple of extra steps. When I first tried this, I didn't read the docs and figured I would just edit the interface config script and bring it up, but no, that is not the correct method.

So first, create your instance. It's worth noting the free machine shape can only have 1 VNIC. Without upgrading your account, you will see you can allocate only 2 VNICs, but if you look at the documentation, it is certainly possible to have many more attached.

As a side note: at first, having missed the step of assigning a public IP address during creation, I couldn't see the UI to assign a new one and thought I had to attach a new VNIC. Not the case - the setting is just buried deep!

On the instance page, there is an Edit VNIC link. However this is not where you can enable a public IP Address.



Instead, you have to go to the VNIC resource (go to the details page) and you will see a Resources section where you can update details about the IP address.



OK, back to the secondary VNIC. Back on the compute instance, under Resources click Attached VNICs and create a new VNIC. This will attach it to the server.

After you attach it, you will notice the new interface appear as one of your network devices, but without any IP address allocated.



Here, the interface we are interested in is "ens5".

Now, this is where we need to turn to the documentation. Here, they provide a script that you can run.

So, what we will want to do is log in to the server as root, place a copy of that script on it, and run it.
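As a rough sketch, the process is something like the following (the script itself and its download URL come from Oracle's documentation; the path here just matches the crontab entry used later in this post):

sudo -i
# place a copy of Oracle's secondary VNIC script on the server (URL per the OCI docs)
chmod +x /root/secondary_vnic_all_configure.sh
# run it with -c, as used in the crontab entry further down
/root/secondary_vnic_all_configure.sh -c
# confirm the new interface now has an address
ip addr show ens5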


Perfect - all looking good. At this point, if you reboot the server and check the IP information, you will notice it's not right - the interface configuration hasn't persisted across the reboot.

There are a number of ways you can configure this script to run at boot time, but for this example, I will leverage cron. It has a frequency attribute of "@reboot" that you can use to get a script to run whenever the system boots.

So I would expect the crontab to have a line resembling:

@reboot /root/secondary_vnic_all_configure.sh -c

One thing you will also have to do is make sure /sbin is in your PATH, as the script calls a few commands in that directory and by default cron only includes /usr/bin and /bin.
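So a minimal root crontab (edited with crontab -e as root) could look something like this - the PATH line is my own addition to cover the commands living under /sbin:

PATH=/sbin:/usr/sbin:/bin:/usr/bin
@reboot /root/secondary_vnic_all_configure.sh -c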

And that's a wrap. You can reboot to verify, but otherwise your newly minted VNIC is all set up and configured.

Wednesday, 2 October 2019

OCI: Logging Object Events with the Streaming Service

There are two pieces of functionality in OCI that we can leverage in order to support logging - the Streaming and Events services.

With the Events service we define which events to match by specifying a service name and the corresponding event types. So for Object Storage, we log events based on create, update and delete:


 The next part is that we can define the action type, with three possible options:

  1. Streaming
  2. Notifications
  3. Functions
For this article, we are looking into Streaming. So, the first step is to go ahead and make a stream. Nothing too complex here - just go to the Analytics, Streaming menu in the console and create a new stream. When you create it, you specify a retention policy, which defaults to 24 hours. I will leave it at the default. Actually, I'm leaving everything at the default.

The next step is that we need to define an IAM policy so that cloud events can leverage this streaming functionality. So, head over to IAM and create a new policy with the text:

allow service cloudEvents to use streams in tenancy

You will want this policy in your root compartment.

Now, we can go ahead and create our event logging. Back over at Events Services (Application Integration -> Event Service), create a new rule. I called mine "StreamObjectEvents".

In the action, you want to specify the action type as Streaming and the specific stream the events should go into. It should look like this:


 

With all that set up, go ahead and perform some operations on your bucket. Once done, head back over to your stream, and refresh the events, and you should see new rows in there.


Now that all the pieces are in place, it's time to figure out how we'll consume this data. In this example I'll be creating a bash script. It's a simple three-part process:

Step 1 - We need to determine our stream OCID.

oci streaming admin stream list \
    --compartment-id $TS_COMPART_ID \
    --name ObjLog \
    | jq -r '.data[].id'


So here, I have my compartment ID set in an environment variable named TS_COMPART_ID, and I want to get the stream with the name ObjLog.
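Since the later steps reference the stream by ID, I capture that value into a variable (the variable names used here and below - objLogStreamId, and cursorId in step 2 - are just my own choices):

objLogStreamId=$(oci streaming admin stream list \
    --compartment-id $TS_COMPART_ID \
    --name ObjLog \
    | jq -r '.data[].id')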

Step 2 - Create a cursor

Streams have a concept of cursors. A cursor tells OCI what data to read from the stream, and a cursor survives for only 5 minutes. There are different kinds of cursors, and the documentation kindly lists 5 types for us:

  • AFTER_OFFSET
  • AT_OFFSET
  • AT_TIME
  • LATEST
  • TRIM_HORIZON 
I found that AT_TIME returned logs after a given time, so I opted to use that type.

My code looks like this:

oci streaming stream cursor create-cursor \
    --stream-id $objLogStreamId \
    --type AT_TIME \
    --partition 0 \
    --time "$(date --date='-1 hour' --rfc-3339=seconds)" \
    | jq -r '.data.value'


Basically, I'm saying here that I want to get any events that occurred in the last hour.

Step 3 - Reading and reporting the data

Now that we have all the pieces, we can consume the data in our log. One note: from an auditing point of view, I think it would be better if this event data actually included the user who performed the action. Maybe it will be added in the future.

Also note that the data is base64 encoded, so we first need to decode it, which gives us JSON in a structure resembling the following:

{
    "eventType": "com.oraclecloud.objectstorage.updateobject",
    "cloudEventsVersion": "0.1",
    "eventTypeVersion": "2.0",
    "source": "ObjectStorage",
    "eventTime": "2019-10-02T01:35:32.985Z",
    "contentType": "application/json",
    "data": {
        "compartmentId": "ocid1.compartment.oc1..xxx",
        "compartmentName": "education",
        "resourceName": "README.md",
        "resourceId": "/n/xxx/b/bucket-20191002-1028/o/README.md",
        "availabilityDomain": "SYD-AD-1",
        "additionalDetails": {
            "bucketName": "bucket-20191002-1028",
            "archivalState": "Available",
            "namespace": "xxx",
            "bucketId": "ocid1.bucket.oc1.ap-sydney-1.xxx",
            "eTag": "bdef8e2e-fa20-4889-8cdc-fc1cb7ee5e3b"
        }
    },
    "eventID": "e8e5ef3b-1a98-4bf7-4e47-2827f517feae",
    "extensions": {
        "compartmentId": "ocid1.compartment.oc1..xxx"
    }
}

So, I iterate and output the data like so:

tabData="eventType\teventTime\tresourceName\b" 
for evtVal in $(oci streaming stream message get \
    --stream-id $objLogStreamId \
    --cursor $cursorId \
    | jq -r 'select(.data != null) | .data[].value' \
    )
do
    evtJson=$(echo $evtVal | base64 -d)

    evtType=$(echo $evtJson | jq -r '.eventType')
    evtTime=$(echo $evtJson | jq -r '.eventTime')
    resourceName=$(echo $evtJson | jq -r '.data.resourceName')

    line=$(printf "%s\t%s\t%s" "$evtType" "$evtTime" "$resourceName")
    tabData+="$line\n"

done
 
printf "$tabData" | column -t 

I placed this code on GitHub so you can see the complete script:

https://github.com/tschf/oci-scripts/blob/master/objlog.sh

Friday, 20 September 2019

Setting up a simple web server on OCI

Did you miss the news? Oracle has announced a free tier for OCI which includes 2 compute instances and 2 autonomous databases, among a set of other free resources up to a certain limit. This tier isn't going to be suitable for high-performance workloads, but hey, it's a pretty good deal I think.

If you've been following my activity, you will notice I've been starting to do a bit more with OCI, and for me, what better time to have an actual play around.

In this post, starting from a completely clean slate (no Virtual networks, no compute instances, etc), I wanted to see how I go about setting up an accessible web server. I opted to try with Ubuntu, since that is my daily driver so I'll just be consistent.

So, head over to the console and navigate to the compute section:



Once there, click the Create Instance button. You will see that it has by default selected Oracle Linux. So, let's see what else is available by clicking the Change Image Source button.


So, on this dialog I am going to opt for Canonical Ubuntu 18.04 Minimal. Everything else I am going to leave as default - though before creating the instance you will want to upload your public key in order to be able to connect to the server over SSH. So upload your public key either by pointing to the file on your system or by pasting it in.

One other piece to notice is that because I don't have a network, OCI is going to create one for us.



Now, click the create button.

For me, the provisioning took under a couple of minutes.

Now that it's complete, on the summary details page you will see it reports the private and public IP address information. So, naturally, our next step is to SSH in to the server. I had read that instances come with the user opc, but in the case of Ubuntu, the username is ubuntu.
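So connecting looks something like this (the IP below is just a placeholder - use the public IP shown on your instance page):

ssh ubuntu@203.0.113.10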

First, you will want to update the apt cache and upgrade any out-of-date packages.

sudo apt update
sudo apt upgrade

Then, I will install nginx.

sudo apt install nginx-light

Once that process completes, you can verify it's working by checking on the service status and also calling wget on localhost - you should get an index.html downloaded to your current working directory.
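For example, on the server itself:

# check the service is up
systemctl status nginx
# fetch the default page; index.html should land in the current directory
wget http://localhost/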


So far so good. Now, if you go to your local system and try to access the server via the public IP address, you will not get the page you expect.

Further, if you run nmap against the server, you will only see port 22.


So we need to perform 2 more steps before our server is accessible to the internet.

Firstly, we need to modify our security list to accept connections on port 80.
So back in OCI, navigate to your virtual networks (Networking -> Virtual Cloud Networks).

On that page, you will see the newly created network. So open that, and navigate to Security Lists.

On that page, we will want to add a new ingress rule to accept connections on port 80. In this basic example, I'm just opening it to the whole subnet - much like SSH is. In a real-world scenario, the architecture would likely be different.

My rule list looks like this:

After adding that rule, you will notice it's still not accessible. The next part is the firewall at the OS level. So, I'm just going to flush my ruleset on the server by running the following:

sudo iptables -P INPUT ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -F

Source:

- https://serverfault.com/questions/129086/how-to-start-stop-iptables-on-ubuntu
- https://stackoverflow.com/questions/54794217/opening-port-80-on-oracle-cloud-infrastructure-compute-node

 After that, we can finally access the server in our web browser over the internet. Yay!

Tuesday, 10 September 2019

Saving git credentials when pushing to a remote

When you want to push changes upstream, git will prompt for your login details. To ease pushing changes, you may want to avoid doing this each and every time.

Last time I set this up I followed the steps detailed on AskUbuntu. That answer basically details the steps to:

1. Install the package libgnome-keyring-dev
2. Compile some files that git provides
3. Update your Git config to use this compiled code

Reviewing that again to set up a new machine, this method is now deprecated, since those steps are specific to GNOME. The new steps are very much the same, with one underlying change - the package you install in the first step.

More detail is on StackOverflow, but basically the steps to set this up are:

sudo apt install libsecret-1-0 libsecret-1-dev
cd /usr/share/doc/git/contrib/credential/libsecret
sudo make
git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
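To double-check that git picked up the helper, you can echo the setting back:

git config --global credential.helper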

That's all there is to it! Now when you push, you won't be prompted for your password each time (after an initial push where you enter your credentials).

Monday, 9 September 2019

Correctly classifying PL/SQL source code on GitHub

GitHub provides an engine that classifies source code. It takes various factors into account, so it may not always get things right. When it comes to relational database development, a common file extension is .sql. However, with many different relational databases out there, it can be hard to determine which RDBMS the code actually relates to.

A case in point is a repository I came across, which has the following classifications:




However, I happen to know that in this scenario all the source code directly relates to an Oracle database, and as such I believe it should all be classified as PL/SQL.

So, how can we solve this dilemma for accurate reporting?

The engine for determining the language lives in the linguist package. Within that repository there is a section, Overrides, which explains how you can override the chosen language very easily.

As it explains, create a file named .gitattributes in the root of your repository (if you don't already have one), and set the linguist-language property for any file extensions that are being miscategorised.

So, within that file, to classify all .sql files as PL/SQL code, create a line that looks like this:

*.sql linguist-language=PLSQL
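If you want to check the effect locally before pushing, the linguist project ships a CLI you can run from the repository root (assuming you have Ruby available and the github-linguist gem installed):

gem install github-linguist
github-linguist --breakdown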


After this change, this repository will start reporting the correct language:



Not only is this good for showing useful file stats within the repository, but the project will now have that source type as its primary language - so if I'm searching for some code, I can specify the language and my project will be returned (before, it was classified as TSQL, so it wasn't being returned in this search).

Sunday, 8 September 2019

Get OCI compartment ID by name from bash

If you work with Oracle Cloud, it stands to reason you probably want some tooling around it to simplify your regular tasks. You have the option of using a client library with your programming language of choice, or you can use the command line client and have some bash scripts for your regular tasks.

One common argument you will need when performing tasks is the compartment ID. For this we can run the command: "oci iam compartment list --all".

This will give us a JSON list of all the compartments:

{
  "data": [
    {
        "compartment-id": "ocid1.tenancy.oc1..xxxxx",
        "defined-tags": {},
        "description": "Compartment for Foo",
        "freeform-tags": {},
        "id": "ocid1.compartment.oc1..xxxxx",
        "inactive-status": null,
        "is-accessible": null,
        "lifecycle-state": "ACTIVE",
        "name": "Foo",
        "time-created": "2019-01-22T13:16:26.592000+00:00"
    }
  ]
}

So - what's a good way to filter this to get the compartment by name?

There is a handy command line tool called jq which allows you to query a JSON document easily. So, if we take the above sample and paste it into jqplay.org, we can develop the syntax for our selector. We come up with the following rule:

.data[] | select(.name == "Foo") | .id
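You can test the selector locally too - for example, assuming the sample output above is saved to a (hypothetical) compartments.json:

jq -r '.data[] | select(.name == "Foo") | .id' compartments.json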



So we can make this simple bash function:


function getCompartmentId {
   local compartmentName=$1

   oci iam compartment list --all | jq -r ".data[] | select(.name == \"${compartmentName}\") | .id"
}

That way in our script we can reference this function to perform some action on that specific compartment.

compartmentId=$(getCompartmentId Foo)
printf "Compartment ID for Foo is \"%s\"\n" $compartmentId

Wednesday, 4 September 2019

Installing Oracle Instant Client on Ubuntu

Now that Oracle has enabled us to download the Instant Client without any click-through for accepting the license, I wanted to revisit a seamless install of the Instant Client on a new setup.

Ubuntu has the documentation about installing the instant client here: https://help.ubuntu.com/community/Oracle%20Instant%20Client.

First - because Oracle provides their releases in RPM archive format (or as a tarball), in order to have an installer you need to convert it to a DEB archive. There is a package in the archives, alien, which aids this process.

This gives the start of the script:

#!/bin/bash
# Install dependencies
sudo apt install alien


The 3 packages the Ubuntu documentation tells us to retrieve are:

- devel
- basic (I opt for basiclite instead)
- sqlplus

So, over at the downloads page: https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html we can grab the link.

# Download files. Example specific to 19.3 
# Some links were not correct on the downloads page
# (still pointing to a license page), but easy enough to
# figure out from working ones 
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-devel-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-sqlplus-19.3.0.0.0-1.x86_64.rpm

Next, install the RPMs using alien:

sudo alien -i oracle-instantclient19.3-*.rpm

SQL*Plus will more than likely require the libaio package, so install that dependency:

sudo apt install libaio1

Set the environment up:

# Create Oracle environment script
sudo -s

printf "\n\n# Oracle Client environment\n \
export LD_LIBRARY_PATH=/usr/lib/oracle/19.3/client64/lib/${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export ORACLE_HOME=/usr/lib/oracle/19.3/client64\n" > /etc/profile.d/oracle-env.sh

exit

So, just to have that in the full script, it should look like:

#!/bin/bash
printf "Automated installer of oracle client for Ubuntu" 
# Install dependencies
sudo apt updatesudo apt install -y alien

# Download files. Example specific to 19.3 
# Some links were not correct on the downloads page
# (still pointing to a license page), but easy enough to
# figure out from working ones 
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-basiclite-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-devel-19.3.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/193000/oracle-instantclient19.3-sqlplus-19.3.0.0.0-1.x86_64.rpm 

# Install all 3 RPM's downloaded 
sudo alien -i oracle-instantclient19.3-*.rpm

# Install SQL*Plus dependency  
sudo apt install -y libaio1

# Create Oracle environment script
printf "\n\n# Oracle Client environment\n \
export LD_LIBRARY_PATH=/usr/lib/oracle/19.3/client64/lib/${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export ORACLE_HOME=/usr/lib/oracle/19.3/client64\n" | sudo tee /etc/profile.d/oracle-env.sh > /dev/null

. /etc/profile.d/oracle-env.sh

printf "Install complete. Please verify"

Finally, we verify we're all set up by launching SQL*Plus:

sqlplus /nolog
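Or, for a slightly stronger check, confirm the version banner and (optionally) try a connection - the connect string below is purely a placeholder:

sqlplus -V
sqlplus myuser/mypassword@//dbhost:1521/mypdb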

Sunday, 1 September 2019

Testing software targeting Ubuntu with multipass


If you're a regular reader of my blog, you're probably across the fact that my primary system is Ubuntu (Linux). When developing, we test our software to make sure everything is working as expected - especially if we want other users to install it, since you may have software installed on your system that other users don't have. So it's a good idea to test in a "clean" environment.


One strategy is with Docker - you can just boot up a docker container and run your scripts:

docker run -it ubuntu:latest


This will put you in a new shell where you can begin trying out your software, or run through your install instructions to make sure there's nothing missing.


Docker however is not the purpose of this article - so, on to Multipass. Multipass is a tool to fire up instances of Ubuntu - it's the technology that snapcraft uses when building snaps. The GitHub repo for the project describes it best:

Multipass is a lightweight VM manager for Linux, Windows and macOS. It's designed for developers who want a fresh Ubuntu environment with a single command. It uses KVM on Linux, Hyper-V on Windows and HyperKit on macOS to run the VM with minimal overhead. It can also use VirtualBox on Windows and macOS. Multipass will fetch images for you and keep them up to date.

So, first we need to install it. This is done with:

sudo snap install multipass --beta --classic


If you are on another system such as Windows or macOS, you can download the installer directly from the product's website.

So, the first thing you will want to do is decide which version of Ubuntu you would like to run. You can get a list of available versions with the command:

multipass find




Once you settle on a version you'd like to target, you would run

multipass launch <imagename>


If you look at the help output for launch, you will see that you can restrict the resources allocated to the VM - disk, CPU, memory.

If you reference "ubuntu" for the image name, you get the current LTS. Similarly, if you omit the image name entirely, you get the current LTS.

Assuming you haven't previously pulled the image, this could take some time (at least in my experience - it took over 30 minutes to pull the current LTS).

Once that is complete, you will have a running instance in the background. When you launch an instance, it is allocated a name. In my first case, it was allocated beaming-gnatcatcher. This is so that I can control it by an easily identifiable name. You could have also allocated your own name with the -n/--name argument.


So now, to actually run commands on the system you can either connect to the machine's console, or pass commands in one by one.

You connect in with the shell command. You will notice that the user is multipass, and thus $HOME is /home/multipass.

Or, if you want to build a script that just runs commands one by one, you would use exec: multipass exec <name> -- <command>
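Putting that together, a quick session might look like this (a sketch only; the instance name and image version are just examples):

multipass launch --name test-env 18.04
multipass exec test-env -- uname -a
multipass shell test-env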
One other thing you will probably want to do is copy files across to the instance. This is easily done with the transfer command. Depending on the direction you are transferring the file, you prefix the file path with the name of the instance, like so:

multipass transfer daemon.json beaming-gnatcatcher:/home/multipass/daemon.json


And of course, the other option is just to mount a path from your host into the VM. Here we use the mount command.

multipass mount /home/trent beaming-gnatcatcher:/home/trent


Finally, to wrap up - you will want to delete, or at least stop, your instances to save resources. Do this with the delete and purge commands, as shown below.
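For example, using the instance name from earlier:

multipass stop beaming-gnatcatcher
multipass delete beaming-gnatcatcher
multipass purge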


Tuesday, 30 July 2019

Consuming node packages from the GitHub registry service

Recently I was using GitHub, and I noticed at the top of the page a new button/number combo, "Used by".



So, that was interesting - it's showing how many repositories are using a particular node package.

Today, I was scrolling through my Twitter feed and I saw a JavaScript developer posting that he's using the GitHub package registry service for his packages moving forward, and not so much the npm registry.

That GitHub had a registry service was news to me, so I was curious to see how one would use packages stored there rather than on npm.

It's worth pointing out, that this service currently seems to be in the beta phase, but here is the page that describes the new offering: https://github.com/features/package-registry.

Within GitHub itself, if you navigate to an Organisation or User profile, you will spot a new tab "Packages".

A simple search on GitHub reveals there are currently 125 npm packages on the GitHub registry service. So, how do we actually use this package registry in our environment when pulling packages?

For this example, I will focus on the package with the most downloads - i18next-axios-backend.

The relevant section in the documentation that describes how to use the GitHub registry service: https://help.github.com/en/articles/configuring-npm-for-use-with-github-package-registry#installing-a-package

So, basically, you will want to have a file named .npmrc in your project root.

Because this particular package falls under the GitHub user providenceinnovation, the contents of the file should be the following:

@providenceinnovation:registry=https://npm.pkg.github.com/

You also need to authenticate yourself. You can do this with the npm login command on your system, or specifying a personal access token in a ~/.npmrc file:

//npm.pkg.github.com/:_authToken=PERSONAL-ACCESS-TOKEN

For now, we can just do the manual login process:

npm login --registry=https://npm.pkg.github.com --scope=@providenceinnovation


(You need to scope the login to the owner of the package you want to pull.)

Then, the package reference is the username/package name, prefixed with an @ symbol. For example:

npm install @providenceinnovation/i18next-axios-backend

Or in your package.json, within the dependencies, specify

"@providenceinnovation/i18next-axios-backend": "1.0.2"

and run

npm install


Monday, 18 March 2019

Oracle Cloud Infrastructure command line client and object storage


Yesterday I blogged about a Google Drive client, and two years back I blogged about a custom workflow I was using to push and pull files from/to Google Drive on Linux. I recently got access to Oracle Cloud Infrastructure, so I thought doing an equivalent task might be a good way to get my toes wet.

As with most of the cloud infrastructure platforms available, Oracle provides us with a command line tool we can use. The project's source code is open source (under UPL 1.0 and Apache 2.0) and hosted over on GitHub. Christoph Ruepprich has previously blogged about this tool, but I wanted to go through it myself - so a lot of this information may be redundant if you already followed along with his post.

For my test case, I wanted to test in an isolated environment, so I went ahead and pulled the latest Ubuntu release using Docker:

docker pull ubuntu:latest

Then I enter that environment by running:

docker run -it ubuntu:latest

This will bring me to a shell prompt with root access. Some initial steps you will want to take are to install the needed packages, along with some extras that may be helpful for testing.

apt-get update
apt-get install python3 python3-distutils curl jq

Before we move on to installing the client, we will need to gather some information. First, go ahead and grab your user OCID and your tenancy OCID.

So first, go to the OCI console: https://console.us-ashburn-1.oraclecloud.com/

User OCID is accessed by going to user settings

And the tenancy OCID is accessed by opening the hamburger menu and accessing tenancy details.


note: all this is documented on Oracle's documentation here: https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm#Other

So, with that information gathered, let's now go ahead and create a bucket to store our files. This is done through Object Storage within Core Infrastructure.

Then create a bucket that you will store your files in.

Now with all that done, we can move on to installing the client.

Per the documentation of the client (on the GitHub project page), run the following command:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

Throughout this process, you should see the following prompts:

===> In what directory would you like to place the install? (leave blank to use '/root/lib/oracle-cli'):
===> In what directory would you like to place the 'oci' executable? (leave blank to use '/root/bin'):
===> In what directory would you like to place the OCI scripts? (leave blank to use '/root/bin/oci-cli-scripts')
===> Modify profile to update your $PATH and enable shell/tab completion now? (Y/n):

For the example, I just left everything as default.
You will also want to set some environment variables before trying to use the tool (it will prompt you to set these if they are not set).

export LC_ALL=C.UTF-8
export LANG=C.UTF-8

Since I'm not re-initialising my shell, I'm also going to set my path so that it includes the oci tool:

export PATH=$PATH:/root/bin

Now, the next step is to store the config about your cloud infrastructure. So run the following command and fill out the prompts with the values we retrieved in the previous steps:

oci setup config

You should see prompts:

Enter a location for your config [/root/.oci/config]:
Enter a user OCID:
Enter a tenancy OCID:
Enter a region (e.g. ca-toronto-1, eu-frankfurt-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1):
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]:
Enter a directory for your keys to be created [/root/.oci]:
Enter a name for your key [oci_api_key]:
Enter a passphrase for your private key (empty for no passphrase):

So, now we need to upload the public key into the cloud infrastructure so you can be authenticated. Output the contents of your public key and then copy it into the public keys section of your user settings page. Depending on what your configuration looks like, you may output the public key with a command like:


cat /root/.oci/oci_api_key_public.pem

So, with that all done you should now be able to perform commands against your OCI instance. A good initial test is to list all compartments:

oci iam compartment list --all

Now, with that done, let's play around with what we're here for: downloading and uploading files.

If you run oci without any arguments, you will see a list of all available sub-commands. A quick scan of this list and we can see that we want to deal with the os sub-command (short for object storage). A delve into that, and we can identify the two commands we will want to use:

oci os object get
oci os object put

A quick scan of the documentation, and we then have our full commands, not dissimilar to the following:

oci os object get --name Portfolio.xlsx --bucket-name exampleBucket --file Portfolio.xlsx

and

oci os object put --bucket-name exampleBucket --file Portfolio.xlsx
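To round this out into something resembling the push/pull workflow I had for Google Drive, the two commands can be wrapped in a couple of small bash functions. This is just a sketch - the bucket name is the example one from above, and --force tells put to overwrite the object if it already exists:

BUCKET=exampleBucket

# upload (or overwrite) a local file as an object of the same name
push() {
    oci os object put --bucket-name "$BUCKET" --file "$1" --force
}

# download an object to a local file of the same name
pull() {
    oci os object get --bucket-name "$BUCKET" --name "$1" --file "$1"
}

push Portfolio.xlsx
pull Portfolio.xlsx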

Sunday, 17 March 2019

Updating my CLI Google Drive Client

A couple of years back I blogged about a custom Google Drive workflow. I recently noticed the tool I was using is facing some issues - when trying to pull a file I am presented with the following:


The problem seems to be that this tool uses a single API key/secret shared amongst all users - and with the popularity of this tool, it's exceeding the daily usage limit each day. The project doesn't seem to have been actively maintained for a while now - so back to the drawing board.

There is another popular tool on GitHub, which supports overriding the API credentials used based on some environment variables. This tool is aptly called "drive" - and the project is found here: https://github.com/odeke-em/drive

Per the documentation, you can set up your own API client credentials to use with this tool to avoid the possibility of any usage limit violations (assuming it's just for personal use, it's unlikely you would exceed these):

> Optionally set the GOOGLE_API_CLIENT_ID and GOOGLE_API_CLIENT_SECRET environment variables to use your own API keys.

For this, you need to go to the Google API Console and create new credentials against a new/existing project.

Once installed, the behaviour is slightly different. First you need a drive context folder. So, per the documentation, I call drive init ~/gdrive. This will prompt you to go to a URL and paste back the generated token.

With that done, if you navigate into your folder, and run the command drive ls, you should see all your Drive files and folders.

Further, if you want to pull a specific file you can do so with the filename or the ID. Since I was using the file ID with the previous tool I was using, I will just continue down that path. So my command ends up looking like this for pulling:

~/gdrive$ drive pull -quiet -no-prompt -id <fileToken>

And similarly for pushing (pushing doesn't seem to support the -id flag).

~/gdrive$ drive push -no-prompt -files MyFile.txt

Wednesday, 6 March 2019

Why is my date format not staying in uppercase?

In my application, I have my date format defined as "DD-MON-YYYY".




In my page, I have defined a default date as `sysdate`. So my date renders with the current date, and everything looks good:




However, as soon as I change the date, the month name does not persist in all caps:




So, what is going on here?

If we look at the HTML node for selecting another date, we can see it runs the following code:

$.datepicker._selectDay(
    id, 
    +this.getAttribute("data-month"), 
    +this.getAttribute("data-year"), 
    this
);


In the latest APEX, we can view the source for this call in: https://static.oracle.com/cdn/apex/18.2.0.00.12/libraries/jquery-ui/1.12.0/jquery-ui-apex.js?v=18.2.0.00.12

This in turn makes a call to:

this._selectDate( 
    id, 
    this._formatDate( 
        inst, 
        inst.currentDay, 
        inst.currentMonth, 
        inst.currentYear
    )
);


Which in turn returns:

return this.formatDate( 
    this._get( inst, "dateFormat" ), 
    date, 
    this._getFormatConfig( inst )
);


OK, so if we put a breakpoint within this function and try to change the date, we will be able to see that this._get( inst, "dateFormat" ) returns the format dd-M-yy. That means APEX is mapping MON to M in jQuery's date format. If you take a closer look at the jQuery docs, you will see that this is the only option for the short month name.

Therefore, if you want to stick with this format (short month name in uppercase), an easy UI change you could make is to add a CSS rule to your application to force date picker fields to render in uppercase.

input.apex-item-datepicker {
    text-transform: uppercase;
}

Wednesday, 27 February 2019

Personal Project Activity Stream on JIRA

I wanted to show my personal activity within JIRA - sometimes it's tricky to find a ticket that you know you recently commented on, so an activity stream can be a way to do this. I know that if, on my project view, I click the project icon:


I get taken to a page that shows an activity stream. This however shows all activity on all tickets in the project, not just for the current user.

I also know that if I go to my user profile I can get an activity stream just for my account, but this will show activity across all projects.

A better approach I would say is to create a dashboard.

From your main menu, select Dashboard -> Manage Dashboards.

At the top right, you will see a button to create a new dashboard - click that button.

Now that you have the dashboard created, once you navigate to it you will see a blank slate that you can add gadgets to. Gadgets are little components to present data to the viewer, whether that be graphs, data grids or activity streams.

You may wish to alter your layout - for this example I will stick to the default 2 column layout.
In the right-hand column, click the "add a new gadget" link.

By default, only 2 gadgets will be displayed. You will need to load all gadgets. Once done, the top entry will be the activity stream.



So, click the Add gadget button against the activity stream.



Here you can apply some global filters so that you get only data you want to see in the activity stream.



..and voila, mission accomplished. You can then easily access this page by going to your dashboard from the main menu.