Monday, 18 March 2019

Oracle Cloud Infrastructure command line client and object storage


Yesterday I blogged about a Google Drive client, and two years back I blogged about a custom workflow I was using to push and pull files from/to Google Drive on Linux. I recently got access to Oracle Cloud Infrastructure, so I thought doing an equivalent task might be a good way to get my toes wet.

As with most of the cloud infrastructure platforms available, Oracle provides us with a command line tool we can use. The project's source code is open source (under UPL 1.0 and Apache 2.0) and hosted over on GitHub. Christoph Ruepprich has previously blogged about this tool, but I wanted to go through it myself - so a lot of this information may be redundant if you already followed along with his blog post.

For my test case, I wanted to test in an isolated environment, so I went ahead and pulled the latest Ubuntu image using Docker:

docker pull ubuntu:latest

Then I enter that environment by running:

docker run -it ubuntu:latest

This will bring me to a shell prompt with root access. As an initial step, you will want to install some needed packages, along with some extras that may be helpful for testing:

apt-get update
apt-get install python3 python3-distutils curl jq

Before we move on to installing the client, we will need to gather some information. First, go ahead and grab your user OCID and your tenancy OCID.

So first, go to the OCI console: https://console.us-ashburn-1.oraclecloud.com/

User OCID is accessed by going to user settings

And the tenancy OCID is accessed by opening the hamburger menu and accessing tenancy details.


Note: all of this is covered in Oracle's documentation, here: https://docs.cloud.oracle.com/iaas/Content/API/Concepts/apisigningkey.htm#Other

So, with that information gathered, let's now go ahead and create a bucket to store our files. This is done through Object Storage within Core Infrastructure.

Then create a bucket that you will store your files in.
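
It's worth noting that, once the command line client is installed and configured (which we'll do next), a bucket can also be created from the shell. A minimal sketch, assuming a bucket named exampleBucket and substituting in the OCID of the compartment the bucket should live in:

oci os bucket create --compartment-id ocid1.compartment.oc1..<compartment_ocid> --name exampleBucket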

Now with all that done, we can move on to installing the client.

Per the documentation of the client (on the GitHub project page), run the following command:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

Throughout this process, you should see the following prompts:

===> In what directory would you like to place the install? (leave blank to use '/root/lib/oracle-cli'):
===> In what directory would you like to place the 'oci' executable? (leave blank to use '/root/bin'):
===> In what directory would you like to place the OCI scripts? (leave blank to use '/root/bin/oci-cli-scripts')
===> Modify profile to update your $PATH and enable shell/tab completion now? (Y/n):

For the example, I just left everything as default.
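
As an aside, if you'd rather skip the prompts entirely, the install script also supports an --accept-all-defaults flag - for example, by downloading the script and running it directly:

curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh -o install.sh
bash install.sh --accept-all-defaults
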
You will also want to set some environment variables before trying to use the tool (it will prompt you to set these if they are not set).

export LC_ALL=C.UTF-8
export LANG=C.UTF-8

Since I'm not re-initialising my shell, I'm also going to set my path so that it includes the oci tool:

export PATH=$PATH:/root/bin

Now, the next step is to store the configuration for your cloud infrastructure. So, run the following command and fill out the prompts with the details we retrieved in the previous steps:

oci setup config

You should see prompts:

Enter a location for your config [/root/.oci/config]:
Enter a user OCID:
Enter a tenancy OCID:
Enter a region (e.g. ca-toronto-1, eu-frankfurt-1, uk-london-1, us-ashburn-1, us-gov-ashburn-1, us-gov-chicago-1, us-gov-phoenix-1, us-langley-1, us-luke-1, us-phoenix-1):
Do you want to generate a new RSA key pair? (If you decline you will be asked to supply the path to an existing key.) [Y/n]:
Enter a directory for your keys to be created [/root/.oci]:
Enter a name for your key [oci_api_key]:
Enter a passphrase for your private key (empty for no passphrase):
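
Once complete, you should end up with a config file (at /root/.oci/config, per the defaults above) shaped something like the following - the values here are placeholders rather than real OCIDs:

[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=<your_key_fingerprint>
key_file=/root/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-ashburn-1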

So, now we need to upload the public key into the cloud infrastructure so that you can be authenticated. Output the contents of your public key, then copy it into the public keys section of your user settings page. Depending on what your configuration looks like, you may output the public key with a command like:


cat /root/.oci/oci_api_key_public.pem

So, with that all done, you should now be able to perform commands against your OCI instance. A good initial test is to list all compartments:

oci iam compartment list --all
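
Since we installed jq back at the start, we can also trim that JSON response down to just the fields we care about - for example, listing only the compartment names:

oci iam compartment list --all | jq -r '.data[].name'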

Now, with that done, let's play around with what we're here for: downloading and uploading files.

If you run oci without any arguments, you will see a list of all available sub-commands. A quick scan of this list shows that we want to deal with the os sub-command (short for object storage). Delving into that, we can identify the two commands we will want to use:

oci os object get
oci os object put

After a quick scan of the documentation, we then have our full commands, not dissimilar to the following:

oci os object get --name Portfolio.xlsx --bucket-name exampleBucket --file Portfolio.xlsx

and

oci os object put --bucket-name exampleBucket --file Portfolio.xlsx
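
And if you don't recall the exact object name to pull down, the same family of commands can list the contents of the bucket:

oci os object list --bucket-name exampleBucket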

Sunday, 17 March 2019

Updating my CLI Google Drive Client

A couple of years back I blogged about a custom Google Drive workflow. I recently noticed that the tool I was using is facing some issues - when trying to pull a file, I am presented with the following:


The problem seems to be that this tool uses a single API key/secret amongst all users - and with the popularity of the tool, it's exceeding the daily usage limit each day. The project doesn't seem to have been actively maintained for a while now - so back to the drawing board.

There is another popular tool on GitHub, which supports overriding the API credentials used based on some environment variables. This tool is aptly called "drive" - and the project is found here: https://github.com/odeke-em/drive

Per the documentation, you can set up your own API client credentials to use with this tool to avoid the possibility of any usage limit violations (assuming it's just for personal use, it's unlikely you would exceed these):

> Optionally set the GOOGLE_API_CLIENT_ID and GOOGLE_API_CLIENT_SECRET environment variables to use your own API keys.

For this, you need to go to the Google API Console and create new credentials against a new/existing project.
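
With the credentials created, exporting them looks something like the following (the values here are placeholders for your own client ID and secret):

export GOOGLE_API_CLIENT_ID="<your-client-id>.apps.googleusercontent.com"
export GOOGLE_API_CLIENT_SECRET="<your-client-secret>"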

Once installed, the behaviour is slightly different. First, you need a Drive context folder. So, per the documentation, I call drive init ~/gdrive. This will prompt you to go to a URL and paste back the generated token.

With that done, if you navigate into your folder, and run the command drive ls, you should see all your Drive files and folders.
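
So, the whole set-up flow condenses down to something like:

drive init ~/gdrive
cd ~/gdrive
drive ls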

Further, if you want to pull a specific file, you can do so with the filename or the ID. Since I was using the file ID with the previous tool, I will just continue down that path. So, my command for pulling ends up looking like this:

~/gdrive$ drive pull -quiet -no-prompt -id <fileToken>

And similarly for pushing (pushing doesn't seem to support the -id flag).

~/gdrive$ drive push -no-prompt -files MyFile.txt

Wednesday, 6 March 2019

Why is my date format not staying in uppercase?

In my application, I have my date format defined as "DD-MON-YYYY".

In my page, I have defined a default date as `sysdate`. So my date renders with the current date, and everything looks good:

However, as soon as I change the date, the month name doesn't persist in all caps, per:

So, what is going on here?

If we look at the HTML node for selecting another date, we can see it runs the following code:

$.datepicker._selectDay(
    id, 
    +this.getAttribute("data-month"), 
    +this.getAttribute("data-year"), 
    this
);


In the latest APEX, we can view the source for this call in: https://static.oracle.com/cdn/apex/18.2.0.00.12/libraries/jquery-ui/1.12.0/jquery-ui-apex.js?v=18.2.0.00.12

This in turn makes a call to:

this._selectDate( 
    id, 
    this._formatDate( 
        inst, 
        inst.currentDay, 
        inst.currentMonth, 
        inst.currentYear
    )
);


Which in turn returns with:

return this.formatDate(
    this._get( inst, "dateFormat" ),
    date,
    this._getFormatConfig( inst )
);


OK, so if we put a breakpoint within this function and try to change the date, we will be able to see that this._get( inst, "dateFormat" ) is returning the format dd-M-yy. That means the mapping APEX makes for MON is to M in jQuery's date format. If you take a closer look at the jQuery docs, you will see that M is the only option for the short month name - and it produces "Mar", not "MAR".

Therefore, if you want to stick with this format (short month name in upper-case), an easy UI change you could make is to add a CSS rule to your application that forces date picker fields to render in uppercase:

input.apex-item-datepicker {
    text-transform: uppercase;
}

Wednesday, 27 February 2019

Personal Project Activity Stream on JIRA

I wanted to show my personal activity within JIRA - sometimes it's tricky to find a ticket that you know you recently commented on, so an activity stream can be a way to do this. I know that if, on my project view, I click the project icon:


I get taken to a page that will show an activity stream. This however shows all activity on all tickets on the project, not just for the current user.

I also know that if I go to my user profile I can get an activity stream just for my account, but this will show activity across all projects.

A better approach I would say is to create a dashboard.

From your main menu, select Dashboard -> Manage Dashboards.

At the top right, you will see a button to create a new dashboard - click that button.

Now that you have the dashboard created, once you navigate to it you will see a blank slate that you can add gadgets to. Gadgets are little components to present data to the viewer, whether that be graphs, data grids or activity streams.

You may wish to alter your layout - for this example I will stick to the default 2 column layout.
In the right-hand column, click the "add a new gadget" link.

By default, only 2 gadgets will be displayed. You will need to load all gadgets. Once done, the top entry will be the activity stream.

So, click the Add gadget button against the activity stream.

Here you can apply some global filters so that you get only data you want to see in the activity stream.

..and voila, mission accomplished. You can then easily access this page by going to your dashboard from the main menu.

Tuesday, 2 October 2018

Acting when the user pastes an image

I recently saw a question about how to act when someone pastes an image (from clipboard data). There is a plugin that can handle that - storing the image into a table - but I was interested, from a purely academic perspective, in how this would work without using the plugin.

So, taking a look on MDN, I can see that there is a 'paste' event that you can listen to. See the documentation here: https://developer.mozilla.org/en-US/docs/Web/Events/paste.

OK, so let's give this a whirl! I'm not going to cover loading this into a table/BLOB column, as it's been covered before. What we'll do is create a region with an image element; this will show our image as we paste. This can easily be extended to load it into a canvas, load it into a table - whatever your heart's content!

Firstly, the region to show what the user pastes:


Next, we need a dynamic action. This will be a custom event ("paste") with the selector being the body.


Our True Action will be "Execute JavaScript Code" with code resembling:

function setImage(evt){
    // The FileReader result is a data URL we can use directly as the img src
    var dataUrl = evt.target.result;
    $('#imageMagick').attr('src', dataUrl);
}

var pasteEvt = this.browserEvent;
// Clipboard data lives on the native event (some browsers wrap it in originalEvent)
var pasteItems =
    (pasteEvt.clipboardData || pasteEvt.originalEvent.clipboardData).items;

for (var index in pasteItems){

    var item = pasteItems[index];
    // We only care about file items (as opposed to plain text)
    if (item.kind === "file"){
        var pasteBlob = item.getAsFile();
        if (pasteBlob.type === "image/png"){
            // Read the blob as a data URL, then hand it to setImage once loaded
            var reader = new FileReader();
            reader.onload = setImage;
            reader.readAsDataURL(pasteBlob);
        }
    }
}


Now, when you paste, the image will get loaded into the region, with the img placeholder we set up. Neat! You may wish to cancel event propagation, but I'll leave that up to you to experiment with depending on your needs.

If you want to try out this specific example, I have one set up here: https://apex.oracle.com/pls/apex/f?p=14882:32


Sources used in the making of this example:

- https://developer.mozilla.org/en-US/docs/Web/Events/paste
- https://stackoverflow.com/questions/6333814/how-does-the-paste-image-from-clipboard-functionality-work-in-gmail-and-google-c

Sunday, 30 September 2018

SQL*Plus, Node and Docker

This post is a continuation of my post from earlier in the week, in which I showed how to set up SQL*Plus, through a docker container that Oracle provides in their container registry.

In that post, I set up the sqlplus command through an alias. Almost all examples I've seen in the past use an alias when setting up a command line program that is in effect a docker container.

However, I have an extension in VSCode that compiles my code, which in effect calls sqlplus and runs my script. Even though I was able to connect in gnome-terminal and use SQL*Plus effectively, in my extension no output was coming through.

My code looked like this:

var ChildProcess = require('child_process');
 
var sqlplusArgs = ["vmtest/vmtest@//192.168.0.104", "@test.sql"];
var sqlplusProcess = ChildProcess.spawn("sqlplus", sqlplusArgs);

sqlplusProcess.stdout.on('data', (data) => {
    console.log( data.toString() );
});


The first issue here is that this is only set up to display the standard output channel - we also need to give the user feedback when any errors occur. So, we can tweak this slightly to become:

var ChildProcess = require('child_process');

// Spawn SQL*Plus, connecting and running the given script
var sqlplusArgs = ["vmtest/vmtest@//192.168.0.104", "@test.sql"];
var sqlplusProcess = ChildProcess.spawn("sqlplus", sqlplusArgs);

function printOutput(data){
    // data arrives as a Buffer, so convert it to a string before logging
    console.log(data.toString());
}

// Relay both standard output and standard error back to the user
sqlplusProcess.stdout.on('data', printOutput);
sqlplusProcess.stderr.on('data', printOutput);


So just a reminder, the alias command looked like this in my previous post:

alias sqlplus='docker run -ti --rm container-registry.oracle.com/database/instantclient sqlplus'

So now when we try and run this little node demo program, we get an exception:


events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: spawn sqlplus ENOENT


So, it looks like node isn't finding the alias we set up - and per this logged issue: https://github.com/nodejs/node/issues/3916, bash doesn't expand aliases in non-interactive shells.

OK, so instead of using an alias, we should consider wrapping our docker command in a shell script. We end up with a script like:

#!/bin/bash
docker run \
    -it \
    --rm \
    container-registry.oracle.com/database/instantclient sqlplus \
    "$@"


I've already set this up on my path and removed my alias, so I can go ahead and try my little node program again. This time, we get the following error:

the input device is not a TTY

This is an easy fix. In our original docker run command, we passed in the arguments -it. The -i is for interactive, and the -t is for TTY. So, we can safely remove the -t from our shell script - line 3 in the script above would then just read: -i \.

At this point, we are not entirely ready. With SQL*Plus, we want to be able to compile scripts - suppose I'm within the folder /home/trent/Projects/blog/ and have a script test.sql. If I launch sqlplus, it won't have access to those files, which you will see if you try to run:

sqlplus vmtest/vmtest@//192.168.0.104/xe @test.sql

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SP2-0310: unable to open file "test.sql"
SQL>


The solution to this comes in two parts. Firstly, exposing your files into the container. Personally, I think it makes enough sense to just expose your entire home folder, which (at least for me) is where most of my source folders will be stored. So, I append the following flag to my docker run command:

-v $HOME:$HOME

This alone will be perfect if I want to compile with absolute paths, e.g.

sqlplus vmtest/vmtest@//192.168.0.104/xe @/home/trent/Projects/blog/test.sql

But usually you will probably just be specifying a relative path. So, we need to set the working directory. The docker run command has a flag for this too, so we can add in:

-w $(pwd)

So finally, the docker run command in our shell script would look like:

#!/bin/bash
docker run \
    -i \
    --rm \
    -v $HOME:$HOME \
    -w $(pwd) \
    container-registry.oracle.com/database/instantclient sqlplus \
    "$@"


So, that brings us to the next point: performance.

You are surely aware by now that Oracle has a newer command line tool for connecting to the database, named SQLcl, which is effectively a drop-in replacement for SQL*Plus.

So after my last post, there was a reply from Jeff Smith:

One of the reasons I advocate having SQL*Plus installed is that the plugin I already mentioned for VSCode (there are similar ones for Atom and Sublime) makes use of SQL*Plus (or SQLcl) to connect to the database and compile the active script/file. Unlike, say, SQL Developer, the connection isn't kept alive - it purely launches SQL*Plus (or SQLcl) to compile the script, and then exits. And the startup time is so much faster with SQL*Plus: from my experience, SQL*Plus is well under 1 second to start up and run the script, where SQLcl takes at least 3 seconds or more.

Since this post, and the last, were about running SQL*Plus through a docker container, a co-worker replied:


In my initial testing, it was still very fast to start up. But don't take my word for it - let's get some hard facts. We can make use of the time command to start each environment and immediately issue an exit statement, to see how long the round trip takes in each.
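
Each test boils down to something along these lines - a sketch, substituting in the docker-backed sqlplus script, or sql for SQLcl, as appropriate:

time ( echo "exit" | sqlplus vmtest/vmtest@//192.168.0.104/xe )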

1. SQL*Plus installed natively


3 tests produced: 0.010s, 0.003s, 0.009s (in that order) for user time


2. SQL*Plus running through a Docker container


3 tests produced: 0.073s, 0.065s, 0.065s (in that order) for user time (a little longer than a native installation though probably not enough of a difference to be noticeable by the user)

3. SQLcl


3 tests produced: 3.513s, 3.399s, 3.560s (in that order) for user time

So, as you can see, there's not a lot of difference between SQL*Plus installed natively and SQL*Plus running through a container.

Wednesday, 26 September 2018

Setting up SQL*Plus with the help of an Oracle Docker container

Working with Oracle databases, I like to install the Oracle Instant Client - probably the biggest reason for me being that it gives me access to SQL*Plus. Personally, I think the installation is very straightforward, but the number one complaint I hear is that it's too difficult to set up.

So, how can you get access to SQL*Plus without this "difficult" installation process?

Say hello to the Oracle instant client docker container.

If you head over to https://container-registry.oracle.com and navigate to the database section, you will notice 3 repositories:

So, you will see the second one in the list is named "instantclient" - exactly what we need!

Side note: this container is a little out of date, having last been updated over a year ago, but I'd say the latest version isn't crucial for most needs.

So, if you click into instantclient, you will need to sign in first so that you can accept the license agreement. You will also need to sign into the registry from your shell.
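
That part is just the standard docker login flow, authenticating with your Oracle account credentials:

docker login container-registry.oracle.com

Once that is done, we can grab that image with: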

docker pull container-registry.oracle.com/database/instantclient:latest

With the image pulled, you will now want to create an alias on your OS. So, go ahead and run the following in your shell:

alias sqlplus='docker run -ti --rm container-registry.oracle.com/database/instantclient sqlplus'

Now, if you run:

type -a sqlplus

You should see that SQL*Plus is now aliased to run that docker container.

I happen to have a VirtualBox VM running an Oracle XE database, with the schema vmtest. It is on my local network with the IP address 192.168.0.104. So, now if I run:

sqlplus vmtest/vmtest@//192.168.0.104/xe


I'll be connected to the database, which the following screenshot demonstrates.

How simple is that! I didn't need to go through any rigorous process to install the instant client, and I now have SQL*Plus at my disposal.