Sunday, 31 December 2017

Windows Instant Client Setup

On my systems, I always opt for the instant client so I can get going with SQL*Plus as quickly as possible - the full client is beyond my needs. If you are a regular consumer of my posts, you may have noticed I'm not primarily a Windows user, but of late I've had a need to use Windows. So this particular post will guide you through setting up the instant client, setting up your tnsnames, and getting SQL Developer to use the OCI/thick driver.

So first, head over to the download page for Instant Client; I'm just grabbing the latest version as at the time of this article.

The packages I usually grab are:

  1. Instant Client Package - Basic Light
  2. Instant Client Package - SQL*Plus
  3. Instant Client Package - SDK

The basic and basic light packages note that the Microsoft Visual Studio 2013 Redistributable is a prerequisite, so go ahead and install that beforehand if necessary.

You should end up with three zip files. Extract the contents of each into the same folder - usually I end up with a single folder named instantclient_12_2 containing all the files.

Each zip file Oracle provides contains a single folder (instantclient_12_2), so if you apply the same path during extraction, all files will go to the same location. In my example, I end up with all my instant client files at the following path:


So, I then cut the folder in the last segment of the path and put it in my C drive, such that I end up with C:\instantclient_12_2.

Now that we have all the files, we need to do a couple of extra steps so we can start using SQL*Plus.

Our folder contains the sqlplus binary, so we should add this path to our Path environment variable so that we can type sqlplus within command prompt without needing to specify the full path. Open the Environment Variables configuration tool within Windows - on Windows 10, this is easily accessible by searching for it in the Start menu.

There are two options - User or System variables. Since I'm the only one using the system, I'll just opt for User variables. So focus on the Path line, click edit, and add the location where you stored your instant client files.

It's also not a bad idea to set an environment variable ORACLE_HOME to point to the same location.

With all that done, you should now be able to open command prompt and type sqlplus /nolog to get a SQL*Plus prompt. If you have a database server you can connect to, try connecting too, to make sure all is good.

Ok, the next part is: how do we connect with our tnsnames entry?

I know my entry will look like this:

XE =
    (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = <hostname>)(PORT = 1521))
        (CONNECT_DATA =
            (SERVICE_NAME = XE)
        )
    )

So, option one: within ORACLE_HOME, create the folder path %ORACLE_HOME%\network\admin, and place a file named tnsnames.ora there with your TNS entry specifications (like above) - you don't need to do anything else; it will be automatically picked up. You can now connect to the server with the convenient shorthand of XE, e.g. sqlplus user@XE

If by chance you would prefer to keep it in another location, just set the TNS_ADMIN environment variable to point to that folder.

With the instant client now installed, I like to tell SQL Developer to use it, so navigate to Tools -- Preferences... -- Database -- Advanced, and set up the configuration based on where you store your instant client.

Read more about the thick driver support within SQL Developer on Jeff Smith's blog:

Sunday, 22 October 2017

Keep the APEX builder tab and application tab in the same Firefox window

A long while ago, I switched to using Google Chrome, and I usually use Firefox only to test some feature, so I literally have not run the APEX builder in Firefox since version 3.2. However, recently I heard about a neat new Firefox feature called "Containers", part of their Test Pilot program, where you can have a set of tabs each with their own cookie/storage/session space.

This is good news, because it would allow me to be logged in with different user accounts simultaneously, speeding up testing time.

However, much to my surprise, unlike Chrome - which opens the application in a new tab when you run it - Firefox opens the application in a new window! I don't particularly like this behaviour, so after some digging I found there is a preference to open new windows in a new tab instead.

The bad news is, this seemed to have no bearing on the behaviour - at least from APEX (I didn't test any other sites that open new tabs).

So, this is where we can turn to some more advanced configuration options. In the address bar, go to about:config and accept the risks. From here, filter the list by applying the search "".

So, the highlighted setting "" has possible values of:

  • 0: Apply the setting under (A) to ALL new windows (even script windows)
  • 1: Override the setting under (A) and always use new windows
  • 2: Apply the setting under (A) to normal windows, but NOT to script windows with features

So, we will want to go ahead and update that value to 0. Now in APEX, when you run the application, you will get a new tab instead of a new window.

One downside with having them in tabs though is the fact that Firefox doesn't support focusing other tabs. The only way to do it that I've seen is to show an alert on the other tab, which will force the tab to steal focus. Not entirely desirable. That said, I set up an example user script to achieve this.

Saturday, 30 September 2017

ES6 with SQLcl / JDK9

In JDK 8, the Nashorn engine was still on ES5. With the latest release (JDK9), they have included optional support for ES6, which can be enabled by way of an argument. So, to test out this functionality I will be trying with a new variable/constant construct called `const`.

Before developing my script for SQLcl, I sometimes like to use jjs to verify everything. jjs is an interactive shell that invokes the Nashorn engine. By default it uses ES5, so we can write a script like so:

jjs> var a = Math.floor(Math.random()*100) * 55
jjs> print (a)

With JDK9, we can invoke this tool with ES6 support via an optional argument `--language`, where we can specify either es5 or es6.

$ jjs --language=es6
jjs> const a = Math.floor(Math.random()*100) * 55
jjs> a = 2
<shell>:1 TypeError: Assignment to constant "a"
jjs> print (a);

The above snippet shows I can use the new `const` construct that is part of the ES6 spec within the Nashorn engine.

This is good and all, but the question is, how do we get this working in SQLcl?

Well, thankfully, when invoking our Java programs we can pass arguments to Nashorn with a system property: `-Dnashorn.args=--language=es6`.

If you take a look at the SQLcl bash script, you will see that if you set some JAVA_OPTS values, that will get passed along to the call to SQLcl. So, if you want to run ES6 (which, why wouldn't you!) in your SQLcl scripts, simply set that variable accordingly.

export JAVA_OPTS="-Dnashorn.args=--language=es6"

So, to verify it works, I set up a little test script using `const` (which isn't supported in ES5) instead of `var`. The script looks like:

print ("Set variable `unchangeableValue` to 55");
const unchangeableValue = 55;
try {
    print ("Trying to change variable to 0");
    unchangeableValue = 0;
} catch (e) {
    print ("Exception");
    print (e);
}

First pass, we will run using ES5 (the default), and the engine picks up on an unrecognized keyword (const) and throws an exception:

$ sql /nolog

SQLcl: Release 17.3.0 Production on Sat Sep 30 16:27:27 2017

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

SQL> script test.js
javax.script.ScriptException: :2:0 Expected an operand but found const
const unchangeableValue = 55;
^ in  at line number 2 at column number 0

Second pass, we set the JAVA_OPTS as specified above. The output then becomes:

$ JAVA_OPTS="-Dnashorn.args=--language=es6" sql /nolog

SQLcl: Release 17.3.0 Production on Sat Sep 30 16:29:17 2017

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

SQL> script test.js
Set variable `unchangeableValue` to 55
Trying to change variable to 0
TypeError: Assignment to constant "unchangeableValue"

And the script runs as we expect!

Tuesday, 19 September 2017

Setting up Oracle XE in Docker - Lessons Learned

For a little side project I'm working on, I had the need to set up a Docker instance of Oracle. I read about Docker a while back and understand the general concept - but up until now, I hadn't had much experience with it (other than running the basic hello-world Docker example).

So, since I'm using the free version of the Oracle Database at the moment (Oracle XE), I specifically wanted to build an image for that. Now, I know that Oracle has a GitHub repository with some sample Docker files - unfortunately, I didn't have success with the Oracle XE version, so it wasn't a bad time to go and build my own.

Here are some things I learnt:

1. The default storage location on my Ubuntu system is: /var/lib/docker

This is a problem as I'm developing, since the way my disk is set up, the root partition soon fills up.

2. You can change the location that images are stored, by creating a json file: /etc/docker/daemon.json.

In here, you will want an entry "data-root" with the path to your desired location. The Docker daemon runs as a service, which is effectively the `dockerd` binary. So, whilst testing your config, it's a good idea to stop that (systemctl stop docker), then update your config and run `dockerd` manually. There is also a `debug` property (true|false) that you can set to get additional output. The first time I ran the daemon with the config file in place, my system had to set up a storage pool:
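Putting those two properties together, a minimal /etc/docker/daemon.json might look like this (the data-root path is just my example):

```json
{
    "data-root": "/media/data/docker",
    "debug": true
}
```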

DEBU[0001] devmapper: Pool doesn't exist. Creating it.

This seemed to bring my system to a halt, but after leaving it and coming back, it was clearly doing something - so keep in mind this isn't a quick process!

Something curious is that this config file also allows you to specify the storage driver. Now, I don't know much about the Docker storage drivers at this stage, but without the config file it uses the `aufs` storage driver; however, if I try to set that in my config file, I get an error about the driver not being available. So it's possible I'm missing some other corresponding entry.

Once all looks good, switch back to using the daemon as a service: systemctl start docker

3. The CMD argument in your Dockerfile is best as a single executable

The CMD argument is what gets executed whenever you run a new instance of your image. For the image I built, the command was to start the oracle-xe service, so I started with CMD ["/etc/init.d/oracle-xe", "start"]. This is good, but the problem is when running in detached mode: because the CMD completes, the container also exits.

After doing a bit of research and looking at Oracle's example Dockerfile, it is a good idea to watch a file - the Oracle one monitors a log file, but another example I saw online suggested just watching /dev/null. So effectively I needed to run two commands:

1. oracle-xe start
2. tail -f /dev/null

When I tried chaining these in the CMD property of the Dockerfile, the container threw errors.

Again, after some more research and looking at the example on Oracle's repository, if you have more than one program to run, you should place this in a simple shell script, and pass that into the CMD property.

So, I end up with a script:

/etc/init.d/oracle-xe start
tail -f /dev/null

Then the Dockerfile referencing:

CMD ["/root/install/"]
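As a sketch, the wrapper script can be created in the build context like so (the name start.sh is my choice - the CMD path above is truncated in this post, so substitute your own):

```shell
# Create the wrapper script the Dockerfile's CMD will point at.
# It starts the Oracle XE service, then tails /dev/null so the
# container's main process never exits.
cat > start.sh <<'EOF'
#!/bin/sh
/etc/init.d/oracle-xe start
tail -f /dev/null
EOF
chmod +x start.sh
```

The script then gets COPY'd into the image and referenced in CMD, as shown above.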

4. The images are generally very minimal and need to have context set up

If you run the Ubuntu image:

docker run -it ubuntu:16.04

You will see there are very few environment variables set up:

root@b7535b34011e:/# pwd

So, it's not a bad idea to set up some basic environment, such as the user:

USER root
ENV HOME /root 

When copying files over, at least now we are placing them within the user's home (/root) rather than the root of the file system.

5. It's a good idea to run interactively whilst developing

Docker supports two methods to run an image

-i, --interactive=true|false
    Keep STDIN open even if not attached. The default is false.

-d, --detach=true|false
    Detached mode: run the container in the background and print the new container ID. The default is false.

I like to run interactively most of the time to see everything is behaving as expected and to see any output. So the full command would be something like:

docker run -it <image_tag>

Replace i with d when all looks good!

If your CMD property isn't giving the desired result, you can override what is set in the image by passing the --entrypoint command line argument with the value of /bin/bash

docker run -it --entrypoint=/bin/bash <image_tag>

6. You need to set the shared memory size in the command line argument

XE requires at least 1GB of shared memory; I am generous and give it 2GB. This needs to be set when running the image, through an argument. So the previous example becomes:

docker run --shm-size=2g -it --entrypoint=/bin/bash <image_tag>

7. Avoid anything that depends on a hostname

When installing XE, the generated listener.ora will reference the hostname. Since each time you run an image in a new container the hostname will be something different, it's a good idea to update this so that you will be able to connect on each corresponding container.

See the example listener.ora in the Oracle repository.

8. Finally, some useful commands

docker build -t <image_tag> <path>
docker run --shm-size=2g -it <image_tag>

# Stop running containers
docker stop $(docker ps -q -a)
# Remove all containers
docker rm $(docker ps -a -q)
# Delete images with no tag/name
docker rmi $(docker images -f "dangling=true" -q)
# Delete all images
docker rmi $(docker images -q)

Friday, 18 August 2017

Bulk loading a template APEX application in a single workspace

I recently had a need to import the same application multiple times - 30+ - so each person accessing the workspace could use their own application to play around with. The user list was supplied to me as an Excel spreadsheet. So, first in the Excel sheet, I had to make a new column to derive some sort of unique value - what better than the first letter of the first name plus the surname, like John Smith -> JSMITH.

So, first I made a new column with the value as a formula: =LEFT(A2,1). That would give me "J" for "John".

Then, for the actual account ID I came up with the formula =UPPER(CONCAT(C2, B2)), where C2 = J and B2 = Smith, giving me JSMITH.

With that list, I can copy it over to my trusty text editor and manipulate it to create an array.
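Incidentally, the same derivation can be done outside Excel entirely; a quick sketch in shell (the names here are sample data):

```shell
# First initial + surname, uppercased - e.g. "John Smith" -> JSMITH
printf '%s\n' 'John Smith' 'Jane Doe' |
    awk '{ print toupper(substr($1, 1, 1) $2) }'
```

which prints JSMITH and JDOE.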

The first part I wanted to do was create all the users, so for that I can use raw PL/SQL, and come up with a block of code like so:


declare

    type t_users is table of varchar2(200)
        index by pls_integer;
    l_all_employees t_users;
    l_ws_id number;

    procedure append_list(
        p_emps in out t_users,
        p_new_person in varchar2)
    is
    begin
        p_emps(p_emps.count + 1) := p_new_person;
    end append_list;

begin

    l_ws_id := apex_util.find_security_group_id('WORKSPACE_NAME');
    apex_util.set_security_group_id(l_ws_id);

    append_list(l_all_employees, 'USER1');
    append_list(l_all_employees, 'USER2');

    for i in 1..l_all_employees.count
    loop
        apex_util.create_user(
            p_user_name => l_all_employees(i),
            p_web_password => l_all_employees(i),
            p_developer_privs => 'CREATE:DATA_LOADER:EDIT:HELP:MONITOR:SQL',
            p_change_password_on_first_use => 'N',
            p_allow_app_building_yn => 'Y',
            p_allow_sql_workshop_yn => 'Y',
            p_allow_team_development_yn => 'Y'
        );
    end loop;
end;
/


Run that, and you should find all the users now have accounts in the workspace - I just assigned the password the same value as the username to keep things simple.

So, the next part was loading the applications. APEX comes with the APEX_APPLICATION_INSTALL API, and if we look at the documentation, we can see a simple import script such as:

  apex_application_install.set_application_alias( 'F' || apex_application_install.get_application_id );

So, we need to run an anonymous block, and then the actual application export script - which we can't put inside an anonymous block.

So, here comes in the scripting functionality of SQLcl. It supports running JavaScript, so as in the first step, I can make an array of all the users, but this time in JavaScript. Then we can just loop over that structure and run two separate statements: 1. the anonymous block; and 2. the application import script:

function runCmd(cmd){
    sqlcl.setStmt(cmd);
    sqlcl.run();
}

var people = [
    'USER1',
    'USER2'
];

for (var i = 0; i < people.length; i++){

    print ('Loading application for: ' + people[i]);

    // 1. the anonymous block, setting the workspace and install options
    runCmd([
            "declare",
            "    l_ws_id number;",
            "begin",
            "    l_ws_id := apex_util.find_security_group_id('WS_NAME');",
            "    apex_util.set_security_group_id(l_ws_id);",
            "    apex_application_install.generate_application_id;",
            "    apex_application_install.generate_offset;",
            "    apex_application_install.set_application_alias( 'F' || apex_application_install.get_application_id);",
            "    apex_application_install.set_application_name( '" + people[i] + " - Workshop Application' );",
            "end;",
            "/"
        ].join("\n"));

    // 2. the application export script (substitute the name of your exported file)
    runCmd("@template_app.sql");

    print (' ->');
}

Then, I just connect to my database in SQLcl and run that script (named appLoader.js):

conn user/pass@DEV
script appLoader.js

note: I could have just run everything (creating the user account and importing the app) in the SQLcl script, but this is just the process I followed as I was trying to figure things out :)

Thursday, 13 July 2017

Report row buttons firing a dynamic action

A lot of the time on reports of data, we'll define a link column, and specify an image which is the beloved pencil icon.

Which is good and all, but looking in the Sample Database Application (packaged application), they have taken a different approach to style links - a button that stretches the width of its containing cell. So, the UI ends up looking like so:

So, how is this done?

Well first, you define a new column for the placeholder. Suppose I want my link column to be Report; I define an extra column in my query:

, 'Report' report_link

Then I need to go to my new column's attributes, and change the column type to a link.

And, in the link attributes, set the text to what you want displayed. Here, I'm just going to display the value of the new column and point to my desired page. So I set the target page, then also the Link text as #REPORT_LINK#.

At this point the report will just show link text, as we expect.

So, then to replicate the same style, we just need to apply the same classes, which happen to be:

  • t-Button 
  • t-Button--simple 
  • t-Button--hot 
  • t-Button--stretch
So, in the link, set the link attributes as:

class="t-Button t-Button--simple t-Button--hot t-Button--stretch".

Now when we run the page, we get the desired behaviour:

How about for a dynamic action?

Well, in the columns link, there is no option for "Defined by Dynamic Action" as we usually see in regular page buttons. We just get page or URL options.

Typically, you will assign a special class to your element so that you can specify a jQuery selector for your dynamic action to fire on, and optionally a data attribute used to store the ID of the row (such as data-order-id="xx"). More on this later.

So, back to the column action. A typical pattern here is to specify URL, with a target of #, javascript:void(0);, or javascript:;.

The # is usually a trick when you want to go to a position in the same page, so not using href as it was designed for.

The void is a JavaScript operator that evaluates its operand and returns undefined, and the other one is obviously an empty JavaScript expression!

When using void, it is important to pass in 0 to the function, or it's likely you will receive the following error:

Uncaught SyntaxError: Unexpected token )

With all that said, I think it is better to avoid these (hacky?) solutions. It is not a "link", so the href attribute should ideally not be there. If you want a row button that will fire a dynamic action, instead you should set the column type to plain text and then set an HTML expression on the column.

Since it is a button we want, it is a button we shall use. We will end up with markup as follows, which we add into the HTML expression:

<button class="orderReportButton t-Button t-Button--simple t-Button--hot
           t-Button--stretch" data-order-id="#ORDER_ID#" type="button">
    <span class="t-Button-label">Report</span>
</button>

There we have it.

Now, we just need to finish off our dynamic action. We create a click dynamic action based on a jQuery selector.

Then, in our true action, we can just reference the expression this.triggeringElement.getAttribute('data-order-id'), or with jQuery, $(this.triggeringElement).data('order-id').

Monday, 10 July 2017

Custom workflow to download or upload specific Google Drive files

I had an interest in downloading and uploading a specific file in my Google Drive at certain timing points. For example's sake, let's call the file "Designs.cad". There is a command line project on GitHub which is Go based, so if you have Go installed, it is just a matter of running:

go get

This will build from source, so if you don't want to do that, or you don't have Go installed (you should - there are lots of great Go projects), you can grab a binary from the list of releases on the project page and place it somewhere on your system, with the command becoming "gdrive".

When you first install it, you will need to authenticate your account. Running any command will prompt you to go to a URL to get an authorization code and paste it back on your console. So for example, if I run:

gdrive list

Which is designed to list files in my Drive. After running the command, if you haven't previously authenticated, you will be prompted to go to a URL and paste back the authorization code. Sample output:

trent@birroth:~$ gdrive list
Authentication needed
Go to the following url in your browser:

Enter verification code: xxx

Where xxx is the verification code I pasted back in.
If successful, you will end up with a file at $HOME/.gdrive/token_v2.json, containing the necessary authorization tokens, so the program can continue to function for as long as the authorization exists for this application.

So, looking at the help, there are two commands that will be useful - download and update.

We can find out the program usage with the help command, like so for the download operation.

$ gdrive help download
Download file or directory
gdrive [global] download [options] 

  -c, --config          Application path, default: /home/trent/.gdrive
  --refresh-token       Oauth refresh token...
  --access-token        Oauth access token...
  --service-account     Oauth service account filename...

  -f, --force           Overwrite existing file
  -s, --skip            Skip existing files
  -r, --recursive       Download directory recursively...
  --path                Download path
  --delete              Delete remote file when download is successful
  --no-progress         Hide progress
  --stdout              Write file content to stdout
  --timeout             Set timeout in seconds...

To use this, I need to find the file ID, which I can do by using the list command, or by looking at the web interface of Google Drive. The filename will be the same as it is on Google Drive. So, with that information I end up with the command:

gdrive download --path /home/trent/ --force 1GUG3Y3Ce56Pa0k2

Which gives output like so:

Downloading Designs.cad -> /home/trent/Designs.cad
Downloaded 1GUG3Y3Ce56Pa0k2 at 18.2 KB/s, total 18.2 KB

Similarly, to send the file in the other direction, we need to look at the usage for the update command:

$ gdrive help update
Update file, this creates a new revision of the file
gdrive [global] update [options]  

  -c, --config           Application path, default: /home/trent/.gdrive
  --refresh-token        Oauth refresh token...
  --access-token         Oauth access token...
  --service-account      Oauth service account filename...

  -p, --parent           Parent id...
  --name                 Filename
  --description          File description
  --no-progress          Hide progress
  --mime                 Force mime type
  --timeout              Set timeout in seconds...
  --chunksize            Set chunk size in bytes...

That means, our command will become:

gdrive update 1GUG3Y3Ce56Pa0k2 /home/trent/Designs.cad
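Since the whole point was to run these at certain timing points, the two commands can be scheduled; a crontab sketch (the times here are arbitrary choices of mine):

```
# m h  dom mon dow  command
0 2  * * *  gdrive download --path /home/trent/ --force 1GUG3Y3Ce56Pa0k2
0 22 * * *  gdrive update 1GUG3Y3Ce56Pa0k2 /home/trent/Designs.cad
```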

Sunday, 9 July 2017

Converting a physical machine for use in VirtualBox

I recently had an interest in converting a physical (Windows) machine I have into a virtualised environment. For my personal virtualisation needs, I tend to use and favour VirtualBox - so my end goal is to have the system running in VirtualBox.

A quick google with these keywords reveals a VirtualBox article on the topic. However, it talks about Windows XP, and some of the software it references is 10 years old. For some extra context, Windows XP was first released in 2001 and its last release (SP3) was in 2008. Ok, so I'm deciding not to go down that path.

Putting aside the previous article, I happen to know there are two solutions that support converting a physical machine.
  • VMWare
  • Microsoft (Disk2vhd)
So, I will go with the VMWare option for this article/guide. Download and install the software from the VMWare site (note: you will need a VMWare account in order to download it).

The software includes official documentation in the form of a PDF document - chapter 6 is the relevant chapter if you want to review that.

Firstly, you will want to start the program as Administrator to ensure you have full privileges. Search for VMWare and right click the relevant entry, and then click "Run as administrator".  Once it's opened, click on the "Convert machine" toolbar button - this will begin the wizard.

For the source, it's pretty safe to leave the default options

On the second step, Destination system, I will opt to target VMWare Player 7.x. And it goes without saying you should target this to be saved on an external disk. So, the destination configuration ends up looking like:

After this step, you will be presented with all the system options that you may want to update - such as reducing the memory, since your virtual environment would probably have far less memory than your physical machine. I also change the number of virtual cores down to 2 (from 4).

With that all done, we can run the conversion. Click Finish and the process will begin. Once the job submits, the process will take some time, so you can re-visit it later (it offers a time estimate - for me, it estimated 1 hour, but jumped up and down periodically).

In the meantime, we can go and grab VMWare Workstation Player, if we don't already have anything to verify the conversion in VMWare. Note: this is free for non-commercial use. This is just if you want to verify the conversion in VMWare - if you prefer, or that license doesn't suit, you can skip this step and grab the ovftool standalone instead.

With the conversion finished, you should end up with two files: a vmdk and vmx file. Here, we can test it works by opening it up in VMWare Player - for performance, you may want to copy the files over to your host computer, space permitting. Aside from that, all should be working.

Now to convert this for use in VirtualBox. You can either just point at the vmdk file and create the machine manually, or create an appliance - I will go with the latter approach, though pointing at the existing storage file (vmdk) will get you up and running much quicker!

VMWare Player comes with a command line program named ovftool which supports converting from VMX to OVF (this tool can also be downloaded standalone - the next step assumes you installed VMWare Player). So, go to the folder where Workstation Player was installed - on my system, it is: "C:\Program Files (x86)\VMware\VMware Player". You should see a folder named "OVFTool" - right click that whilst holding the shift key, and click "Open PowerShell Window Here".

Then run (adjusting to where your VMWare VM was saved to and where you want the VirtualBox appliance saved to) the following:

ovftool.exe "F:\VMWareConverted\WindowsVM\WindowsVM.vmx" "F:\VBAppliance\WindowsVM.ovf"

This is another fairly long process, so sit back!

Once that finishes, head over to VirtualBox, and import the appliance we just created - the ovf file.

The name comes through as "vm", so you'll probably want to update that. I also changed the Guest OS Type property to Windows to match my system.

I also disabled all the network adapters (4) that came through in the settings, so I can manually add them after import - I encountered issues where the network adapters weren't being detected properly in the guest, so this is probably the safer option.

Click import to create the VM within VirtualBox.

Once that finishes, when attempting to run the machine, you will find you're not out of the woods yet. So, here are some extra steps you'll want to do before starting the virtual machine.

  • Enable EFI (if your computer is recent and EFI based)
  • Move the HDD to the SATA controller 
  • Add an optical disk (so we can install the guest additions)
  • Enable the network
  • Switch to Intel HD Audio
  • Install guest additions

I found that when starting the system, I would receive the following error:

Enable EFI

After doing a bit of digging, I can see in the VirtualBox settings that EFI is not enabled, but my system is fairly recent, so EFI is in use and needs to be enabled. So head into the configuration and make sure it is enabled.

So, now this time when trying to boot we will get another error. Progress!

Move the HDD to the SATA controller 

Looking in the VM configuration, I see the storage device is attached to a SAS controller. Just on a hunch, I try moving the device onto the SATA controller (delete and re-add the disk from the storage section in the machine settings).

So, remove the disk "WindowsVM-disk1.vmdk" from the SAS controller

Then re-attach it to the SATA controller. When prompted, specify Choose existing disk and navigate to where the disk is stored. This will be a path similar to: $HOME/VirtualBox VMs/ConvertedPC/WindowsVM-disk1.vmdk.

Add an optical disk

In the same settings area (Storage), click on the optical disk icon that is on the same line as the controller, in order to attach an optical drive to the system.

When prompted, specify to leave empty.

Add a network interface

Since we disabled all network interfaces when importing the appliance, head over to network settings and add a NAT adapter so you can access the internet.

Switch to Intel HD Audio

When I first booted, I found the audio controller wasn't working so I switched it over to Intel HD Audio, and all worked well after that.

And after doing all this, we are able to successfully boot into our PC that has now been virtualised.

Install guest additions

Log in, and install guest additions. This is done by clicking on the Devices menu and selecting Insert Guest Additions CD image...

Follow the steps in the VM. After the process completes it will want to reboot your system - after which point you have your fully functional converted PC as a VM in VirtualBox.

edit: If you have input issues due to hardware differences, there is a forum thread that will help - you need to modify some registry entries.

Sunday, 25 June 2017

Using the Oracle node driver on Windows

I have a pretty fresh installation of Windows, and I wanted to get going with the Node driver on that system. I know from memory it requires the Oracle client, so the first thing I do is go and install the Oracle instant client.

So, I head over to:

And what I normally stick to is three packages:
  1. Basic (Light)
  2. SQL*Plus
  3. SDK

With those downloaded, I unzip them all - as long as you extract to the same directory, each should unzip into the same folder: instantclient_12_2 (depending on the version you grabbed, of course). After unzipping them all, I move the instant client folder to a more convenient location. So, I just place it at: C:\instantclient_12_2

The next step after that is to set up some environment variables.

First, you will want to update your Path so that sqlplus is on it. So, add a new entry on your Path pointing to: C:\instantclient_12_2

Usually, you will want to set your ORACLE_HOME environment variable, so I also do that.

So, path should have a new entry:

And, similarly ORACLE_HOME

A good test to check this is all working: does SQL*Plus launch?

Yep, that looks good.

I already have node and npm installed, so now, without consulting the node oracledb install documentation, I'll just try installing the package.

Still in command prompt, in a new folder somewhere (I'm testing in a tmp folder), I run: npm install oracledb.

Ok, it looks like the system is missing some dependencies. Probably a good time to turn to the install documentation. The README says we need the following:

  • Python 2.7
  • C compiler with support for C++ 11
  • Oracle client (done)

My thought process: Python 2.7 is easy - I can head to the Python website and download it - but a C compiler? Where is the best place to get that? And OCI_LIB_DIR and OCI_INC_DIR - what paths should these be set to? The README doesn't expand on this.

For the first two requirements, there is a convenient node package, windows-build-tools, which just so happens to include Python 2.7 and a C compiler. So, let's first install that.

Just to err on the side of caution, I launch Command Prompt as Administrator and run: npm install -g windows-build-tools

Great - that's looking good. 

How about the OCI_LIB_DIR and OCI_INC_DIR environment variables? Well, the documentation also links to an INSTALL document which contains more detail. It states:

set OCI_LIB_DIR=C:\oracle\instantclient\sdk\lib\msvc
set OCI_INC_DIR=C:\oracle\instantclient\sdk\include

This is a slightly different install path, but the same concept applies. Head to your environment variables area and add those in - except, if you followed my general install path for the instant client, C:\oracle\instantclient would be replaced with C:\instantclient_12_2.

Your environment variables page would look like:

Now with that all done, we should be able to successfully install the node Oracle driver. Start a new command prompt window to make sure all your variables are re-evaluated.
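Before re-running the install, a quick sanity check can confirm the variables are visible to a fresh process. This is just a sketch - the variable names come from the INSTALL document quoted above, and the helper function (missingOracleEnv) is my own, not part of any tooling:

```javascript
// Sketch: check that the environment variables node-oracledb's build
// expects are set before running `npm install oracledb`.
function missingOracleEnv(env) {
  var required = ['ORACLE_HOME', 'OCI_LIB_DIR', 'OCI_INC_DIR'];
  return required.filter(function (name) {
    return !env[name];
  });
}

var missing = missingOracleEnv(process.env);
if (missing.length > 0) {
  console.log('Still missing: ' + missing.join(', '));
} else {
  console.log('Environment looks ready for npm install oracledb');
}
```

Save it as something like checkEnv.js and run `node checkEnv.js` in the new command prompt window.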

Now, if all that was done correctly, you should be able to install the driver. I verified by again running npm install oracledb:

If you have to use Windows, at least you have a path to get up and running! Some would encourage OS X, but I'll leave you by encouraging you to make a switch to Linux ;-)

Sunday, 21 May 2017

SQLcl scripting and npm modules

Node, or rather npm, has a huge collection of submitted modules! And if you've been using SQLcl to connect to the database, you should know it is scriptable - you can, for example, run JavaScript code and call Java APIs. So the question is, how do we get a script to pick up node modules?

During my research, I found a few suggested methods:
  1. J2V8
  2. Project Avatar
  3. jvm-npm
I didn't try the first two methods, but I have had success with the third (jvm-npm). This is a library with the source code hosted over on GitHub -

At this point, it's worth noting - not everything will work, as it does not include the full NodeJS API. 

So, the first step is to host that library somewhere in your project - in my example, which I will link to at the end of the article, I place it at ./lib/jvm-npm.js. Worth mentioning: you can also source this over http in your script, but it's probably best to keep a local copy.

For this example project, I'm going to focus on the lodash node module. So, the next step will be to set up a package.json file where you list any node dependencies - this is purely so I can run `npm install` to install any dependencies my script requires. So go ahead and run `npm init` in your project directory. After completing the wizard, add your dependencies. You should end up with something like:

  "name": "sqlcl-npm-demo",
  "version": "1.0.0",
  "description": "Example of using node modules in an SQLcl script",
  "main": "lodashExample.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  "author": "Trent Schafer",
  "license": "Apache",
  "dependencies": {
    "lodash": "^4.17.4"

Now, after that is done, run: `npm install`.

trent@birroth:~/Projects/sqlcl-npm-demo$ npm install
sqlcl-npm-demo@1.0.0 /home/trent/Projects/sqlcl-npm-demo
└── lodash@4.17.4
trent@birroth:~/Projects/sqlcl-npm-demo$ ls node_modules/

Ok, so we have all of our dependencies installed. Now, how do we get this to work in our SQLcl script?

In a typical node app, you would have something like:

var _ = require('lodash');

If you try this out now, you will get an error like so:

SQL> script lodashExample
javax.script.ScriptException: ReferenceError: "require" is not defined in  at line number 1
<stack trace trimmed>

So, in order to get `require` available to us, we need to make it so our script loads the jvm-npm library that we placed in our project earlier. If you're not aware, you can load other scripts into a nashorn script with the `load` function. Read more over at this article:

So, at the start of our script, add the following call:

load('./lib/jvm-npm.js');
Then try re-running your script, and you should notice no errors this time around. So, it looks like we are good to go! Just to verify, let's take something out of the lodash API to see if all is working as expected. Lodash has an `nth` function that retrieves the element at the specified index of the passed-in array - with one little enhancement: it supports negative indexes to count back from the end - so let's try that out.
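To make the negative-index behaviour concrete, here is a plain-JavaScript sketch of what `_.nth` does (my own illustration, not the real lodash implementation):

```javascript
// Illustrative re-implementation of lodash's _.nth: a negative n
// counts back from the end of the array.
function nth(array, n) {
  var index = n < 0 ? array.length + n : n;
  return array[index];
}

console.log(nth([10, 20, 30, 40], -2)); // 30
```

With that behaviour in mind, the real `_.nth` call in the script below should make immediate sense.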

My full script becomes:

var _ = require('lodash');

var testArray = [
    98, 123, 164, 87  // sample values (the second-last item is 164)
];

var secondLast = _.nth(testArray, -2);

print("The second last item was", secondLast);

Then, the test output:

trent@birroth:~/Projects/sqlcl-npm-demo$ sql vmtest/vmtest@//

SQLcl: Release 4.2.0 Production on Sun May 21 19:40:07 2017

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release - 64bit Production

SQL> script lodashExample.js
The second last item was 164

Which all looks correct. Yay, success.

As I mentioned, it doesn't include the full node API, so you will probably need to be cautious going forward if you use this - but it offers that little bit extra.

Working files from this example:

Sunday, 14 May 2017

Loading data from a web service. An alternate approach

I was recently working on a project where my client had just implemented a new system, and they now wanted to utilise the web services provided by this platform for some reporting that wasn't available out of the box. Since we use APEX, we had the nice APEX_WEB_SERVICE API available - which made things nice and simple - but before that we needed to perform a couple of extra steps, which are required each time you want to pull data over the web from another domain.

It's actually, I feel, too complicated to get going - but I understand these restrictions are in place to keep systems nice and secure.

In a nutshell, we are usually dealing with resources secured with certificates - so we need to create an ACL to the resource and then create a wallet with all the certificates the site uses. After that, we are good to go - until some time down the track when the service updates its certificates, at which point we once again need to update our wallet with the site's certificates.

Now, we got everything all working - the web service returns an XML feed. We parse the result and load it into tabular format into one of our tables. This was set up to run once a day - real time data isn't a strong requirement.

Given we are only running this once a day, I thought another approach could be to leverage SQLcl. It supports running Java and JavaScript through its scripting engine. So, first I fired up my Java IDE to prototype performing a GET request, and came up with the following:

CredentialsProvider provider = new BasicCredentialsProvider();
UsernamePasswordCredentials creds = new UsernamePasswordCredentials("fakeuser", "fakepassword");
provider.setCredentials(AuthScope.ANY, creds);

HttpClient client = HttpClientBuilder.create()
    .setDefaultCredentialsProvider(provider)
    .build();
HttpGet get = new HttpGet("");

try {

    HttpResponse resp = client.execute(get);

    HttpEntity entity = resp.getEntity();

    String respBody = EntityUtils.toString(entity);

} catch (IOException e) {
    e.printStackTrace();
}
That worked well, so now I needed to convert this into a script that could be run with nashorn. So I figured out all the classes I needed, and added the following to a JavaScript file (effectively, import statements):

//All the required Java classes
var CredentialsProvider = Java.type("org.apache.http.client.CredentialsProvider"),
    BasicCredentialsProvider = Java.type("org.apache.http.impl.client.BasicCredentialsProvider"),
    UsernamePasswordCredentials = Java.type("org.apache.http.auth.UsernamePasswordCredentials"),
    AuthScope = Java.type("org.apache.http.auth.AuthScope"),
    HttpClient = Java.type("org.apache.http.client.HttpClient"),
    HttpClientBuilder = Java.type("org.apache.http.impl.client.HttpClientBuilder"),
    HttpGet = Java.type("org.apache.http.client.methods.HttpGet"),
    HttpResponse = Java.type("org.apache.http.HttpResponse"),
    HttpEntity = Java.type("org.apache.http.HttpEntity"),
    EntityUtils = Java.type("org.apache.http.util.EntityUtils"),
    HashMap = Java.type("java.util.HashMap");

So, with all those now available in my script, I could begin copying the same implementation that I had.

Once I had my response coming through, I needed to add it to a HashMap as a Clob datatype (I discovered that SQLcl cuts off bind values larger than 4000 characters, so a Clob seems necessary - especially if the web service response grows large).

I took a look at some example code, and saw an example with a blob with `conn.createBlob()`. So I copied this style but used it for a clob instead, as per the following example:

var map = new HashMap();
var xmlAsClob = conn.createClob();
xmlAsClob.setString(1, respBody);

map.put("xml", xmlAsClob);

This map allows us to use a bind value (xml) with our query using the provided function `util.execute`.

So, for this basic example, I just wanted to log the count that was returned in the XML to run each day. So, for the insert statement, I came up with the following query and implementation:

var insertCountQuery =
"insert into org_count (date_run, count) " +
"select " +
"    sysdate" +
"  , orgs.org_count " +
"from " +
"    xmltable( " +
"        '/Result' " +
"        passing xmltype.createxml(:xml) " +
"            columns " +
"                org_count number path '/Result/count' " +
"    ) orgs";

var insertCountResult = util.execute(insertCountQuery, map);

if (insertCountResult){
    print("Successfully updated from the web service");
} else {
    print("An error occurred");
    print("sqldev.last.err.message.forsqlcode reported:");
    print(ctx.getProperty("sqldev.last.err.message.forsqlcode"));
}

I saved this script as `fetchAndLoad.js`. So when I'm connected to SQLcl, I just need to run the command: `script fetchAndLoad.js`. And voila - our data is sent into your table.

To get this to run daily, it would be a good idea to wrap it in a little shell script and add it to your crontab (or Task Scheduler if you're on Windows).

Take a look at the full example code:

Wednesday, 5 April 2017

Debugging those JavaScripts

So, you have some code that is playing up, and you want to debug it. We have a few options depending on the situation we are in.

1. Debug APEX/web app in Chrome

I primarily use Google Chrome, so I will focus on using that browser to debug. The next thing we want to do is add a break point so that execution pauses and you can inspect variables and step through the code line by line.

Typically, in the developer tools you will want to navigate to the Sources tab, find your script and add the break point. In the case of inline scripts, I found the following interesting behaviour (unsure if it's a buggy browser version or what).

When I run the page for the first time, and open the developer tools, trying to open the page source will result in an empty document.

However, if I re-run the page with the developer tools open, the full page source will be displayed.

Another neat trick that I discovered is that you can map inline code to a "dummy" source file, as documented here. Wherever you define your functions, add a comment line in the format: //# sourceURL=global.js, where "global.js" is the name you want your functions to appear under in the source tree. These sourceURL comments seem to apply on a per-script-block (<script>..</script>) basis.
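For example, the contents of an inline script block using this trick might look like the following (the variable and function here are hypothetical stand-ins for your own globals; I've placed the directive on the last line, which seems to be the conventional spot):

```javascript
// Hypothetical inline <script> block contents; the sourceURL directive
// below makes DevTools list this block as "global.js" in the source tree.
var APP_VERSION = '1.0.0';

function formatLabel(name) {
  return name + ' (v' + APP_VERSION + ')';
}
//# sourceURL=global.js
```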

So, if I add that comment line to the global variable and function definitions in my page, the source tree in developer tools will now come in like so:

Neat, huh!

The other way of course is if you reference an external script.

Ok, now we know how to find our sources, what we want to do is add a break point to see what is going on. That is just a matter of clicking in the gutter where the line numbers are. You'll see a blue mark appear and be ready to run whatever action causes the code to run.

This UI gives all sorts of goodies - watches, call stack, variable values, etc.

As you can see, the console allows also to run expressions against the current scope where the debugger is paused at.

(One thing that I haven't mentioned is that you can also add a line with the keyword `debugger` to trigger a breakpoint.)
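To make that concrete, here's a toy function (my own example, not from any project above) with a `debugger` statement. When DevTools or a node inspector is attached, execution pauses at that line; with no debugger attached, the statement is a no-op:

```javascript
// Toy example: execution pauses at the `debugger` statement whenever a
// debugger is attached; otherwise the statement has no effect.
function sumTo(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) {
    debugger; // inspect `i` and `total` here when paused
    total += i;
  }
  return total;
}

console.log(sumTo(4)); // 10
```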

Another neat feature of this is that you can actually modify scripts in the developer tools when you are prototyping an enhancement, before you solidify the changes in your actual code base. All the changes you make are then visible through the history panel.

2. Debug node program with Chrome

The neat thing with node is you are able to use this same UI to debug a node program. On the command line, you just need to run: node --inspect[-brk] script.js, and you will be given a URL that you can load in your browser. Note, there are two arguments you can use when starting the debug process:


Depending on the type of program, --inspect-brk is not a bad idea, as it will break on the first line of execution.

The console output looks like this:

So, you just need to paste that URL into Chrome, and get debugging!

3. Debug node program on the command line

Node also comes with the debug argument. This does seem to be deprecated, but it still works for now.

So, you run the command: node debug foo.js. This will pause execution on the first line, where you can set break points, continue, step over, and so on. You can evaluate expressions by typing the command `repl` and then the expression you want to evaluate.

Refer to the documentation for the full list of available commands:

4. Debug node program with Atom

Atom is an open-source text editor by GitHub, where certain languages have a package that supports debugging code in the editor - JavaScript being one of those languages. So, go ahead and install the node-debugger package.

After installing, you will want to check the configuration. At a minimum, you will probably want to make sure the node path is valid. For example, by default it expects /bin/node, but I installed node through umake, so mine is located at: /home/trent/.local/share/umake/nodejs/nodejs-lang/bin/node.

To toggle a break point, navigate to the line and then run the command "node-debugger:toggle-breakpoint" - note that the default key binding (f9) conflicts with another package I have (build), so I would need to re-map that binding in order to use it. In the meantime, I just run the command.

You will see the breakpoints highlighted in orange in the gutter. Now we can start the debugger by hitting f5. This will pause execution on the first line and bring up a panel on the right-hand side where we can step over, continue, etc.

And below the editor there is another panel for standard out/error, where we can also evaluate expressions.

5. Debug node with VS Code

VS Code is another open-source text editor in which you can debug node programs. It comes with debug functionality for node on a fresh install. So, if you open up your script and hover in the line number gutter, you will see a little break point icon appear.

So, click in the gutter to enable the break point, and then open the debug panel and click Run.

I find this interface a bit more polished than the one in the Atom example. Add to that, you can hover over variables to see their values, as demonstrated with the `i` variable above - which I personally find immensely useful.

One additional feature of VS code which I won't explore in this article is that you can use the debugger in VS code by having it connect to Chrome. Read more here: