Tuesday, 2 October 2018

Acting when the user pastes an image

I recently saw a question about how to act when someone pastes an image (from clipboard data). There is a plugin that can handle that, storing the image into a table, but I was interested, from a purely academic perspective, in how this would work without using the plugin.

So, taking a look at MDN, I can see that there is a 'paste' event that you can listen to. See the documentation here: https://developer.mozilla.org/en-US/docs/Web/Events/paste.

OK, so let's give this a whirl! I'm not going to cover loading this into a table/BLOB column, as it's been covered before. What we'll do is create a region with an image element; this will show our image as we paste. This can easily be extended to load the image into a canvas, into a table, or whatever else your heart desires!

Firstly, the region to show what the user pastes:


Next, we need a dynamic action. This will be a custom event ("paste") with the selector being the body.


Our True Action will be "Execute JavaScript Code" with code resembling:

function setImage(evt){
    var dataUrl = evt.target.result;
    $('#imageMagick').attr('src', dataUrl);
}

var pasteEvt = this.browserEvent;
var pasteItems =
    (pasteEvt.clipboardData || pasteEvt.originalEvent.clipboardData).items;

for (var i = 0; i < pasteItems.length; i++){

    var item = pasteItems[i];
    if (item.kind === "file"){
        var pasteBlob = item.getAsFile();
        if (pasteBlob.type === "image/png"){
            var reader = new FileReader();
            reader.onload = setImage;
            reader.readAsDataURL(pasteBlob);
        }
    }
}


Now, when you paste, the image will get loaded into the region, with the img placeholder we set up. Neat! You may wish to cancel event propagation, but I'll leave that up to you to experiment with depending on your needs.
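If you do want to cancel the default behaviour, it could look something like the following sketch. The handler name is illustrative (in the dynamic action above, `this.browserEvent` plays the role of `evt`):

```javascript
// Sketch: stop the browser's default paste handling once we've consumed
// the clipboard data. handlePaste is an illustrative name, not an APEX API.
function handlePaste(evt) {
    // ... read evt.clipboardData as shown above ...
    evt.preventDefault();   // cancel the default paste action
    evt.stopPropagation();  // stop the event bubbling further up the DOM
}
```

Whether you want this depends on your page: cancelling the default matters most when the paste target is an editable element.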

If you want to try out this specific example, I have one set up here: https://apex.oracle.com/pls/apex/f?p=14882:32


Sources used in the making of this example:

- https://developer.mozilla.org/en-US/docs/Web/Events/paste
- https://stackoverflow.com/questions/6333814/how-does-the-paste-image-from-clipboard-functionality-work-in-gmail-and-google-c

Sunday, 30 September 2018

SQL*Plus, Node and Docker

This post is a continuation of my post from earlier in the week, in which I showed how to set up SQL*Plus through a Docker container that Oracle provides in their container registry.

In that post, I set up the sqlplus command through an alias. Almost all examples I've seen in the past use an alias when setting up a command line program that is in effect a docker container.

However, I use an extension in VSCode that compiles my code, which in effect calls sqlplus and runs my script. Even though I was able to connect in gnome-terminal and use SQL*Plus effectively, in my extension no output was coming through.

My code looked like this:

var ChildProcess = require('child_process');
 
var sqlplusArgs = ["vmtest/vmtest@//192.168.0.104/xe", "@test.sql"];
var sqlplusProcess = ChildProcess.spawn("sqlplus", sqlplusArgs);

sqlplusProcess.stdout.on('data', (data) => {
    console.log( data.toString() );
});


The first issue here is that this is only set up to display the standard output channel; we also need to give the user feedback when any errors occur. So we can tweak this slightly to become:

var ChildProcess = require('child_process'); 
 
var sqlplusArgs = ["vmtest/vmtest@//192.168.0.104/xe", "@test.sql"];
var sqlplusProcess = ChildProcess.spawn("sqlplus", sqlplusArgs);

function printOutput(data){
    console.log(data.toString());
}

sqlplusProcess.stdout.on('data', printOutput);
sqlplusProcess.stderr.on('data', printOutput);


So just a reminder, the alias command looked like this in my previous post:

alias sqlplus='docker run -ti --rm container-registry.oracle.com/database/instantclient sqlplus'

So now when we try to run this little Node demo program, we get an exception:


events.js:167
      throw er; // Unhandled 'error' event
      ^

Error: spawn sqlplus ENOENT


So, it looks like Node isn't finding the alias we set up; per this logged issue: https://github.com/nodejs/node/issues/3916, bash doesn't expand aliases in non-interactive shells.

OK, so instead of using an alias, we should consider wrapping our docker command in a shell script. So we end up with a script like:

#!/bin/bash
docker run \
    -it \
    --rm \
    container-registry.oracle.com/database/instantclient sqlplus \
    "$@"


I've already set this up on my path and removed my alias, so I can go ahead and try my little node program again. This time, we get the following error:

the input device is not a TTY

This is an easy fix. In our original docker run command we passed in the arguments -it: -i keeps STDIN open (interactive) and -t allocates a pseudo-TTY. Since there is no TTY in this scenario, we can safely remove the -t from our shell script. So, line 3 in the shell script above would just read: -i \.

At this point, we are not entirely ready, because with SQL*Plus we want to be able to compile scripts. Suppose I'm within the folder /home/trent/Projects/blog/ and have a script test.sql. If I launch sqlplus, it won't have access to those files, which you will see if you try to run:

sqlplus vmtest/vmtest@//192.168.0.104/xe @test.sql

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SP2-0310: unable to open file "test.sql"
SQL>


So, the solution to this is in two parts. Firstly, exposing your files into the container. Personally, I think it makes enough sense to just expose your entire home folder, which, at least for me, is most likely where all my source folders will be stored. So, I append the following flag to my docker run command:

-v $HOME:$HOME

This alone will be perfect if I want to compile with absolute paths, e.g.

sqlplus vmtest/vmtest@//192.168.0.104/xe @/home/trent/Projects/blog/test.sql

But usually you will probably just be specifying a relative path. So, we need to set the working directory. The docker run command has a flag for this too, so we can add in:

-w $(pwd)

So finally, the docker run command in our shell script would look like:

#!/bin/bash
docker run \
    -i \
    --rm \
    -v $HOME:$HOME \
    -w $(pwd) \
    container-registry.oracle.com/database/instantclient sqlplus \
    "$@"


So that brings us to the next point: performance.

You are surely aware by now that Oracle has a newer command line tool for connecting to the database, named SQLcl, which is effectively a drop-in replacement for SQL*Plus.

So after my last post, there was a reply from Jeff Smith:



One of the reasons I advocate having SQL*Plus installed is the plugin I already mentioned that I have in VSCode (there are similar ones for Atom and Sublime), which makes use of SQL*Plus (or SQLcl) to connect to the database and compile the active script/file. Unlike, say, SQL Developer, the connection isn't kept alive; it purely launches SQL*Plus (or SQLcl) to compile the script, and then exits. And the startup time is so much faster with SQL*Plus: from my experience, SQL*Plus is well under 1 second to start up and run the script, whereas SQLcl takes at least 3 seconds.

Since this post, and the last, were about running SQL*Plus through a docker container, a co-worker replied:


In my initial testing, it was still very fast to start up. But don't take my word for it; let's get some hard facts. We can make use of the time command to start up each environment and immediately issue an exit statement, to see how long the round trip takes in each case.
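One way to run such a test is to pipe a single exit statement into each tool under time. A sketch, reusing the connection details from earlier in the post (the relevant binaries must be on the PATH; `sql` is the SQLcl launcher):

```shell
# Sketch: measure round-trip startup by piping an immediate "exit" into
# the tool. The connection string is the one used earlier in this post.
measure() {
    echo exit | "$@" > /dev/null
}

time measure cat   # trivial sanity check of the harness
# time measure sqlplus vmtest/vmtest@//192.168.0.104/xe
# time measure sql vmtest/vmtest@//192.168.0.104/xe   # SQLcl
```

The user times reported by time for each tool are what the figures below refer to.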

1. SQL*Plus installed natively


3 tests produced: 0.010s, 0.003s, 0.009s (in that order) for user time


2. SQL*Plus running through a Docker container


3 tests produced: 0.073s, 0.065s, 0.065s (in that order) for user time (a little longer than a native installation though probably not enough of a difference to be noticeable by the user)

3. SQLcl


3 tests produced: 3.513s, 3.399s, 3.560s (in that order) for user time

So as you can see, there is not a lot of difference between SQL*Plus installed natively and SQL*Plus running through a container.

Wednesday, 26 September 2018

Setting up SQL*Plus with the help of an Oracle Docker container

Working with Oracle databases, I like to install the Oracle instant client; probably the biggest reason is that it gives me access to SQL*Plus. Personally, I think the installation is very straightforward, but the number one complaint I hear is that it's too difficult to set up.

So, how can you get access to SQL*Plus without this "difficult" installation process?

Say hello to the Oracle instant client docker container.

If you head over to https://container-registry.oracle.com and navigate to the database section, you will notice 3 repositories:



So, you will see the second one in the list is named "instantclient" - exactly what we need!

Side note: this container is a little out of date, having last been updated over a year ago, but I'd say the latest version isn't crucial for most needs.

So, if you click into instantclient, you will need to sign in first so that you can accept the license agreement. You will also need to sign into the registry from your shell (with docker login). Once that is done, we can grab the image with:

docker pull container-registry.oracle.com/database/instantclient:latest

With that installed, you will now want to create an alias on your OS. So, go ahead and run the following in your shell:

alias sqlplus='docker run -ti --rm container-registry.oracle.com/database/instantclient sqlplus'

Now, if you run:

type -a sqlplus

You should see that SQL*Plus is now aliased to run that docker container.

I happen to have a VirtualBox VM running with an Oracle XE database, with the schema vmtest. It is on my local network with the IP address 192.168.0.104. So, now if I run:

sqlplus vmtest/vmtest@//192.168.0.104/xe


I'll be connected to the database, which the following screenshot demonstrates.



How simple is that! I didn't need to go through any rigorous process to install the instant client, and I now have SQL*Plus at my disposal.

Sunday, 16 September 2018

Clearing error state of an Interactive Grid column


On a recent project, there was a requirement that when users save data, no validations should come into play; only when they go to submit the transaction should validation errors appear.

So I had a set-up where the Save button (AJAX) was removed from the interactive grid toolbar, and the page has two buttons: "SAVE" and "SUBMIT".


Following this design pattern, if I click SUBMIT with a validation failure in the data, the relevant row would be highlighted in the interactive grid:


Now as a user I decide I don't want to submit the page anymore, but rather just do a save.

However, clicking save, I won't be able to get the page submitted without changing the value of that field (or, as an alternate situation, the validation may depend on some other page item, which I could have corrected, yet the error state of the grid row will still be present). APEX will not submit the page, and I would get an error saying that I need to correct any errors first.


So, what we need to do is change the button behaviour to first clear the error state to ensure I can submit the page.

If you are using interactive grid, you hopefully know it is built up of multiple components, and one such component is the model (apex.model). When our validation caused a failure, the model flagged the record as not valid. So, before we submit/save the page, we need to clear that validity state, which will enable the page submission to occur.

So, rather than a straight submit button, we can alter our button to run the following JavaScript code (submitting the page only after our data is set to a valid state):

var gridView = apex.region("emp").widget().interactiveGrid("getViews", "grid"),
    model = gridView.model;

model.forEach(function clearIgErrorState(record, recordIndex, recordId){
    model.setValidity("valid", recordId, "COMM");
});

apex.page.submit("SAVE");

In this example, I am specifically making it so that the "COMM" column is no longer flagged as having an error.

For this function's spec, refer to the documentation to see what other inputs it accepts.
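If more than one column can end up flagged, the same loop extends to a list of columns. Below is a self-contained sketch using a stand-in model object, purely to show the shape of the iteration; the column names are illustrative, and in APEX the real model comes from the interactive grid view as shown above:

```javascript
// Stand-in model, only to illustrate the forEach/setValidity pattern.
// In APEX, model comes from:
//   apex.region("emp").widget().interactiveGrid("getViews", "grid").model
var model = {
    _records: [{ id: "7369" }, { id: "7499" }],
    _validity: {},
    forEach: function (callback) {
        this._records.forEach(function (rec, i) {
            callback(rec, i, rec.id);
        });
    },
    setValidity: function (state, recordId, column) {
        this._validity[recordId + ":" + column] = state;
    }
};

// Clear the error state on every record for each column of interest
var columnsToClear = ["COMM", "SAL"];   // illustrative column names

model.forEach(function (record, index, recordId) {
    columnsToClear.forEach(function (column) {
        model.setValidity("valid", recordId, column);
    });
});
```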

Thursday, 12 July 2018

TypeScript and APEX

TypeScript is a typed superset of JavaScript. That is, you can use the familiar JavaScript syntax you are used to, but where it gets enhanced is the fact that it can be strongly typed to give you compile time warnings.

The classic example is the sum of two numbers. A typical JavaScript function would look like:

function add(num1, num2){
    return num1 + num2;
}


By looking at this code, from the function name and the return statement, we can see that the idea is to add two numbers together. In JavaScript, depending on what the caller passes in, it could instead do string concatenation via implicit type casting.

So if you do:

add(1,2)

This will return 3

If you do:

add("1", 2)

This will return "12" (string concatenation).
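This behaviour is easy to verify in any JavaScript console; a quick sketch of both calls:

```javascript
// The untyped add: with a string argument, + performs concatenation
// rather than numeric addition.
function add(num1, num2) {
    return num1 + num2;
}

console.log(add(1, 2));     // 3
console.log(add("1", 2));   // "12" (string concatenation)
```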

Since we want this function to always add two numbers, using TypeScript, we can change the definition to be:

function add(num1: number, num2: number): number {
    return num1 + num2;
}

Now the TypeScript compiler is always going to expect the inputs to be numbers. So in our TypeScript file, if we issue the same statement:

add("1", 2)

The compiler is going to complain and you won't be able to transpile into JavaScript code.



What's really neat is that a colleague of mine, Adrian Png, has gone ahead and created a TypeScript definition of the APEX JavaScript API reference (a work in progress). Check out his project on GitHub. The project provides documentation on how to start leveraging the good work he has done, but for completeness, the basic steps are:

1. Ensure your project has been initialized with npm. Run:

npm init

...and follow the on-screen prompts.

2. Install the library to your project folder by using npm:

npm install --save-dev https://github.com/fuzziebrain/orclapex-js.git

What this does is install the relevant node modules and update your package.json with the relevant dev dependencies.

3. Add a tsconfig.json file to your project:

{
  "compilerOptions": {
    "module": "commonjs",
    "lib": [
      "es6",
      "dom"
    ]
  }
}

4. Finally, add the following line to your TypeScript file:

/// <reference types="orclapex-js" />
 
Now, one of the JavaScript APIs that has been done is the `apex.da` namespace, which has a function named `resume`. I'm using VSCode, so after adding that reference, there is also some code completion. Typing apex.da.resume gives me this hint:



So not only do we get parameter names, we also get to know the expected types. It's worth noting that if you don't add the reference, or there are parts of the API that haven't been covered yet, the TypeScript engine will complain about not knowing about that object.

OK, so now with that covered: how can we use TypeScript in our APEX projects?

It's not immediately clear from the apex-nitro documentation (credit to Adrian Png for adding TypeScript support into Nitro), but if you navigate into the examples folder you will find a demo-typescript folder. Great: we can leverage apex-nitro to develop TypeScript libraries for our next APEX project. If you don't have apex-nitro installed, install it with npm by running:
 
npm install apex-nitro -g
 
After you install, run apex-nitro config to set up your project. It should end up looking like this:



The key thing is that the JavaScript processor is set to TypeScript. Save that. Now, following the apex-nitro set-up guide for your application, you just need to do a couple of tasks.

1. Create a new before-header application process with a negative sequence so it's the first thing that runs. Set the condition to:

owa_util.get_cgi_env('APEX-Nitro') is not null

...and the source to:

apex_application.g_flow_images := owa_util.get_cgi_env('APEX-Nitro');

2. Now, you will want to add a reference to your compiled script. Despite the fact we developed in TypeScript, the output will be JavaScript. So go ahead and, somewhere in your application, add the following reference:

#APP_IMAGES#js/app#MIN#.js

Now, this was a really quick and dirty example of using apex-nitro. For the full options and usage, you should read the docs.

Additional resources:

TypeScript website: https://www.typescriptlang.org

There is a good article on FreeCodeCamp that is worth having a read of: https://medium.freecodecamp.org/when-should-i-use-typescript-311cb5fe801b

Friday, 4 May 2018

Git branch information in your bash prompt

I saw a post the other day by Barry McGillin about improving bash to include repository information; see here: http://barrymcgillin.blogspot.com.au/2018/04/making-git-cmd-line-fancy-ish.html. I had seen similar behaviour using zsh, specifically when I tried out the oh-my-zsh project, but I always find myself turning back to bash.

So, I started doing some searching on how to achieve this in bash to see what other options there were, and I found a few articles with basically the same solution as in the post above. Then I came across the fact that Git itself provides a script that can be leveraged to accomplish the same. You can view the script hosted on GitHub: https://github.com/git/git/blob/master/contrib/completion/git-prompt.sh

The header of the script includes instructions on how to leverage it, which in a nutshell are: place the script somewhere on your system, source it within your .bashrc file, and update your PS1 variable, again in .bashrc, to reference __git_ps1.

Since I already had git installed, I thought I'd check whether the script already exists in my installed version. So I ran the following search:

trent@birroth:~$ dpkg -L git | grep prompt
/etc/bash_completion.d/git-prompt
/usr/lib/git-core/git-sh-prompt

Digging into the script, I see that /usr/lib/git-core/git-sh-prompt is effectively the same script referenced above. I also found I didn't need to copy it anywhere, so it must already be sourced elsewhere.

Within that script, the suggestion is to make PS1 look like:

PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '

What I didn't like about this is that it only includes the current folder you are in, whereas my original PS1 prompt includes the full path, or the path relative to home when within my home directory. So, I found that I can just append $(__git_ps1) to my existing PS1 variable to include branch information whenever the current folder is within a Git repository.

My default PS1 includes colours, so it is quite a bit longer; with this included, my full PS1 became:

PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]$(__git_ps1)\$ '

The above is actually in a conditional block, with another branch for the non-colour option; I'm sure you can figure out how to add the relevant section in.

With that, my prompt became:



Which shows the branch I'm on.

Some other options that provide more feedback about the state of the branch/folder, as described in the script:


  • GIT_PS1_SHOWDIRTYSTATE: shows a * for unstaged files; a + for staged files
  • GIT_PS1_SHOWSTASHSTATE: shows a $ if something is stashed
  • GIT_PS1_SHOWUNTRACKEDFILES: shows a % if something is not being tracked
  • GIT_PS1_SHOWUPSTREAM: with a value of auto, shows < when you are behind, > when you are ahead, <> when you have diverged, and = when there are no differences. There are some more options for this, but I suggest you review the script linked earlier for more details.
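To enable these, export the variables (typically in your .bashrc) before the prompt is drawn; for example:

```shell
# Enable extra status indicators for __git_ps1 (set these in ~/.bashrc).
# Any non-empty value turns the first three on.
export GIT_PS1_SHOWDIRTYSTATE=1
export GIT_PS1_SHOWSTASHSTATE=1
export GIT_PS1_SHOWUNTRACKEDFILES=1
export GIT_PS1_SHOWUPSTREAM=auto
```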

If you're not in a git repository, your prompt will just display without the Git portion as you would hope and expect.