Tag Archives: Oracle

#doag2016 my picks and suggestions


As with many conferences that have evolved over the years, the number of sessions on offer can easily become overwhelming. I have overheard many conference attendees wrestling with their choices of which sessions to attend.

For DOAG 2016 I have put together a short overview with my picks and one or two tip-sessions. I hope this helps, though of course it is just my personal preference…

Please note that this post is based on the printed version of the conference planner and this may obviously be subject to change. Find the on-line version of the conference planner here!

Tuesday, November 15th
08:30
Goto-session
Connecting Oracle & Hadoop by Tanel Poder
Tip
Structuring an APEX Application by Alex Nuijten
Meet your match: Advanced Row Pattern Matching by Stew Ashton

11:00
Goto-session
How to identify the Right Workload for Database In-Memory by Andy Rivenes
Tip
Die Schlechten ins Kröpfchen – SQL analyse für DBAs by Martin Klier

12:00
Goto-session
Was die IT von der Luftfahrt lernen kann by Uwe Küchler
Tip
Using image copies for Oracle Database Backup by Ilmar Kerm
Using SQL Transaction Framework to rewrite Bad SQL on the fly by Kerry Osborne

13:00
Goto-session
Plötzlich Multitenant – was ändert sich für den DBA by Uwe Hesse
Tip
Oracle VM auf Exadata – Erfahrungen aus der Praxis by Christian Pfundtner
Einsatz von Maps in APEX by Denis Kubicek

14:00
Goto-session (TOP-tip)
–> Session got cancelled, but will be at UKOUG!
Patch your application with No Downtime (& No extra Costs!) by Oren Nakdimon
Tip
Hacking Oracle’s memory – About Internals & Troubleshooting by Stefan Koehler

15:00
Goto-session
XML in der Oracle DB by Wolfgang Nast
Tip
PL/SQL Performance – Best practices für Laufzeitoptimierung by Jan Gorkow

16:00
Goto-session
The Oracle Optimizer – Upgrading Without Pain by Nigel Bayliss
Tip
Erfahrung nach einem Jahr Fusion Middleware 12c by Jan-Peter Timmerman

17:00
Goto-session
Active Session History: Advanced Analytics by David Kurtz
Tip
MySQL for Oracle DBAs by Philipp Michaly
Deploying PL/SQL Applications, Building Rome in a Day by Alan Arentsen

Wednesday, November 16th.
08:00
Goto-session
Logical Replication in 12cR2 – What are the options now? by Vit Špinka
Tip
Function madness: Use and Abuse of PL/SQL Functions by Piet de Visser

09:00
Goto-session
Ensuring your Physical Standby is Usable by Michael Abbey
Tip
RMAN – From Beginner to Advanced by Marcin Przepiorowski

10:00
Goto-session
Oracle Secure Backup – eine Livedemo by Sven-Olaf Hilmer
Tip
Oracle Hacking Session by Kamil Stawiarski
Advanced Interactive Grids by Patrick Wolf

11:00
Goto-session
The Battle: Linux vs. Windows by Dierk Lenz, Johannes Ahrends and Martin Klier
Tip
Adaptive Features or: How I Learned to Stop Worrying… by Ludovico Caldara
Controlling Execution Plans – Workshop by Kerry Osborne

12:00
Goto-session
Application Express für den DBA? Geht das? by Joel Kallman
Tip
Und Sie bewegt sich doch by Lothar Flatz
APEX Desktop Apps – Interaktion mit dem Client System by Daniel Hochleitner

13:00
Goto-session
Hash Joins and Bloom Filters by Toon Koppelaars
Tip
Ansible für Oracle DBAs by Alexander Hofstetter

14:00
Goto-session
Delivering Continuous Availability for Database Services by Michael Timpanaro-Perrotta
Tip
Dbvisit – Oder doch lieber Data Guard by Andreas Kother
Chase the Optimizer Every Step of the Way by Mauro Pagano

15:00
Goto-session
Top 7 Plan Stability Pitfalls & How to Avoid Them by Neil Chandler
Tip
Advanced RAC Programming Features by Martin Bach
Weblogic 101 for DBA by Osama Mustafa

16:00
Goto-session
Bad Boys of Replication – Changing Everything by Björn Rost and yours truly

17:00
Goto-session
Oracle System Statistics by Paul Matuszyk
Tip
Compression – Technik und sinnvolle Umsetzung by Klaus Reimers
Node.js der Alleskönner by Kai Donato

Thursday, November 17th.
08:00
Goto-session
FAQ about Masking Sensitive Data in Oracle Database by Maja Veselica
Tip
Data Guard in Oracle 12.2 – Crash Course by Zoran Pavlovic

09:00
Goto-session
Mining the AWR v2: Trend Analysis by Maris Elsins
Tip
Regular Expressions: Say What? by Alex Nuijten

10:00
Goto-session
Databases Clone Using ACFS by David Hueber
Tip
R.I.P. Oracle Database by Markus Lohn

12:00
Goto-session
Writing Efficient SQL Statements by Joze Senegacnik
Tip
Validate User Input in APEX by Richard Martens

13:00
Goto-session
Backup und Recovery PoC auf der Recovery Appliance by Frank Schneede
Tip
Ready, Steady, GIT: Einführung eines Versionskontrollsystems by Carolin Hagemann

14:00
Goto-session
Warum sollte man die Multitenant Database Option Verwenden by Johannes Ahrends
Tip
Collections in PL/SQL by Frank Haney

15:00
Goto-session
Saving Lives at Sea – At an Industrial Scale Using Oracle Cloud Technology by Oliver Limberg and yours truly
Tip
Part 1: The NoPL/SQL and Thick Database Paradigms by Toon Koppelaars and Bryn Llewellyn

16:00
Goto-session
Part 2: The NoPL/SQL and Thick Database Paradigms by Bryn Llewellyn and Toon Koppelaars

And!!
Do not forget…
The first ever APEX Hack’a’thlon is going down on Friday the 18th of November at the DOAG Education day. If you are interested or just want more information, don’t hesitate to drop me a line.


#OOW16, San Francisco

This year, 2016, is turning out to be an amazing year again, with #OOW16 once again being one of the apices!

Looking back

After the discovery of the Oracle community in 2012, as a result of a very first trip to downtown San Francisco in 2010 for #OOW10, an amazing chain of events was set in motion. That very first introduction to the Oracle world was as ‘a mere participant’ in this awe-inspiring, larger-than-life event.

Over these past few years I have met so many people, made so many new friends around the globe… This all literally changed my work, my life; basically everything changed.

After visiting Oracle Open World for the first time, I had the opportunity to work with Arjen Visser and the team of Dbvisit on building a strong brand for this amazing company in Europe. This also brought me back to San Francisco in 2014.
And boy, things have changed!
Not only was it a homecoming, it was a festival of friendship, with so many people to meet, either brand new or a chance to catch up once again. It was also the first time I had the opportunity to participate & share. With #RepAttack I had the opportunity to share knowledge about logical replication and the many benefits it holds for making the most out of your data.
Did I mention the utterly amazing fact of getting not only accepted by the Oracle Community, but also recognized, together with my dear friend from Belgium, Mr. Philippe Fierens, as a genuine Oracle ACE?

A new step

This edition of Oracle Open World, OOW16, again adds a brand new dimension to the visit to San Francisco!
Not only will I be there as the Director Operations of Portrix Systems, supporting the Annual Swim in the Bay event in cooperation with Oraclenerd Chet Justice, I will also be there as a selected speaker. An opportunity I would never have anticipated to be possible.

When Your Database Server Crashes

I will be discussing the various aspects around the protection of data and how you can justify various investments to accomplish this.

Sunday, Sep 18, 10:30 a.m. – 11:15 a.m. | Moscone South—306

I cannot begin to imagine what the impact of this year’s trip will be, but I do know that I am looking forward to meeting many of you again. This year too, the OTN Lounge will be the base camp for the travels through the Open World landscape. Don’t hesitate to stop by and say hi!!

See you in San Francisco for #OOW16

SYSAUX LOB segment for auditing bug not released in Standard Edition

Last week we were struck by an issue, which turned out to be a bite from a bug!
The SYSAUX table-space had quickly filled up to the “my data-file is full” limit, which in the end was fixed by adding a data-file.

The strange thing though: for a database with a very small footprint, we now had a very big SYSAUX table-space.

Some investigation brought me to Unified Auditing, which is active by default in database 12c (you can read up on that background with my friend Ann Sjökvist here).
We were faced, though, with a different (and possibly a little more obscure) bug: Bug 20077418 – RECLAIMING THE SECUREFILE LOB SEGEMENT IN 12.1 Standard Edition.
What this bug boils down to is the following:
There is a lot of audit data recorded by default; the ORA_SECURECONFIG policy runs out of the box. I haven’t taken the time to figure out exactly what is written, where and how, but I know it involves a LOB segment (SYS_LOB0000091833C00014$ by SYSAUD) which, in our case, is HUGE in comparison to the total database size!! The management of this audit data, usually driven by DBMS_AUDIT_MGMT, has absolutely no effect on this segment (at least not on shrinking it).
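To get a feeling for how much space is actually involved, a quick (and admittedly rough) look at dba_segments helps; this is just a minimal sketch, the owners and segment names will differ per database:

-- the ten biggest segments in SYSAUX, audit LOBs included
select owner
     , segment_name
     , segment_type
     , round(bytes/1024/1024) as size_mb
  from dba_segments
 where tablespace_name = 'SYSAUX'
 order by bytes desc
 fetch first 10 rows only
;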

Searching for the mentioned bug, you will only find two EE bugs (18109788 & 22272580), but at least they give _some_ clues… The actual bug is undisclosed and in status 11 (being worked on).
In the end it means that auditing is fine, even in SE, but, for the moment, restrain yourself… The data you gather cannot be managed (yet). And for the rest:

If
select policy_name
from audit_unified_enabled_policies
;

yields any results, consider switching this auditing off (e.g. SQL> noaudit policy ORA_SECURECONFIG;)

Hope this helps…

@HrOUG_2015 in Rovinj, Croatia

In a hectic year it is good to attend and contribute to Oracle user group sessions. It adds an element of a ‘working holiday’ to someone’s schedule. I can promise you, the vacation isle of Rovinj is a perfect venue for this, especially since it is the last week the hotel is open for this season.
Of course you can find all information about contributing to these events right here!!

@HrOUG_2015, as the official Twitter account of the conference is called, brings just this!! Content combined with pleasure, ranging from quality sessions by rock-star speakers to relaxation in the pool and a late-night party in “The Castle”.

Currently the biggest worry is rain… At least for the attendees. As always, the (very) hard-working organizers are doing their best to create a super experience for everyone attending the conference, and my personal biggest worry is whether the participants will actually bring their laptops to the hands-on experience. Actually doing logical replication yourself is so much cooler than seeing it demonstrated. It will be an interesting experience anyhow.

This conference also led to another series of Oracle heroes I got to meet in person!

And as always there is really serious stuff going on as well. One of the main worries today is the developments surrounding Oracle Database Standard Edition Two, and the impact they bring for the development of the European market.
Eliminating the old version forces emerging projects to use the Oracle Cloud, as the super-sharply priced project startup version, which we had with Standard Edition One, is no longer available. It also counters Oracle’s own statement on hybrid cloud functionality, quite recently presented by Andrew Sutherland, since there is no “on-premise” equivalent for a small-scale project anymore!
We are hoping for a good discussion on Friday during the Standard Edition Round Table session at HrOUG, co-hosted by Philippe Fierens, as this development is felt as keenly in Croatia as it is in many other European countries.

If you want to read more about this year’s event in Croatia, please check out the many tweets and Facebook entries by @helifromfinland, @alexnuijten, @roelhartman (ps. vote for Roel as member of the ODTUG board) and many more!!

Oh, and as far as basic life’s needs go… The Internet on the island is the best ever!!

dbms_redefinition housekeeping

dbms_redefinition is a nifty but powerful little toolkit that lets you change table definitions without locking the table in a way that would interrupt regular operations.

You can read loads about it in the Oracle documentation or in the extensive library of Mr. Tim Hall.

One thing I noticed, and which I want to share here, has a lot to do with the housekeeping that is automatically done by dbms_redefinition. Actually, it is about some of the bits it does not clean up after itself.

dbms_redefinition works using triggers and materialized views to help switch from your current active production table, via a so-called interim table, back to your shiny new, redefined production table. You can follow this beautifully by querying the dba_segments view along the way.
For this it obviously creates the materialized view and the other required components, and it removes them after you finish your redefinition trip. After all that is done, you can just drop your interim table and be done with it.
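For reference, a minimal sketch of that flow (assuming a simple redefinition by primary key; the schema, table and interim-table names here are made up):

-- 1. verify the table can be redefined on-line using its primary key
exec dbms_redefinition.can_redef_table('SCOTT', 'EMP', dbms_redefinition.cons_use_pk);

-- 2. start the redefinition: production table EMP, interim table INT_EMP
exec dbms_redefinition.start_redef_table('SCOTT', 'EMP', 'INT_EMP');

-- 3. copy the dependents (indexes, triggers, constraints, privileges)
declare
  l_errors pls_integer;
begin
  dbms_redefinition.copy_table_dependents(
    uname            => 'SCOTT',
    orig_table       => 'EMP',
    int_table        => 'INT_EMP',
    copy_indexes     => dbms_redefinition.cons_orig_params,
    copy_triggers    => true,
    copy_constraints => true,
    copy_privileges  => true,
    ignore_errors    => false,
    num_errors       => l_errors);
end;
/

-- 4. swap the definitions and clean up most of the scaffolding
exec dbms_redefinition.finish_redef_table('SCOTT', 'EMP', 'INT_EMP');

-- 5. what remains is dropping the interim table yourself
-- drop table scott.int_emp;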

At least, that is what happened in most of the cases and is what you would expect!

Though, in some cases… it proved impossible to drop the interim table. To me this was somewhat scary… did the redefinition not finish, or did it not finish correctly?

What happened?

There was this table that I redefined. It had referential integrity constraints (aka foreign key constraints) pointing towards it. Of course dbms_redefinition neatly created versions of these against the interim table, to be sure nothing went wrong.

When finishing redefinition (with dbms_redefinition.finish_redef_table) most of the interim bits and pieces are cleared away and you just have to drop your interim table manually (okay, we can discuss if this actually would / could / should be automated, but let’s leave that).

But… when you are then manually dropping this interim table (in a busy production system, I tend to want to be careful), you would just issue ‘drop table int_<tablename>’. That does not work. dbms_redefinition “forgets” to remove these referential integrity constraints on the other tables (which are neatly named tmp$$_<constraintname>).
This then means either issuing ‘drop table int_<tablename> cascade constraints’, which is more than the basic ‘drop table’, or finding these constraints and removing them manually first:

-- generate DROP CONSTRAINT statements for all foreign keys
-- that still point at the interim table
select 'alter table '||owner||'.'||table_name||' drop constraint '||constraint_name||';'
from dba_constraints dc
where constraint_type = 'R'
and r_constraint_name in
(
select constraint_name
from all_constraints
where table_name = 'INT_<tablename>'
);

-- the generated statements look like this:
alter table <schema>.<foreign table> drop constraint TMP$$_<constraint name>;

I guess, personally, I would like dbms_redefinition to do this for me…

It’s smart enough! It created them!

Just one additional quick note: setting ddl_lock_timeout to 30 or 60 for your session can actually help prevent a lot of nonsense on a busy system.
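A minimal sketch (session scope only; the value is in seconds):

-- wait up to 60 seconds for DDL locks instead of failing immediately
alter session set ddl_lock_timeout = 60;
drop table int_<tablename> cascade constraints;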

Hope this helps someone sometime 😉

Introducing FETCHER in a running replication process

This is no regular bit of work and it will probably (and hopefully) never hit you in a production setup…

The prerequisite is that you know how on-line data replication in general, and Dbvisit Replicate specifically, work.

The following case is true:
I had half of a replication pair running.
It means that the MINE process was running, converting redo-log into PLOG format. The APPLY process had not yet started because the target database was still being prepared.

The reason for this is that we needed to start converting redo-log information to PLOG information while we were setting up the target environment. The reason for that was that the setup (exporting source, copying dump to target and importing) was taking quite a bit of time, which would impact redo-log storage too heavily in this specific situation.

It was my suspicion that the MINE process was unable to get enough CPU-cycles from the production server to actually MINE more redo-log seconds than wall-clock seconds passed. In effect, for every second of redo-log information that was mined, between 1 and 6 seconds passed.

This means that the replication is lagging behind and will never be able to catch up.

To resolve this, the plan was to take the MINE process off the production server and place it on an extra server. On the production server, a process called FETCHER would be introduced. The task of this process is to act as a broker between the database and the MINE process, forwarding the requested on-line and archived redo-log files.

Normally (!) you would use the nifty opportunities that Replicate offers with the setup wizard and just create a new setup. And actually, this is what I used to figure out this setup. And, if you can, please do use this…

Why didn’t I then, you would rightfully ask?

Well… The instantiation process would take too long, and did I mention we were under time-pressure?

  • Setup wizard, 5 minutes
  • The famous *-all.sh script, ~ 1 hr.
  • Datapump Export, ~ 10 hrs.
  • Copy from DC old to DC new,  ~ 36 hrs.
  • Datapump Import, ~ 10 hrs.

So, we could either spend a total of 57:05 hrs., or try to fix this on the go…

Okay, here we go:

Note: cst-migration is the name of the replication project as you specified it in the setup wizard when setting up Replication.

TIP: When setting up on-line replication, it is worth your effort to create separate tnsnames.ora entries for your project, like ‘repl-source’ and ‘repl-target’, across all nodes.
It can get hellishly confusing if you have, as in this case, a database that is called <cst> and is called the same on the source and target server!
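A minimal sketch of what such entries could look like (host names and ports are made up here; the service name follows the <cst> example above):

repl-source =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prod-db-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )

repl-target =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-db-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = cst))
  )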

1. Step one:
We obviously had the ./cst-migration/config directory from our basic setup with just MINE & APPLY. This directory holds (among others) the ./cst-migration/config/cst-migration-onetime.ddc file. This file holds the Dbvisit Replicate repository contents that are needed to run the processes.

From this setup, MINE is actually running. It was from this process that we concluded we were not catching up.

2. Step two:
Now we run dbvrep -> setup wizard again and create a Replicate setup directory with FETCHER and isolate the ./cst-migration+fetcher/config/cst-migration+fetcher-onetime.ddc.

By comparing the two files, I was able to note the differences and thereby determine the changes necessary to introduce a FETCHER process. It is a meticulous job to make sure all the paths on all three servers are correct, that the port numbers are correct and that all the individual steps are taken in the right order. The steps below are the overview.

Having these changes, it is all downhill from now.

3. Step three:
Using the Dbvisit Replicate console, the new entries and the changes were made to the DDC-information stored in the Replicate repository. You can enter these manually or execute your change-file by executing @<change-file-name> inside the console.

4. Step four:
Create the ./cst-migration directory on the system you will use for the relocated MINE process and copy the cst-migration-MINE.ddc and cst-migration-run-source-node.sh into this directory.
Rename the cst-migration-run-source-node.sh to cst-migration-run-mine-node.sh to reduce confusion.
Make sure that the paths mentioned in the cst-migration-MINE.ddc are correct for the system you are starting it on!

NOTE: Please make sure that you can reach both the source and the target database from this node using the tnsnames-entries you have created for the replication setup.

5. Step five:
Rename the cst-migration-MINE.ddc on the source node (!) to cst-migration-FETCHER.ddc and change the cst-migration-run-source-node.sh file to start the FETCHER process instead of the MINE process.

You are now ready to start your new replication processes!

NOTE: If you are running APPLY already, there are some additional things you need to be aware of.

Although it was not the case when I came across this challenge, I am happy to say that Dbvisit have verified and accepted this solution as a supported action.

Hope this helps.

Kscope15, a celebration of tech…

Kscope15 promised to be a brand new experience in more than one way.

As I start to write this report, I am flying from Düsseldorf airport to Atlanta. It will be the first time flying to the United States with a stopover, and thanks to Erik van Roon, I came prepared. With just carry-on luggage, I should end up at my final destination, Fort Lauderdale, Florida, together with my ‘stuff’. I am flying Delta Airlines this time, and for an airline that promises just a ‘lunch’ during the 9-hour flight, they do come up with a lot of food…

My colleagues of FOEX have already arrived at the event-venue and are setting up our booth.

On arrival at Fort Lauderdale airport, I am scheduled to meet-up with distinguished product manager for PL/SQL and EBR, Bryn Llewellyn. From there, we would travel to Hallandale Beach to check into our hotels. This plan was only hindered by sheer force of wind shear at Atlanta International, which delayed my flight.

The first day, the Sunday, started off with a boiling walk to the Diplomat hotel. Upon registration I was pleasantly surprised that FOEX had graciously upgraded my conference pass to a full pass, which is cool as I get to attend sessions! And the kind ladies of ODTUG had even attached an ACE Associate ribbon to my name-tag, of which I am kind of proud.

I had so many cool meet-ups and run-ins at Kscope, and made quite a few new friends along the way…

Of course I spent most of my time in the APEX and database development tracks. If you look at the momentum that APEX is generating, I think we can safely say that we are making a difference… We can say with confidence: #LetsWreckThisTogether!

The “together” bit was beautifully expressed by Joel Kallman as you could hear a pin drop when Carl Backstrom and Scott Spadafore of the APEX team were remembered…

But still there is a lot of work that has to be done to further spread the word on APEX. I guess I have had at least 4 conversations where I had the opportunity to talk about and explain APEX to people who were still oblivious. That is one of the most rewarding things to do.

The week passed so quickly and most experiences are becoming great memories very quickly now. The countless meet-ups with friends and heroes from the Oracle world, the white party at Nikki Beach and the after-party at The Mansion, and of course the Oracle content, which was dished out with great quality.

Just one more thing… Travelling with Uber is the best! I have been doing this in San Francisco and used the service here to get back to the airport. Why would you take a taxi when this service is around? Because of the way it works, the drivers I have met have been much friendlier than regular ‘cabbies’. I would recommend this any day.

So, now I am heading home, hanging in the sky somewhere between Fort Lauderdale and Atlanta. Thinking back on Roel’s blog post on his first Kscope… will this have changed my life? Quite possibly, but on the other hand things could not get much more crazy than they have been over the last 6 to 12 months!!

If you are looking to read up on the business side of things, please check out the FOEX blog!

Please also don’t forget to check out the #Kscope15 hashtag on Twitter and remember, when you are at an Oracle conference, also use #orclconf as an additional hashtag. This will help make it even easier to follow your favorite tech community on-line!

Register redo-log manually with Dbvisit Replicate

For those of you who haven’t been working with on-line data replication: in short, it is a way to copy data from a source database to a target database while both databases are active, and to do this near real-time.
This means that when you enter data in your source database, you can immediately query it from your target database. This makes on-line data replication ideal for numerous tasks, like moving and / or upgrading your database while it is being used, with almost no downtime at all.

This tale is of an actual project that I conducted. I used Dbvisit Replicate as my tool of choice.

Dbvisit Replicate can use a so-called FETCHER process to act as the “long arm” of the MINE process. Mining extracts the information from the redo-log files, but, in specific situations, this can be too much of an overhead for the source database server. By moving the MINE to a proxy server, this overhead can be significantly reduced.

In some cases it can be useful to manually transfer redo-log files to the mining stage directory of Dbvisit.
I came across this requirement when catching up a lot of redo from a RAC database. In this case, the RAC cluster creates two streams (threads) of redo. When starting the replication processes, the first thread is transferred by FETCHER from the source server to the proxy before the second thread is transferred. This means mining will pause until the second thread successfully delivers its first redo-log file. The redo-log information from the second stream is necessary to create consistent and chronologically ordered SQL statements for the target database. In effect, the SCNs from the redo-log information of the first stream need to line up with the SCNs of the redo-log information of the second stream.

In this case, this meant having to wait a day or more before mining could start. This is why I decided to manually copy a number of redo-log files from the source server to the proxy server where the MINE process is running.
After the copy, the files need to be registered in the dbvrep repository. Without this information, the MINE process has no knowledge of the files that are present, nor of their contents.

The update is an easy insert statement, but it should be handled with care, as this needs to be quite precise and it needs a bit of specific information about the redo-log files being added.
You can use the following insert statement to register the files:

insert into dbvrp.dbrsmine_redo_log_history
       (
       ddc_id
     , mine_process_name
     , sequence
     , thread
     , resetlogs_id
     , first_scn
     , next_scn
     , online_name
     , arch_name
     , read_count
     , from_fetcher
     , last_mine_start
     , last_mine_end
     , create_date
     , last_change_date
       )
values
       (
       1
     , 'MINE'
     , 128779 -- sequence number of the copied file;
     , 2 -- assuming you are updating this thread.
     , 804864915 -- the reset-logs id from the redo-log file
     , 199910296688 -- the first scn from the redo-log file
     , 199911476897 -- the next scn from the redo-log file
     , null
     , '/u01/app/oracle/some-big-storage/dbvrep-mine/mine-stage/thread_2_seq_128719.1485.804864915'
       -- full path and name of the file
     , 0
     , 'Y'
     , null
     , null
     , sysdate
     , sysdate
       )
;

And you can get the information you need about the files here:

select lh.sequence#
     , di.resetlogs_id
     , lh.first_change#
     , lh.next_change#
  from v$log_history lh
 inner join v$database_incarnation di
 using (resetlogs_change#)
 where sequence# = 128779
;

After registering the first file for the second thread, in the Replicate-console, you can watch the MINE process kick off. This process will then again halt after the first file of the second stream is processed in parallel with the first file of the first stream.


I kept adding files until the FETCHER process was able to take over, or you could do this until your test-case or PoC is done.

Updating SQL Developer to use newer Java version

I was being teased by SQL Developer.

Every time I started it, it came nagging that it was being forced to live in an old Java version called jdk1.7.0_45 and that it was not feeling happy about it.
So, I should remedy this, I thought to myself.

First visit was, inspired by some search-work on the WWW, a file called product.conf, which offered two possibilities:


SetJavaHome to some logical location
or
SetJavaHome to nothing, and then SQL Developer would kindly ask me to point it to somewhere to live.
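For reference, the first option boils down to a single line in product.conf; the JDK path below is just an example of a macOS-style location, adjust it to wherever your JDK actually lives:

SetJavaHome /Library/Java/JavaVirtualMachines/jdk1.8.0_66.jdk/Contents/Home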

Well… no. My SQL Developer refused it all and just started with this jdk 1.7.

I did the same hack in another file in another location, a file called sqldeveloper.conf.
Same result.

Freshly downloaded SQL Developer, put in place… No help!

Erm…

Rename
drwxr-xr-x  3 root  wheel  102 Jan  6  2014 jdk1.7.0_45.jdk
in /Library/Java/JavaVirtualMachines
to
drwxr-xr-x  3 root  wheel  102 Jan  6  2014 xxx-jdk1.7.0_45.jdk

Nope! Still the same nagging…

What now?

In the end, I wound up with one of Jeff Smith’s helpers.
This guy asked me to “start SQL Developer from the command line”. Right, but how?

So I finally found:
/Applications/SQLDeveloper.app/Contents/MacOS/sqldeveloper.sh

And that did start SQL Developer from the command-line…

But… wait… an .sh-file!! Interesting!!

And, behold… in this .sh-file lies the answer.

So the file reads:
export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
Which I hacked to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`

And, presto, error-message gone and SQL Developer now happily lives in Java 8.

Hope this helps somebody out!!

OUGN15, The “boat conference” revisited

Jan at shipsport
Reflections on OUGN

Sometimes things in life can change quickly! It is only two years ago that I came to Oslo for the first time to join the Scandinavian Oracle crew on a boat trip to Kiel.
At that time I had never actually participated in this kind of experience and I wasn’t into presenting either. Together with my good friend Philippe Fierens I discovered a whole new world back then. You could have read about these experiences in some blogpost, but this was lost in the move to my own site, sorry!

And this trip couldn’t have been more different! With three presentations accepted, the two days at sea will be a reunion with the friends I have made over the last years, as well as a way to contribute to one of the most tight-knit tech communities I know. And all this in a scene that I remember vividly from being a newbie… which is somewhat strange, believe me.

After a quick and pleasant flight I touched down in Oslo, flying from Amsterdam with a decent-sized crew of Dutch Oracle enthusiasts, including my good friends Patrick Barel and Alex Nuijten. Waiting at the Oslo airport for Luís Marques, I caught up with Gurcan Orhan, which was a great surprise.
Later that day we found ourselves in the Oslo harbor for the speakers dinner. You can imagine the collective amount of Oracle knowledge packed into that one restaurant!

frits
Enkitec’s Frits Hoogland on Ansible

After a somewhat restless night we arrived, on Thursday morning, at the ship Color Fantasy in the company of Heli Helskyaho, just in time for the keynotes. It was good to see Mark Rittman and James Morle made it on board too, especially as James was up for the delivery of version 2.0 of his vibrant keynote! Next we proceeded to bring our luggage to our cabins and grab a spot of lunch on the exhibition floor down in the belly of the ship. The setup of the exhibition was quite nice and gave a good opportunity to mix and mingle.
The afternoon was spent on sessions, where I visited Frits Hoogland’s Ansible talk, and on preparation for my own session at 18:00. This was the last run of this APEX presentation, as I have retired it after OUGN15. The slides will be archived here.
After finalizing the preparation for the third edition of the Standard Edition Round Table (aka “slide polishing”) with the #orclSERT team, comprising Ann Sjökvist, Philippe and myself, it was time for the soiree and for dinner in the grand restaurant on board. It was a good first day!

Diner
Dinner with the international crew on board the Color Fantasy.
Gin-tonic
Warm reception at Kiel port.

The second day of OUGN15 started with a multitude of sessions including the third edition of the Oracle Standard Edition Round Table, which was actually quite busy and interactive. We had some good discussions, and that at 09:00, so thank you, everybody.
Of course, as was declared a tradition, Björn Rost was present in the Kiel harbor. With the famous “Basil smash Gin & tonic” and sandwiches we were welcomed on German soil.
My afternoon comprised 3 sessions, starting with my own, called “Okay, and now my database server crashed…”, which was quite nicely received. Next was Alex Nuijten on 12c new features for developers, topped off with Tim Gorman, who taught us to be CSI people when finding issues in the database.
After an enjoyable evening in the various bars and discotheques of the ship, we closed the official part of the Oracle User Group Norway Vårseminar 2015, thanking the board, and of course especially Øyvind Isene, for their hard work.

If you want to catch up further on the unconference communications surrounding this event, please do check out the Twitter hashtag #OUGN15. This will also include a great set of snapshots and pictures taken along the way…

Oslo, until the next time!