Category Archives: Oracle tips

Idempotent, a parameter with adverse effects


For creating and distributing reports we have long had the power of Oracle Reports at our fingertips. Those same report definitions can still be used successfully today.

With all the technical changes over the years, we have come to a point where Oracle Reports Server is now part of the Oracle Fusion Middleware proposition. It is a (small) container, application or component of this software stack, loaded into an application server and deployed there.
Although this takes Oracle Reports Server into the modern age, it adds a lot of complexity for smaller implementations, and a lot of tweaking and tuning is needed to get it all running smoothly. I would like to address one of these examples here…

We were faced with a situation where we were running an e-mail distribution report, sending out specific information to a multitude of e-mail addresses. This report happened to run on a system that was somewhat overloaded. That is not good, but it is not always completely predictable or avoidable either.
The report running very long was actually not a big issue, as these individual e-mails were just informational and not time-bound on a minute scale.

The problems started when we got word that some recipients were receiving multiple copies of these e-mails! Which was sloppy at best…
Wait… multiple e-mails from a job that is running slow… That’s odd, they should be getting either a late e-mail or perhaps no e-mail at all! Not a lot of e-mails, that is contradictory!

This called for an investigation.

Finally we stumbled across a technology designed to do no harm; I would like to call your attention to idempotence!
Funny thing is, this setting is not part of Oracle Reports Server, but of another component used in this complex stack, Oracle HTTP Server (OHS).

  • When Idempotent is set to ON and the servers do not respond within WLIOTimeoutSecs (the new name for HungServerRecoverSecs), the plug-ins fail over.
  • As stated on Wikipedia, an idempotent operation is one “that can be applied multiple times without changing the result beyond the initial application”.

Obviously, this was not the case in our situation! This idempotent operation was applied multiple times and it did change the result far beyond the initial application: up to 40 e-mails per recipient on one specific occasion.

In effect, what happened was that the Oracle Report, which was called through a URL and did not complete within WLIOTimeoutSecs, was simply restarted. The URL that started the process was called again from within the Oracle Fusion Middleware stack, starting a new run of the e-mail distribution… over and over again.
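
To illustrate the mechanism, a purely hypothetical example of such a kick-off URL (host, Reports Server name, report and credentials are made up); it is exactly this request that gets replayed after the timeout:

http://reportshost.example.com:8888/reports/rwservlet?server=rep_myserver&report=mail_distribution.rdf&userid=appuser/secret@proddb&destype=mail&desformat=pdf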

We resolved the situation by two completely different actions.

  1. We made sure this operation was not run on a system that was too busy to handle the load.
  2. We made sure Oracle Fusion Middleware wouldn’t get the crazy idea to re-run this operation again.

Number two was fixed by adding the following instruction to the installation guide (and actually carrying it out too):

Toggle parameter IdemPotent to ‘off’ in reports_ohs.conf in:

<drive>\oracle\middleware\asinst_1\config\OHS\ohs1\moduleconf\reports_ohs.conf

This can be accomplished through Enterprise Manager:

Webtier -> ohs1 -> Oracle HTTP Server -> Administration -> Advanced Configuration -> Choose a file -> reports_ohs.conf

Add the following line to the body:
## Added <name> <date>
IdemPotent off
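
For context, a minimal sketch of what the relevant mod_wl_ohs section in reports_ohs.conf could look like afterwards (host, port and timeout are assumed, illustrative values, not a definitive configuration):

<Location /reports>
   SetHandler weblogic-handler
   # assumed host and port of the WLS_REPORTS managed server
   WebLogicHost reportshost.example.com
   WebLogicPort 9002
   # give long-running reports ample time before the plug-in gives up (assumed value)
   WLIOTimeoutSecs 1800
   ## Added <name> <date>
   # never replay the report request after a timeout
   IdemPotent off
</Location>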

Our problems are gone now!


Communication limit with mod_plsql through PlsqlMaxParameters

There is always a lot to do when you run your page. And, somehow, these days there are some default limits to the amount of work you are supposed to do.
Okay, but what does this mean…

In an application, we have a screen to create some appointment between a bunch of people. This bunch can become quite substantial indeed.
In those days we were running Oracle HTTP Server 9i to host mod_plsql, and we never had any issue creating these somewhat larger appointments in our application.

Recently we got a call at our service desk reporting issues with creating an appointment with a larger number of participants. Somewhere in the error text, the message read:

HTTP-400 Too many arguments passed in. Got 2009 parameters. Upper limit is 2000

Okay, somehow this application function is not going to work!

Upon research we identified the parameter PlsqlMaxParameters as the problem. This parameter was introduced with the first version of Oracle HTTP Server right after 9i. If unset, PlsqlMaxParameters defaults to a value of 2000.
I am guessing here, but my bet would be this to be a restriction to prevent a buffer-overrun of some kind.
To resolve the issue, PlsqlMaxParameters needed to be added to plsql.conf. (We tried to do this in dads.conf to keep most of Oracle HTTP Server unaffected, but that only produced errors.)
We chose the value of 4000, meaning a 100% increase while maintaining a fair and safe limit.

For Oracle HTTP Server, the parameter is set in plsql.conf which can be found in either:

  • $ORACLE_BASE/Oracle_WT1/instances/instance1/config/OHS/ohs1/moduleconf
  • $ORACLE_BASE/ohs/Apache/modplsql/conf
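
A minimal sketch of the addition we made to plsql.conf (4000 is simply the value we chose above; pick what fits your environment):

## Added <name> <date>
# raise the mod_plsql limit on the number of request parameters (default 2000)
PlsqlMaxParameters 4000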

For Oracle EPG, the parameter can be set as follows:

  • dbms_epg.set_global_attribute('max-parameters', '100');

And this is where it gets tricky!

Currently we are leveraging the Oracle 11g Standalone HTTP Server (11.1.1.7), so without the bulky and difficult WebLogic overhead, and we thought we ought to test this installation with these big appointments.
As expected, we received an error message, so the behavior is consistent in that respect. The scary bit, though, is that the error message is no longer a real error message but a hint to go searching in the woods…

The request could not be understood by the server due to malformed syntax.

Which could mean virtually anything, and if we hadn’t run into the previous error, we’d have had a hell of a time trying to resolve this.
Luckily the stage was set and we knew what we were searching for and testing, so we quickly resolved this too by adding PlsqlMaxParameters.

I hope this post will help you troubleshooting this catch-all message too!

Update on July 25th 2014
As we were running into problems anew, I decided to call upon Oracle Support too… Raise an SR and see what comes floating to the top.

Well, a confirmation of what we already concluded, there is no conclusion…

I'm not able to determine what the maximum value is that you can set this to as it does not appear to have been documented in the bug, and the parameter has not been documented at all in the product documentation. I don't have access to the source code, so I'm not able to see if a hard limit has been set or not.

Raising PlsqlMaxParameters is not considered to be a definitive solution. Which kind of sounds logical, since where you need a page call with more than four thousand (!) parameters (or even 2,000, when you think of it), you would think something could be optimized.

And the suggestions of MOS, which I will support:

  • I would suggest you raise the value to what you need it to be for your environment. If there is a hard limit in the code, and you configure this above that limit, then you will get an error.
  • I would then suggest you look into your application and work to reduce the number of parameters as per the recommendation of the developer of the parameter.

Visiting User Experience Event 18|3|2014

Today I had the privilege to visit the Oracle UX team from the USA. This special event was hosted by Amis Services (@AMIS_Services) and my visit was with Michel Koolwaaij, my esteemed colleague from VIR e-Care Solutions.

The event was very well attended by a lot of enthusiastic people: students, new developers and experienced seniors alike. I also got to meet a whole bunch of super interesting people (again), like:

  • Ultan Ó Broin (@ultan)
  • Patrick Barel (@patch72)
  • Noel Portugal (@noelportugal)
  • Lucas Jellema (@lucasjellema)
  • Lonneke Dikmans (@lonnekedikmans)
  • Mark Vilrokx (@mvilrokx)
  • Aylin Uysal

Through this post I would like to share some of what I picked up from the presentations and demos I went to and key learning points I figured out for myself.

Demo of Oracle Voice (by Mark Vilrokx)
Oracle Voice is a solution based on Siri, powered by Nuance, which in effect is a super lightweight front-end interface for voice-controlling Oracle Fusion Apps. The actual voice recognition and lexicon integration is done on the Nuance back-end.
A personal fun thing to find out was that the technology is again based on the work of the Belgian speech specialists of Lernout & Hauspie, whom I got to meet over 10 years ago!

Demo of Oracle mobilitics
Basically this was a demonstration of one of my key take-aways for the day.
These days you, as a classical developer, are challenged to step back, forget “grabbing data and throwing it into a grid or master-detail” and think about how you would “interact” with the data you have in your system.
If you think about it, you would not really want to scroll through master-detail… You want to visualize your data, so it becomes something more tangible and give you an overview with the ability to drill down or zoom in.
The “Designing for Mobility & Simplicity” talk of Aylin Uysal dove deeper into this.

Presentation UX directions with HTML5 by Mark Vilrokx
For me this was somewhat of a confirmation, strangely enough. Basically HTML5 is used as a rapid application development framework for Oracle ADF applications. In effect this means that an Oracle APEX development environment supersedes Oracle ADF in both speed and diversity of application development. 🙂 End of sentence.

Presentation Designing for simplicity by Aylin Uysal
Stressing that person-to-person collaboration is still super important, also (and perhaps especially) for UX design. Organize several sessions with different stakeholder groups, since UI design differs per user (or stakeholder) category.
Information abundance in classical UIs is to be replaced by minimal-data UI design. Presenting less data, but better (more visually), strongly improves the user experience!

Presentation Wireframing 101 by Ultan Ó Broin
Wireframing, in this context, was new to me. Create a low-fidelity “sketch” of what you want; don’t prototype anything yet! Create different wireframes of applications and application flows to prevent “squeak and poop” behavior from management or customers when presenting UX designs in wireframe format. A wireframe is not a nicely worked-out app, which makes it difficult for outsiders to judge.
A nice example of such rapid prototyping is the way Google Glass is being developed.
A good tool for digital wireframing (but also just for that) is Balsamiq, which the Oracle UX team uses as its preferred wireframing tool.

Presentation One picture worth a thousand words by Lucas Jellema
In this presentation of more pictures than words, Lucas gave some very cool examples of how pictures are indeed able to transfer much more information than words. An inspiration to use when you are UX-ing.

Visit to the Chamber of Secrets
I am so sorry, I had to sign quite an NDA before being let in… Please visit your local Oracle UX-session!

So, what are the key learning points:

  • Step back, and think free-flow about how you would consume information. Unthink current UI design and… step back!
  • Less is more, also in UI design. User experience is about getting users to what they NEED as quickly as possible.
  • APEX is a viable development tool, in any situation!
  • User Interface design is becoming a serious trade, a trade to take into account.
  • Watch out for those InfoTiles!

A special “Thank you” to Wieteke Gaykema from AMIS, who worked like crazy and still got me into the Chamber of Secrets, even though I was shamefully late with my NDA!

Oracle XML DB content easily moved

We have this application where we just store some specific content in an XDB-schema.

After a quick move of the Oracle database from a legacy system to a new environment, we found that the XDB-schema and its contents were not moved. Okay, this is what happens when you use “good old” imp/exp instead of “newer” technologies like RMAN or expdp.

What now? We could start the entire move again (but that would mean downtime for recreating the database, amongst other things) or we could do a specific move of the XDB-schema (but meanwhile new content was already being added to the system). None of these are the nicest scenarios, and they all seem to add too much complexity. Not what we want…

What about a smart alternative here too! We could simply use ftp after all.

From the EPG (Embedded PL/SQL Gateway) functionality of the database, we can just enable ftp through the Oracle database listener. With this functionality we can access the application database on the legacy system through ftp and easily copy the content to a local directory, especially since there is just a few hundred megabytes of data.

Enable this access by:
execute dbms_xdb.setFtpPort(2100);

With a tool like Filezilla, the contents were copied to a local directory.

After the action is complete, ftp access to the database is closed again; you can never be too careful!
execute dbms_xdb.setFtpPort(0);
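
As a quick sanity check (a sketch, assuming dbms_xdb.getftpport is available in your database version), you can verify the setting and make sure the listener picks up the endpoint:

-- shows the FTP port XML DB is currently configured for (0 means FTP access is disabled)
select dbms_xdb.getftpport() from dual;
-- if the endpoint does not show up in "lsnrctl status", force dynamic listener registration
alter system register;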

Loading this content into the new location is a repetition of these actions: enable the EPG ftp port on the new database, use an ftp tool to upload the data, and don’t forget to disable the EPG ftp port afterwards.

One tricky thing is that you should mind the data ownership. This is easiest done by connecting to the ftp account as the same user that owned the content in the source database!

When you run into errors, there is probably something wrong with your XDB installation. Please look at this post for some more on that!

Okay, and now my database server crashed…

RTO/RPO, who has ever heard of that! That was Star Wars, right?
Storing data and never having to go without or losing any… Yes, that’s more like it.

Server Crashed

Okay, and these two have everything to do with each other!

Talking about these two fancy IT abbreviations, I have raised many eyebrows and helped secure businesses!

What is it:
RTO: Recovery Time Objective, or rather, how long should it take before your database is up-and-running again!
RPO: Recovery Point Objective. How much data can you stand to lose?

It is customary to put real amounts of time on both of these parameters. This is one of those true points where IT ‘meets’ business, one of those do-or-die SLA parameters.
How long before you can start working again after something has gone somewhat horribly wrong? Depending on the business (and for the sake of argument), you will get something like: “Oh well, if we are back in business in, say, an hour, I guess we’ll be fine.” Okay, so we have RTO = 1 hr.
And how much data can you afford to lose? “Losing data, what do you mean?” Well, let’s say you have been on the phone and in the field harvesting order data and putting it in the database… how much of this information can be reproduced when your environment fails? We’ll go with two scenarios. We will presume “Oh no, NOTHING!” and “Hmmm, well, 10 minutes, if needs be!”, making respectively RPO = 0 min. and RPO = 10 min.

  • RTO = 1 hour
  • RPO = 0 minutes or 10 minutes.

Let us investigate what this means, assuming we have a functional backup running every night and that our drama happens at 15:45 on a working day.

What do we have when we do nothing?
After establishing that we have a system crash on our hands, we need to start working immediately to rebuild something, but do we have something to build upon?
Do we have hardware? And does it somewhat meet specs? Can we run our OS (version) on it? Do we have OS media to install with? Do we have Oracle media to install with? Can we get network, and so on…
And if we have this do we have enough expertise to get it installed?
Well, I guess it’s clear… We need to invest big-time! A few hours getting all the facts straight and getting hardware, a few hours to install and configure the OS, a few more for Oracle, getting it to resemble the former production environment, and then restoring the backup!
RTO = starting at 8 hours.
Looking at our RPO? Well, okay, that’s easy! We back up at midnight (0:00) and we crash at 15:45, so we will have lost 15 hours and three quarters.
RPO = 15 hours and 45 minutes.
Acceptable? No, not really!

It’s clear we have to do something.
The first step is to reduce RTO, we need to be able to continue work faster.
We can do this by making sure we have a second server standing by in a different location. Have it installed, have it configured and ready to jump into action. You could call this a Standby Server.
But even now there is no guarantee we make our target since restoring a backup and getting the database up and running could still easily take over 1 hour, when dealing with red-tape and decision levels. To hit the home run we need to add one more feature, we need to have not only a Standby Server, we also need to have a Standby Database. A database that can be “opened” or “activated” in mere minutes.

  • Are you running an Enterprise Edition database? Then you can use Oracle Data Guard, which is included in your database license.
  • Are you running a Standard Edition database? Then you can get the Smart Alternative from Dbvisit.

With Standby Database in place:
RTO = 5 minutes!!

Now we need to tackle RPO!
Or… do we still?
RPO = 10 minutes is actually already tackled by the Standby Database implementation.
Because of the characteristics of a Standby Database, we not only have an RTO of mere minutes, we also have an RPO of a configurable duration.
Data is transferred to the Standby Database environment by means of archived redo log files. This mechanism can be steered by forcing log switches; if you do this at small enough intervals (less than our target of 10 minutes), you make sure the age of the data in the Standby Database meets the target “Recovery Point Objective”!
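
As a sketch of how such a switching interval can be enforced on the primary database (600 seconds is simply the value matching the 10-minute target here; an illustration, not the only way):

-- ask the database to switch and archive the online redo log at least every 10 minutes
alter system set archive_lag_target = 600;
-- or force a single log switch manually, for instance from a scheduled job
alter system switch logfile;
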
RPO = 0 minutes
Well, okay, this is something else. And if we think about this a little, it’s something completely different!
Recovery Point Objective, the amount of data we can stand to lose, is 0 (nothing!). Actually meaning we have to create a Standby database setup which is kept up to date with the primary environment. This kind of Standby Database environment allows you to switch to this second environment within seconds and continue your business operation without delay!

And, with your Active-Active Standby Database solutions in place:
RPO = 0 minutes!

So, now you know about RTO/RPO to secure your data and know this guy is something else.

r2-d2

Increasing the reach of your SE database license

Imagine the following situation…

For a few years now, your business has been investing in centralizing valuable business information. After some research in the market you found the Oracle database to be the best fit for your requirements.
Using the free Oracle Application Express (APEX) framework, which helped you rapidly develop the web applications needed to support both internal and external users, was a bonus. By basing this installation on the Oracle Standard Edition One database, you created this solution against the lowest possible investment!

As with many great projects, the use and the number of APEX applications keeps growing. With ready-to-use applications to inspire you and many cool plug-ins that keep increasing the usability and integration possibilities, you get caught up in the data-growth dogma!
With an ever increasing user population and expanding data reporting for ever faster business reporting, your initial system starts to fail, showing ever more frequent performance lags or system unavailability. These problems form a risk for your business, a risk you need to eliminate as soon as possible!
The standard advice here would be to upgrade your environment: a bigger machine and an Enterprise Edition database. This is what your investment would then be…

  • Medium Oracle Sun Server X2-4 with 4 x 10-core CPUs at € 42,500
  • (40 cores x 0.5 core factor **) 20 Oracle Database Enterprise Edition licenses at € 914,800

Without rendering your application infrastructure worthless by the required investment, a more reasonable step would be to migrate to Oracle Database Standard Edition.

  • Medium Oracle Sun Server X2-4 with 4 x 10-core CPUs at € 42,500
  • 4 Oracle Database Standard Edition licenses at € 67,400

Still requiring a total investment of more than a hundred thousand Euro and leaving you with the old server and licenses to be decommissioned.

In many implementations, not data entry but data mining or information aggregation are the costly processes, and this will probably be true in this situation too. With a little investigation it is possible to identify a number of functions that only query data and do not necessarily modify it. Especially in this situation you can also increase your application performance by moving these specific processes to a new environment.

But… how…

The information in the new environment needs to be real-time consistent with the “production” or primary environment. Here we introduce a real-time data replication solution like Dbvisit Replicate which will create just this real-time consistent query environment for you! This makes for the following investment:

  • Medium Oracle Sun Server X4-2 with 2 x 8-core CPUs at € 19,500
  • 2 Oracle Database Standard Edition One licenses at € 11,200
  • 4 Dbvisit Replicate XTD at € 16,180

With this installation you add another € 50k of licensing instead of € 100k with the Standard Edition migration. With this choice you also separate your time-critical data-entry process from the query environment, making sure a misfired query will not influence the availability of your data-entry environment, which is a cool extra advantage!

* All prices are based on list-prices, excluding VAT and including 1 year of support.
** Based on the Oracle Processor Core Factor Table.

Retail Innovation with Dbvisit Replicate

In these current times it’s a “dog-eat-dog” world like it has never been. We are fighting over every bit of margin and trying to create value without increasing cost. The following example from a retail background shows an innovative way you could accomplish this, leveraging the #1 database at the lowest thinkable investment combined with the smart alternative from the makers of Dbvisit Software.

In this example we are following a supermarket in their quest to “do business a little different” and they are thinking of combining ‘shopping audience attitude’ in an interactive way with a time specific advertisement technique.
The idea is to look at what people are checking out at the cash register and combining this, in real-time, with both amounts of items in stock and possible specific business rules applicable to any discounts to be given.

By gathering information at the counters about what kind of groceries people are actually buying at that specific moment, you gain insight into the natural fluctuation of buyer behavior during the day. With this information you can do things like figure out how much of which articles you need in stock, or direct resupplies in the store during the day, getting the maximum revenue out of the employees responsible for making sure everything is plentifully ready for the taking.
But why not take this one step further, they thought. If we can combine this cash-register information with some kind of continuously changing system of discounts, we can create an element of interactivity. By looking at which articles are sold, combining this with the remaining stock and taking into account whether there already is a regular discount on specific articles, you can create a system where, for instance every 15 minutes, another specific item is on ‘super special sale’. Delivering this “buy now” message to the actual customer can be done in several ways, either by loading this information into self-checkout bar-code scanners or, for instance, by label printers producing the specific discount label to be scanned at the checkout counter. After a ‘super special sale’ moment elapses, everything changes and a new article is the hot deal of the moment.

Where, in a normal setting, the POS entries are fed into the regular business database to be processed in a batch-like fashion, you would have no chance of recycling this information in time. This backbone infrastructure cannot be used for the very data-intensive activities we need for our initiative to take shape. Delays here would inevitably mean delays and errors at the checkout counters, with queues and unsatisfied customers as the least of your concerns.

Regular data replication solutions would render any business case useless before somebody even had the heart to dream up such an idea. Today, by leveraging Oracle Database Standard Edition or even Standard Edition One, you have an environment capable of handling such information loads. Combine this hyper cost-effective installation with a Smart Alternative like Dbvisit Replicate, replicating data away from your core POS infrastructure and delivering it to a dedicated database for this initiative. There it is combined with stock information, also delivered by Dbvisit Replicate, to create a system that is real-time, robust and does not interfere with regular business. Moreover, it supports the business case by requiring up to 80% less investment.
This example shows that many of the smart ideas created by the business have stranded on an impossible business case. Today, the Smart Alternatives of Dbvisit create the opportunity for you to rethink these ideas and really start realizing them.

Just because it’s possible now!