Category Archives: Technical

Updating SQL Developer to use newer Java version


I was being teased by SQL Developer.

Every time I started it, it came nagging that it was being forced to live in an old Java version called jdk1.7.0_45 and that it was not at all happy about it.
So, I should remedy this, I thought to myself.

First stop, inspired by some search-work on the WWW, was a file called product.conf, which offered two possibilities:


SetJavaHome to some logical location
or
SetJavaHome to nothing, and then SQL Developer would kindly ask me to point it to somewhere to live.

Well… no. My SQL Developer refused it all and just kept starting with jdk 1.7.

The same hack was done in another file in another location, a file called sqldeveloper.conf.
Same result.

Freshly downloaded SQL Developer, put in place… No help!

Erm…

Rename
drwxr-xr-x  3 root  wheel  102 Jan  6  2014 jdk1.7.0_45.jdk
in /Library/Java/JavaVirtualMachines
to
drwxr-xr-x  3 root  wheel  102 Jan  6  2014 xxx-jdk1.7.0_45.jdk
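For reference, that rename boils down to a single command in Terminal (run with sudo, since the directory is owned by root); the directory name is the only thing that changes:

sudo mv /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk /Library/Java/JavaVirtualMachines/xxx-jdk1.7.0_45.jdk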

Nope! Still the same nagging…

What now?

In the end, I wound up with one of Jeff Smith’s helpers.
This guy asked me to “start SQL Developer from the command line”. Right, but how?

So I finally found:
/Applications/SQLDeveloper.app/Contents/MacOS/sqldeveloper.sh

And that did start SQL Developer from the command-line…

But… wait… an .sh-file!! Interesting!!

And, behold… in this .sh-file lies the answer.

So the file reads:
export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
Which I hacked to:
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`

And, presto, error-message gone and SQL Developer now happily lives in Java 8.
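If you are curious which JDK that command now resolves to, you can run it yourself from Terminal; the exact path will of course depend on the JDK 8 update installed on your machine (the output below is just a hypothetical example):

/usr/libexec/java_home -v 1.8
# e.g. /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home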

Hope this helps somebody out!!


Setting up SQL Instant Client on MAC

In doing more work directly from my MacBook Air, I ran into a situation where native connectivity to an Oracle environment was needed.
From experience I have always been a big fan of the Full Oracle client, just because it comes with a lot of tools and utilities for troubleshooting, which makes the actual experience a bit more pleasant.
Looking & asking around, though, I learned fairly quickly that this client is just not available for Mac OSX… Thanks to Osama Mustafa for confirming.

So that is just a fact, even though quite a number of IT pros work on a Mac!

This leaves no other choice than to fall back on the Oracle Instant Client, which, indeed, is just an 11g Instant Client (11.2.0.4)!
It would please me if Oracle were to bring out a 12c Full Client for Mac, as well as a 12c Instant Client, should someone so desire.
To have some more tooling around the client, I downloaded all the packages, including at least SQL*Plus.

Though the install process is relatively straightforward (download the archives and unzip them in place), getting SQL*Plus to actually run is a somewhat different ballgame!
As usual, when you start the tool, you are bombarded with messages about dynamic libraries that cannot be found. This set me (very briefly) on a path of placing these files where they were expected on my Mac.

In a place like:

/ade/b/2649109290/oracle/sqlplus/lib

for instance, you would need to place a number of these libraries.
This leaves you with the option of populating your system with all these specific libraries, which is of course just fine, but not my choice (think of the mess when you ever have to clean up), and especially not when it can be avoided.

A quick search pointed me to this excellent blogpost by Casey Lucas about this exact same issue. With a tool called ‘otool’* applied as suggested, I am now able to run SQL*Plus natively on my Mac without error messages.

* otool – object file displaying tool
If it is not yet present on your machine, just call it from the command line; macOS will then offer to install it together with the other command-line developer tools.
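As a quick illustration (assuming the Instant Client is unzipped into /Applications/instantclient_11_2, as used further down), otool can show you exactly which dynamic library paths the sqlplus binary expects, which is how you spot the offending /ade/… references in the first place:

cd /Applications/instantclient_11_2
otool -L sqlplus
# lists every dynamic library the binary links against, including any hard-coded build paths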

That is nice, but it only gets you a little over halfway there.


Now I want something where I can just run:
sqlplus <username>@<database>
without intricate connect strings.

This leaves one minor “hack”, or rather “edit”, required: your .bash_profile needs a small path addition and an environment setting:
alias ll="ls -l"
export TNS_ADMIN=/Applications/instantclient_11_2
export PATH=/Applications/instantclient_11_2:$PATH

Note: the alias was already in there 😉
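A quick sanity check after reloading the profile confirms both settings are picked up:

source ~/.bash_profile
which sqlplus      # should point into /Applications/instantclient_11_2
echo $TNS_ADMIN    # should print /Applications/instantclient_11_2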

To top it off, I created a small tnsnames.ora in the directory with the Instant Client (keeping all related files neatly tucked away together):

xesource =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = 192.168.56.66)
      (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = xe)
    )
  )

And voilà, goal acquired.

sqlplus usera@xesource
Never specify the password on the command line. Not only will it be visible (in your shell history and process list), it will (most probably) also be sent unencrypted over SQL*Net.

Oracle on OpenVMS – revival

Can it be true?

Will there be Oracle on OpenVMS again? Meaning the “regular” (sorry) Oracle (12c) RDBMS on a revived VMS?

As many who have ever lived on OpenVMS have always known:

OpenVMS will never die!

OpenVMS can never die, because it is still running way too many hidden, hyper-mission-critical environments.
The fact that these environments are hidden, combined with the fact nobody ever spent any marketing budget on OpenVMS at all, created a super solution nobody knows about. And you cannot love what you don’t know.

A lot has been happening around this tormented operating system. OpenVMS is inextricably bound to Digital Equipment Corporation (DEC), which was acquired by Compaq in June of 1998 and then merged into Hewlett-Packard in May 2002.

Personally, I lost access to OpenVMS, and Oracle on OpenVMS, around 1995, when these systems were replaced by HP-UX. I never fully recovered 😉

Then, a few years ago, I was introduced to an Alpha emulator, which creates a virtual machine with (obviously) an Alpha processor, allowing you to run OpenVMS. This was one step closer (back) to Oracle on OpenVMS.

Recently (like the day before yesterday, recently) I learned a number of new things! One is that the ongoing development of OpenVMS will be taken up by VMS Software Inc. (VSI).
But, more importantly, they will be creating new versions for mainstream hardware (such as x86)!! Wilm Boerhout of VX Company wrote an announcement about this not too long ago (article in Dutch)!

And now these rumors…

A porting of Oracle on OpenVMS!

Will we once again see the day that systems just won’t go down? Oracle environments with an uptime with a dozen or two ‘nines’ behind the decimal mark? Wouldn’t that be something?
Your own VMS server running an Oracle database with Oracle Application Express (APEX)? Wouldn’t that be something else? High time to clear some of your calendar and get (re)acquainted with this super OS!

A very special “thank you” goes to my dear friend Gerrit Woertman, OpenVMS Ambassador, who never ceased to be a link to the VMS World!!

If you are from The Netherlands, please also join Interexperience, to stay close to the game.

Live free or die

Big Data: Hadoop and Oracle technologies explained

Under the title “Hadoop and Oracle technologies on BI projects”, Mark Rittman flew to The Netherlands on the 14th of July to visit the Oracle Usergroup Holland.

Although I had obviously heard a lot about Hadoop, I never really did anything further with it, other than a synaptic link to Gwen Shapira. This lack of action created a kind of threshold in my understanding of the technology. When I heard about this session, I realized this would be the moment to take a step further. It turned out to be the first real talk that puts “Big Data” in the perspective it needs to be consumable and realistic.

In these times of “The Internet of Things”, ever more social media and ever further digitization, we are heading for a Big Data Disruption. This is both a conceptual and a very real thing, if you take a moment to think about it. According to real-world experience it is also not something “which will once be”; it is actually here today!

On the technical side of things, data is captured in something that is called a “data reservoir” (or “data lake” or “data dump (yard)”). Compared with “regular” data storage, you can conclude that data governance, or data structure, in a Big Data system is applied later. We are used to applying this structure, this governance, beforehand, through data definition. Using Hadoop in combination with NoSQL gives you “schema on read” capabilities, making querying of the Hadoop data reservoir possible.

Adding this structure later is harder! This leads to the following:

  • Data is much easier to get into Hadoop than into a star schema
  • Data is much easier to get out of a star schema than out of Hadoop

This could be one of the essential things to consider when thinking about engaging in a Big Data project!

As Tanel Poder concluded: “High value, high density data will remain in the Oracle database” which I think is a very true conclusion. In the end, the high value conclusions (or the engineering of Big Data results) will also happen within the Oracle database.

On the horizon is “Oracle Big Data Discovery”, which will help with the time-consuming and tedious work of sorting and interpreting raw data in the data reservoir. ‘R’, the current data exploration tool of duty, is expected to be replaced by this discovery tooling over time…

To sum up the concept of the first half of the presentation, to my taste:

  • Hadoop changes business
  • NoSQL scales business
  • Oracle runs business

“It takes eons to list all names of the Buddha” nicely sums up the number of different applications that make up, and are needed to execute, a successful Big Data project.
Plus, “You’d better keep the 13 rules for relational databases close at hand“!


Part two of the evening was spent on mapping these concepts onto actual tools, exposing data through Hadoop to Oracle SQL and making actual use of Big Data. The exercise was completed with demos and illustrated with screenshots from the slides (link below).
A special word of warning concerns the security aspect of Big Data, which is something to really pay close attention to. Kerberos authentication and Apache Sentry are imperative to implement in your Big Data environment.

All in all, this evening turned out to be 110% more informative and necessary than I expected when I embarked on the journey to Utrecht! Thank you for sharing, Mark!

Thanks to Piet de Visser for the nice quotes! And a great “hi there” to Klaas-jan Jongsma, René Kuipers and Marti Koppelmans.

If you want to work with Big Data on your small(er) device, please download the Big Data Lite VM from OTN.

The link to the slides for anyone who wants to review the “extended remix”!

A new form of on-line data protection

In the last few years I have been active with data replication solutions in the Oracle realm, as you may know. This data replication field is one that has many angles, so there is something new to learn every day and sometimes there even are really new possibilities!

Take heed…

The first and most familiar form of data replication is ‘physical data replication’, also known as ‘Standby Database‘.
In this form of replication, source and target database are binary identical. Changes are propagated by copying the archived redo logfiles from the source database to the environment where the standby database lives. Most often this is another server, preferably in another building in another town, far enough away not to be struck by the same havoc.

There are basically three ways to accomplish this:

  1. Use Oracle Data Guard (in Enterprise Edition Oracle database)
  2. Use Dbvisit Standby (in all Oracle database Editions)
  3. Write your own scripting (not recommended in any case)
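To make option 3 concrete (and to show why it is discouraged), the hand-rolled variant boils down to something like the sketch below. This is a minimal illustration only, assuming passwordless ssh/scp to the standby host, an already mounted standby database and an archive destination of /u01/arch/PROD (all hypothetical); a real solution also needs gap detection, error handling, cleanup and monitoring, which is exactly what the tools above give you:

# ship the archived redo logs to the standby host (paths are hypothetical)
scp /u01/arch/PROD/*.arc standby-host:/u01/arch/PROD/
# apply whatever has arrived; recovery stops (and prompts) once it runs out of logs
ssh standby-host 'echo "recover automatic standby database;" | sqlplus -s "/ as sysdba"'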

The second and more emerging form of data replication is ‘Logical Data Replication’.
In this form of replication, there is no real relationship between the source and the target database, other than that the target database houses data coming from the source database. They can live on different systems, be of different database versions, run on different operating systems or even come from different vendors.
Data is harvested from the source database, converted and copied over to the target database / system. On the target system this data is applied, in the native dialect of the target database.

There are a few ways to accomplish this, but basically every vendor uses the same technique. It is more a matter of pricing, really.

  1. Oracle Golden Gate (expensive, complex)
  2. Dell Shareplex (somewhat expensive)
  3. IBM InfoSphere (complex, expensive)
  4. Dbvisit Replicate (easy, affordable)

So, having discussed this, as this is not new, why this blogpost?

Well…

A Standby database is more or less closed. You can open it occasionally to query some data, but that interrupts the apply process.
On-line data replication does what it says: you have an active target database, to which data is continuously added. This way you can, for example, query the same data on two sources to spread load.

The case I mean to discuss is the following:

“I have 10 source databases and I want one target database (ah, presto, on-line data replication) and I want to back up 5 tables from each source to the target database (again, on-line data replication, but wait, backup?) so I can easily copy back specific data to the source (eeeuhm, yes…) whenever a user messes up the source tables (aï…) and I want the target to be updated each day at 23:00 (so… okay!)”

This smells like something of a hybrid approach!

We cannot do regular on-line data replication, for that is aimed at being real-time.
And we cannot leverage a Standby database, since the data needs to be centralized in one database and not ten. Next to that, it would take some administration to open up the standby database in read-only mode, take the copy, and close the database again.

Working with Dbvisit, we came up with “Pause Apply” and “Resume Apply”, which we combine to form “Delayed Apply“.
This delayed apply would neatly answer the question posed.

  • By “delaying” the application of changes to the data, we could make sure the requested tables are only updated from 23:00 on;
  • We can combine the 50 tables (10 databases x 5 tables) in one single target database, since it is a logical approach to the matter;
  • We can easily restore or copy back corrupted data, since both the source and the target database remain continuously open.
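Operationally, this “delayed apply” boils down to a simple schedule on the target side. The sketch below is purely illustrative: the two script names are hypothetical placeholders for whatever pause/resume mechanism your replication tool offers, and the 01:00 pause assumes the backlog is applied well within two hours:

# crontab on the target server (illustrative only)
0 23 * * * /opt/scripts/resume_apply.sh   # 23:00 -- resume APPLY, roll the target forward
0 1  * * * /opt/scripts/pause_apply.sh    # 01:00 -- pause APPLY again until the next evening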

Using Dbvisit Replicate, having this kind of protection for your “logical test cases” (which is what this company needed the solution for) is really affordable.
It can help in dynamically and quickly resetting specific data sets or test cases, while remaining much more flexible than creating scripts to reset a specific data set or test case! And, of course, there are many more ways to use this neat feature…

Printing directly with APEX

When looking for a print solution with APEX you will find .PDF

You will find a lot of .PDF

And .PDF is good. There is nothing wrong with .PDF. In fact, .PDF looks cool and you can do a lot of neat stuff with it. With toolkits like pl/pdf you can create .PDF’s directly from PL/SQL.

But sometimes there is the need to be able to print directly.
For instance with batch processing or with nightly print runs or whatever. And this is where you would find yourself locked out with .PDF and, glancing at Google, you would guess you were out of luck!
Since we had:

  • created a web-based solution
  • needed to print directly
  • needed to print in nightly runs

plus we had:

  • about 400 reports (.rdf files) which we need to reuse (without having the opportunity to rebuild them in something like pl/pdf)
  • combine different output / distribution mechanisms

we needed to tackle this challenge!

So we did!!

It was fixed by using some old and new technology mixed together:

Oracle reports builder
and
Oracle Fusion Middleware, more specifically, Oracle Reports Server, aka WLS_Reports

By using this combination of products, you can create a printing solution which is capable of printing directly to your network printer, as well as creating HTML or PDF reports.
Schedule them, e-mail them, and all this under URL control!

http://<your-reports-server-node>:8888/reports/rwservlet?command=argument&command=argument&and-so-on

Use the following (much-used, but far from complete) list of control commands:

  • report=<name of your .rdf>
  • userid=<userid/password@database>
  • desformat=HTML/PDF
  • destype=type of output of the report
  • desname=name of your output (device, file, whatever)
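Put together, a call could look something like the two examples below. These are hypothetical: server name, port, report name, credentials and printer share are placeholders, and in practice you would post the userid rather than put it in a visible URL (see the notes below):

http://myrepsrv:8888/reports/rwservlet?report=invoices.rdf&userid=scott/tiger@orcl&destype=printer&desname=\\printsrv\hp-floor2
http://myrepsrv:8888/reports/rwservlet?report=invoices.rdf&userid=scott/tiger@orcl&destype=cache&desformat=pdf

The first sends the report straight to a (Windows) network printer; the second renders it as PDF and returns it to the caller.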

More commands can be found via the link to the documentation at the bottom of this post!!

Notes:

  • You can post these parameters to the Reports Server without calling them in the original URL!
  • You can set a “local” (default database) on your Reports Server, so you can omit <@database> in ‘userid’ for your default database
  • Actually you can set all environment variables, like TNS_ADMIN, NLS_LANG, REPORTS_PATH, etc.

What we found is that we needed to run Oracle Reports Server on Windows, just to take advantage of the Windows printing system, which is quite stable and easy to configure. (So, yes, okay, there you have it, a good thing about Windoze!)

Basically you can create a simple solution, but you can easily expand it quite a bit, making a printing and reporting solution worthy of an enterprise environment, distributing reports via e-mail, creating reports on file systems, embedding reports in websites, and basically anything you want or would need.

And, you get a nice Management Console for free with this installation!

Oracle Enterprise Manager Console

From this management console you can administer your print jobs and set all kinds of parameters, which is quite neat!!

But, wait… the catch… It’s gonna cost you!

Or, can you keep it under control?

But of course!

Printing is mostly a half-on-line thing, and for a lot of stuff, it’s not extremely performance / time critical… So what can we do?

Oracle Reports Server is licensed as “Oracle Forms & Reports Server” and it will set you back € 370 per Named User or € 18.200 per CPU (being Oracle CPU’s according to the Core Factor Table!)
It’s still a whole lot of money, but would you really need more than 2 cores? If you give the machine enough memory and fast disks? Probably not.

Is it worth considering adding another node to your environment? Perhaps. This print solution could be a viable reason to do so. It brings you quite a bit of functionality straight out of the box. But, as always, do your math and make educated choices.

The documentation link promised:
https://docs.oracle.com/cd/E16764_01/bi.1111/b32121/toc.htm

If you would like more info, please just drop me a line!

Upgrading 11.1 to 11.2 and the time it takes

As many know, Oracle 12 has not reached each and every corner of many production sites. Okay, running Oracle 7 or 8 is becoming tricky…
Moving from anywhere to Oracle 11.2.0.4.0 is still a valid action, as it is the final, rock-solid stable 11g release out there.

I wanted to share what I ran into upgrading 11.1.0.7.0 to 11.2.0.4.0 with in-place upgrades (catupgrd.sql). As 11.1 to 11.2 is a relatively small step, in quite a few instances we have chosen an in-place upgrade over ye olde fashioned export/import, the main reasons being saving time and reducing the chance of errors, as you effectively stay in the same database.

On the topic of “saving time”… this is what I found…

At some point during the upgrade, tailing the log, I noticed that an unreasonable and unexplainable amount of time (50% of the entire upgrade duration in my example) was spent on just this one statement:

-- revoke grant with grant option privs

Note: Contact me for the actual statement…

This got me puzzled to such an extent that, since the Internet didn’t hold any of the answers, I decided to turn to MOS and raise an SR.

And after just a few messages to and fro, the issue was found!

The time this bit of the program spends is spent on ORDIM and SDO, better known as Oracle interMedia and Oracle Spatial. That brought me to the task (to which, as a matter of fact, I already knew the answer) of finding out whether either of these technologies was used, and that was easy enough:

connect / as sysdba
-- ORDIM
select owner, table_name, column_name
from dba_tab_cols
where data_type in ('ORDAUDIO','ORDDOC','ORDIMAGE','ORDSOURCE','ORDVIDEO')
order by 1,2,3;
-- SDO
select owner, index_name from dba_indexes
where ityp_name = 'SPATIAL_INDEX';
select owner, table_name, column_name
from dba_tab_columns
where data_type = 'SDO_GEOMETRY'
and owner != 'MDSYS'
order by 1,2,3;

Which in my case returned “no rows selected” as expected.

With this knowledge “in pocket”, it was a matter of removing the unwanted components:

  1. Deinstall Oracle Spatial (SDO) following the steps listed in Note.179472.1.
  2. Deinstall ORDIM per $ORACLE_HOME/ord/im/admin/README.txt, see Note.337415.1.

And do the upgrade in half the time.

Well, hope this helps to save you some time.

Oracle Open World 2014

In flight to San Francisco on the 27th of September 2014. Heading out to Oracle Open World for the second time.
Much has changed since my previous visit.

The previous time I came to this biggest of IT events in the world, I came as a spectator, representing an IT company, where my mission was to soak up as much knowledge as I possibly could, submerging myself in the flow of the event.
This time ‘round, I come as a participant, representing another IT company that wants to add to the scene and deliver a smart alternative.
And personally, too, there is a huge difference! Previously I went alone and was thrilled to find Frits Hoogland at the gate, who was already a familiar face to me back then! Now I am travelling to meet up with many more friends… listening to Metallica on the flight already reminds me that I will meet Gurcan Orhan over there! And in the previous weeks many promises were made for quick meet-ups and catch-ups on the grounds of what we call “Oracle Open World”!

Clock set to Pacific Summertime, good morning world!!
Technology has come a long way since my previous trip! Where I was bound to the onboard entertainment system a few years back, I can now work, prepare and write this text in flight. Hoping to meet all of you guys out here.
And today, Oracle Open World came to a real kick-off when we went to the Golden Gate Bridge Run, organized by @thatjeffsmith, where we ran or walked with a great number of Oracle celebrities, ranging from @oraclebase through @helifromfinland and Frits Hoogland to @dbvisit!
After this @ilmarkerm, myself and two lovely ladies from Finland shared a cab to Moscone where we met up with the RACAttack Ninja’s at the OTN Lounge…


It is turning out to be a good day, with the building of the Dbvisit stand, sneaking into the sessions of the Dbvisit speakers and meeting many, many friends!

#RepAttack, it’s all about learning

Everything we do in our daily life is about learning. Especially in IT we are used to continuously learning. Digging through documentation, figuring out how this or that piece of software should work. Downloading, installing, configuring, trying, tweaking, tuning…
Dbvisit Standby

For Dbvisit, it all started with Dbvisit Standby. Physical data replication, but physical data replication is not so hard in the end. To get it running stable, to make it do exactly what you want it to do, is a manageable task. With its wizard-driven installer, the clear task of keeping two exactly identical databases and a little bit of time, you’ll have this process of shipping archived log files nailed. Getting it stable and reliable is built in, so not much worry there.

Logical data replication on the other hand, is a whole different ballgame!
For a long time, logical data replication was just for bigger companies with intricate information needs. And it is a little more challenging than physical data replication. There are database, schema or table considerations, what and what not to replicate to where, and making sure you get it stable and reliable in your environment. Checking and following up on changes and doing all kinds of work to make sure you get the best out of your setup.

Nevertheless, Logical data replication will help you in doing:

  • “Zero downtime database migrations”
  • “Report offloading”
  • “Schema consolidation”
  • “Real-time business intelligence” operations

And because these things are about you…

You deserve a “flying headstart”

with Dbvisit Replicate!

To be able to bring you this, we looked at the heroes from the Oracle Technology Network for inspiration. This special group of gurus, called the RACAttack Ninjas, has been involved in educating and supporting any and all with a setup of Oracle’s Real Application Clusters technology on your laptop.

Inspired by this example, Dbvisit created #RepAttack! A techno-opportunity that will be travelling the world, with its inaugural session at no lesser venue than Oracle Open World 2014.

#RepAttack is a great opportunity to network with your peers who are just as curious as you are, and to access a fantastic team of warriors who will work with you one-on-one to ensure you are up and running quickly and leaping over any hurdles effortlessly. The session will include a deep dive into core concepts to make sure you return to your organization with an in-depth understanding of how both replication and virtualization really work. Take the time to attend and be that “go-to” person when questions around these concepts come up at work.

Keep an eye out as new details will emerge over the coming days and weeks!
Make sure you check out the Twitter hashtag #RepAttack or just submit your e-mail address below!

#RepAttack sessions by its warriors have been confirmed to be at:
Oracle Open World 2014 in San Francisco, USA
Deutsche Oracle-Anwendergruppe (DOAG) Jahreskonferenz 2014 in Nürnberg, Germany

And remember!
#RepAttack is about YOU!

Watch the following video of one of my personal heroes, Ronald Rood, playing with logical data replication in Dbvisit Replicate:

TCL, Total Cost of Loss, a new business perspective

‘Total Cost of Loss’ (TCL) was launched at the World Premiere of the Standard Edition Round Table during the OUGF Harmony 2014 annual user conference.

Doing nothing does not mean it costs nothing

Joel J. Goodman, Finland 2014

“TCL.” Abbreviations.com. STANDS4 LLC, 2014. Web. 15 Jun 2014. <http://www.abbreviations.com/term/1519392>.

Total Cost of Loss is the representation of the cost for an organization when data is lost. Experience teaches that this is the hardest exercise in business continuity to figure out and the most neglected threat to an organization.

Next to the two best-known terms, RTO & RPO, and the less well-known term RTDA (‘Recover Time to Data Availability’), TCL is aimed at providing the business with an extra metric for conducting BCP (business continuity planning).

To correctly evaluate the investments that have to be made to create a sufficient RTO time frame or RPO granularity, there has to be an understanding of the magnitude of the (financial) importance of the underlying (data) system. TCL is aimed at calculating this figure, where the figure is valid per specific data system.

The following components have currently been identified as being part of TCL:

  1. Collection price per granule of data*
  2. Present value per granule of data
  3. Business value per granule of data
  4. Added value in a dataset combination

* a granule of data is the smallest possible set of variables comprising a usable piece of information.

1. Collection price per granule of data:
The amount of effort (time, computing power, etc.) which is required to assemble and record the granule of data in the data-structure.

For example: 1) the time it takes to pick up an item, scan its bar-code with a bar-code scanner and put the item back, or 2) the time it takes to enter somebody’s name and address at admission, inclusive of possible preparation and filing.

2. Present value per granule of data:
The current amount of effort (if possible at all) which is required to reassemble and record the granule of data in the data structure. This component takes into account that historical data could have been easy to collect at the historic point in time (#1) but would take a disproportionate effort to collect at present.

For example: 1) establishing whether the item was in stock at the given moment, what its bar-code would have read at that time and possibly who scanned it at what location, or 2) finding out which person was admitted on that specific date and retracing what data would have been entered at that specific moment and possibly by whom.

3. Business value per granule of data:
The value of the single entity of data for the operational business after the moment of measurement. During its lifetime, the value of a specific granule of data can change. Most often it will become less valuable, making it possible to archive or even cumulate** the data in multi-tier storage solutions, but, when called upon, this specific granule of data could be of vital importance!

For example: 1) knowing how many of a specific item are in stock, or 2) having identified a specific person within the client group.

4. Added value in a dataset combination:
It can very well be, and most probably is, the case that a granule of data is of key importance to a dataset combination, where several bits of data from different datasets or data systems combine to create information which is vital to a specific action within an organization.

For example: 1) knowing how many of a specific item are in stock to support a JIT-delivery system that keeps a production line going uninterrupted, or 2) delivering the right treatment to a specific person and being able to bill them accordingly.

** Cumulation of data can destroy a recovery path for retrieving any specific granule of data.

Creating a formula to calculate any TCL will be relatively easy.
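As a first, very rough sketch (my own illustration, not part of the original definition), such a formula could simply sum the first three components over all granules of a data system and add the combination value on top:

TCL(system) = Σ over all granules g [ Collection(g) + PresentValue(g) + BusinessValue(g) ] + AddedValue(dataset combinations)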

Creating a model to extract or calculate or even guesstimate the values for the different variables of the formula will be the challenge.
A challenge that needs to be met because of the ever increasing volume of data and the ever increasing importance of certain realms, like healthcare, public services, transportation, etc., within this data mass.

Please step on board and help define TCL as it could prove to be a critical factor when push comes to shove!