PostgresBuild 2021 Session Highlights: Dr. Michael Stonebraker on Maximizing Database Application Performance

Among everything we do and talk about in our space, one thing is crystal clear: all of the technology and methodologies that exist serve one purpose, namely to support actual businesses. Industries include e-commerce, banking, insurance, manufacturing, telecommunications, and everything in between. This brings me to the main focus points of this article.

Building for business application success

One of the pinnacles of the IT industry is to create applications—applications for business workers, end users, and other applications. Apart from the obvious requirement that the application has to function correctly, two major challenges reign in the space:

  1. The speed required to add new features
  2. The speed at which applications work

These are age-old application development challenges that Dr. Michael Stonebraker addressed, much to my enthusiasm, in his closing keynote at PostgresBuild 2021. Let us explore this in a little more detail.

Dr. Stonebraker remarked that “[a] cursor interface [to your database] is insanely expensive.” Using a database like Postgres as a storage mechanism to simply store and retrieve data makes no sense. You will get no benefit from all the intelligence that has been programmed in Postgres to help optimize application performance and you will add “insanely expensive” overhead to your application.
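To make that overhead tangible, here is a minimal sketch, assuming a Python client with psycopg2 and a hypothetical orders table; the named (server-side) cursor fetches row by row, while the second statement lets Postgres do the aggregation in a single round trip.

```python
# Minimal sketch: row-at-a-time cursor access vs. one set-based statement.
# Assumes psycopg2 and a hypothetical "orders" table.
import psycopg2

conn = psycopg2.connect("dbname=shop")

# "Insanely expensive": a server-side cursor fetching one row per round trip,
# with the summing done in the application.
ssc = conn.cursor(name="row_by_row")  # named = server-side cursor
ssc.itersize = 1                      # force a network round trip per row
ssc.execute("SELECT amount FROM orders WHERE status = 'open'")
total = sum(amount for (amount,) in ssc)
ssc.close()

# The set-based alternative: one message, and the optimizer, the indexes, and
# the aggregation machinery inside Postgres do the actual work.
cur = conn.cursor()
cur.execute("SELECT sum(amount) FROM orders WHERE status = 'open'")
total = cur.fetchone()[0]

conn.close()
```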

There have been multiple attempts to find solutions for this so-called “impedance mismatch,” but none have taken flight thus far. My colleague Hettie Dombrovskaya has written some interesting papers on this. Let’s dive into that a bit here, too.

Applications

For many years, applications have been separated from the databases they use to store, retrieve, and manipulate data. We have grown accustomed to having applications written in specialized platforms and languages to facilitate the need for speed (as mentioned above) and maximize user experience.

Databases

Databases are a logical and practically unstoppable part of applications. Relational databases will fulfill a continuously increasing role in data management. A relational database engine like Postgres is an extremely powerful and versatile platform that gives virtually unlimited possibilities to not only store and retrieve data but also manipulate data—very much so.

Business logic

With the split between where applications live and where data is stored and processed, a development started that focused on further decoupling these two parts of an application. Going into all of the individual elements is beyond the scope of this article.

A 3-tier model—frontend, middle tier, backend—was developed that would lay the foundation for future models of application development. We will also disregard the front end here, as this is the realm of browsers and high-end user experiences. While that is certainly important, it is less relevant or interesting for database folks. Well, up to a point, but that will have to wait for another article.

The way applications are built focuses on user functionality, which typically gets organized into objects or classes.
The way databases operate focuses on data transformation, which typically is organized in rows and columns.

The problem, addressed by both Stonebraker and Dombrovskaya, is that the transformation from rows to objects takes place where the application lives. This has three important implications that cause sorrow and much toil.

  • An incredible amount of traffic between the two most costly elements in the equation, the database server and the application server: for every data element (row) a message is sent (see the sketch after this list)
  • Processing of data takes place at a point in the chain where information handling is the objective, creating a responsibility mashup. The application layer is responsible for interaction with information rather than transforming data into information.
  • By disregarding database functionality for integrity and transformation, you deny yourself the security of consistent, reliable, and idempotent data processing.
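To illustrate the first bullet, here is a hedged sketch of the per-row message pattern, assuming hypothetical customers and orders tables: the object-per-row style issues one query per customer (the classic N+1 pattern), while a single JOIN does the same work in one round trip.

```python
# Hypothetical sketch of the N+1 pattern vs. a single set-based query.
import psycopg2

conn = psycopg2.connect("dbname=shop")
cur = conn.cursor()

# N+1: one query for the customers, then one extra message per customer.
cur.execute("SELECT id, name FROM customers")
for cust_id, name in cur.fetchall():
    inner = conn.cursor()
    inner.execute("SELECT count(*) FROM orders WHERE customer_id = %s",
                  (cust_id,))
    print(name, inner.fetchone()[0])
    inner.close()

# Set-based: one message, one plan, and the database does the transformation.
cur.execute("""
    SELECT c.name, count(o.id)
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
""")
for name, order_count in cur.fetchall():
    print(name, order_count)
conn.close()
```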

Making it usable

So where does this leave us?

A lot has been written and researched about this topic. Some of that research was referenced earlier in this article, and many other approaches have been considered as well. One example is the “SmartDB” research by Oracle’s Toon Koppelaars, who empirically showed that doing the actual data transformation in the middle layer of a modern application sets you up for imminent scalability issues.

Business logic in the database

I believe the reason for this is the rock-solid conviction that “business logic should never be in the database.” This has been taught since the mid-nineties of the previous millennium, and I think the time has come to re-evaluate it.

In the 25-odd years that have passed, much has changed. One of the reasons for the original stance was the concept of “database agnostic applications,” which we all know today is impractical and relatively senseless: senseless because data management systems such as Postgres have developed to deliver unparalleled speed and functionality.

Logical split

Additionally, the understanding has emerged that the concept of business logic is two-fold:

  1. There is application business logic, which handles how information inside an application is managed and which business requirements the application needs to fulfill.
  2. The remaining part of business logic is the data business logic, which deals with how data is transformed into information and defines which actions are required to maintain consistency inside the database.

Again, there is a lot more to discuss around the split in logic than we have room for here.

Making this split is nothing more than a simple architectural decision when designing any application, and it opens up a wealth of opportunity.
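To make “data business logic” concrete, here is a minimal sketch, with an invented accounts table and transfer_funds() function: the consistency rule lives next to the data, and the application merely states its intent with one atomic call.

```python
# Sketch: a consistency rule implemented as data business logic in Postgres.
# The table, function, and rule are invented for illustration.
import psycopg2

conn = psycopg2.connect("dbname=bank")
cur = conn.cursor()

cur.execute("""
CREATE OR REPLACE FUNCTION transfer_funds(src int, dst int, amount numeric)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    UPDATE accounts SET balance = balance - amount WHERE id = src;
    UPDATE accounts SET balance = balance + amount WHERE id = dst;
    -- the data business logic: no account may go negative
    IF (SELECT balance FROM accounts WHERE id = src) < 0 THEN
        RAISE EXCEPTION 'insufficient funds on account %', src;
    END IF;
END;
$$;
""")

# Application business logic: a single, atomic statement of intent.
cur.execute("SELECT transfer_funds(%s, %s, %s)", (1, 2, 100.00))
conn.commit()
conn.close()
```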

Reflecting on the three challenges we identified earlier, all of which this revised business logic stance addresses:

  1. Excessive traffic
  2. Mashup of responsibilities
  3. Data security risks

Sparking further discussion

I am fully aware that a topic as extensive as this, one that is potentially controversial and challenging, needs more discussion and thought than a blog post can cover. There have been countless studies and debates, yet as an industry, we still trip over this.

My goal is to fan the flames of this discussion in 2022, a discussion sparked by Dr. Stonebraker in his keynote at PostgresBuild.

The key to applications is performance; working with a technology like the Postgres database gives you many of the tools to achieve this. Use the force wisely.

A new North Star has risen

While this post is not going to be about pulsars, black holes, or any other astronomical phenomena… it is still going to cover one of the most exciting and fundamental shifts in the IT industry today.

A tale of two convergences

This is a tale of convergence.
To understand the significance of any convergence, it is necessary to understand the lines that are coming together. Please indulge me while I review them.
The challenging bit is that inside both of these lines there are only the very early glimpses of this new era; both are still very much busy with the day-to-day operations of open source or with conquering more of their base realm.

Why Postgres is the answer

I migrated myself from Oracle to Postgres! Moving from a steady path as an Oracle ACE to this—at least for me at that time—brand new world of open source data management. If you want to know how and what, I wrote a trilogy about that here, here and here.
After working in the Oracle realm for almost a quarter of a century, moving from Oracle to Postgres truly felt like following a new North Star. It has proven to provide good guidance.
With the phrase still in my head—”horses for courses”—the PostgreSQL Global Development Group does focus exclusively on Postgres. Coming from Oracle, though, it made me wonder who focuses on all the other aspects that vendors create to make their systems work. This “emergence from the red bubble” made me realize that there are much broader challenges, which leads to the “second thread” for convergence, in this story.

I like to think this puts me in a position that allows me to oversee this part of the spectrum (databases and data management) to a certain degree.

Note: once you realize where the data management industry is moving, Postgres is the answer.
Hold that thought!

Brain-breaking challenges

Over the years, much focus has been on infrastructure, simply because it is expensive, tedious, and error-prone, and has lots and lots of room for improvement.

  • Infrastructure as code – your server is not your pet, you do not tend to it, you replace it.
  • Cloud infrastructure – your server?? It is not your server… you just use it, you are server-less.
  • Many more of these developments have shaken the world since I started in IT.

There are a number of drivers that we can distinguish and that play a part here:

  • Cost needs to go down!! The ever-present CIO challenge is: do more with less. And we are succeeding in that, year after year…
  • Speed needs to go up!! We need more features for our applications, we need them sooner, we need them working more flawlessly.

Having worked in IT operations at various points in time, I can say these elements have always given me the hardest brain-breaking challenges.

The winner is…Postgres

It really is so, and undeniably so (or at least it is in my book): Postgres has won, and where it is still competing, it will win. Full stop.

Why?

There are several reasons why I think—why I know—this is going to happen.

  1. Publicly governed open source, community-driven open source, give it a name. It is not a for-profit entity that is behind the technology… Hence, there are no barriers or boundaries to opportunities and direction.
  2. Relational will never die. Dr. Michael Stonebraker said it himself: there is going to be no post-relational era, and the more data management and processing methodologies we get, the more we will need SQL to make sense of things.
  3. Metcalfe’s and Reed’s laws. More ideas, more contributors, more firepower for Postgres. Postgres’ strong and unwavering foundation grows and evolves. The foundations laid by the PostgreSQL core team will continue to feed the fire of Postgres for the foreseeable future.
  4. Data warehouse, data lakehouse, graph, NoSQL, NewSQL, distributed, and whatever other mumbo-magic words you can put together. We’ve seen it in the past and we will see it again… it will all converge back to Postgres. It’s inevitable. The latest evidence: Apache AGE (see the sketch after this list). I rest my case.
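As a small taste of that convergence, here is a hedged sketch using Apache AGE’s documented cypher() function; the graph name and labels are invented for illustration.

```python
# Sketch: openCypher graph queries running inside plain Postgres via Apache AGE.
import psycopg2

conn = psycopg2.connect("dbname=graphdemo")
cur = conn.cursor()

cur.execute("LOAD 'age';")
cur.execute("SET search_path = ag_catalog, '$user', public;")
cur.execute("SELECT create_graph('demo');")

# The same engine that speaks SQL to the rest of your data now speaks Cypher.
cur.execute("""
    SELECT * FROM cypher('demo', $$
        CREATE (a:Person {name: 'Ada'})-[:KNOWS]->(b:Person {name: 'Grace'})
    $$) AS (result agtype);
""")
conn.commit()
conn.close()
```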

Stop for a minute… just a brief pause to think about some of the implications of this…Put prejudice aside, and just consider this: the Power of Postgres, the most transformative tech since Linux.

Kubernetes (K8s)

Enter Kubernetes! If you have not yet done so, I strongly recommend watching the two-part documentary on the origins of K8s here and here.

Once upon a time, at a conference somewhere in Eastern Europe, a keynote speaker was talking about IT operations. At some point in the story, he told us he had been on the phone with his CEO after a failed deployment. He said: “Look, sir, the application worked on my computer!” and his CEO replied: “Well, that’s all good, except that I am not paying you to make it work on your computer, I pay you to make it work on my computer.”
In my opinion, this is one of the key problems that container infrastructure solves: immutable infrastructure, as part of running Postgres the Kubernetes way. A critical new way of looking at integrating Postgres as a core element for data processing in a truly cloud native manner.

Apart from doing infrastructure very well, K8s turns the world of application development on its head. Monolithic applications become microservices-based solutions, the paradigm that allows for things such as continuous integration / continuous delivery and many more of the super-cool things that DORA describes.

Oh, and did I mention that Kubernetes is also a publicly governed, community-driven open source project?! Check out what the CNCF is all about! I won’t go on about Seven of Nine this time, I promise.

Data on Kubernetes

So basically, there you have it!

We have seen AMAZING things from Postgres!
But they never really focused on deployment and the like; “horses for courses”.

We have seen AMAZING things from Kubernetes!
But they never really focused on data and the like, because developers need to build features!

Under the awe-inspiring guidance of the CNCF, we have an actual Data on Kubernetes Community!
A first, profound, and fundamental step on the path of convergence, where Postgres meets Kubernetes and we start enabling a new era, one that might bring answers to some age-old challenges (well, as old as the invention of computers, really) and some newer ones.

From here it is basically: “Hi-ho, Silver! Away!”
There is no stopping this, or as a colleague once paraphrased Babylon 5: “The avalanche has already started. It is too late for the pebbles to vote.”

This is the new North Star.

In the end, we’re just getting started

  1. Oracle to Postgres—well, that’s done. What has not yet been migrated will probably die out at some point.
  2. Postgres is established, no debate there.
  3. Kubernetes is so strong, so appealing, it answers so many questions that it will be with us for quite some time.

There is, though, this one fundamental gap. However you twist or turn it, the user of your app needs data; otherwise, what’s the use of all of your mega-cool features?

Postgres and Kubernetes, the two most powerful technologies of today, answer that question.

2021 F1 World Championship thoughts

It was Marco (https://twitter.com/Irosmarco) who prompted me on Twitter.

Question to you Jan
Formula 1 fan to another Formula 1 fan.
Not Hamilton or Verstappen. But as a Formula 1 fan.

Do you think last race was a fair race?
Do you think Verstappen deserved to win the last race?
Did Masi make a correct decision?

Don’t be biased but be fair.

(https://twitter.com/Irosmarco/status/1475565657441906690)

And that question intrigued me to such an extent that I thought: “This warrants a real answer, not just something limited by Twitter’s character count.” So, Marco, here goes nothing…

There are so many angles to this question that I will limit myself to a couple of them.

Lose / lose

Ultimately, for Masi, it was a lose / lose decision.
What I mean by that is that he has now lost everyone who roots for Hamilton, obviously, or I would not even be writing this.
If he had done nothing, and the race had finished behind the safety car, he would also have lost. There would not have been a living Max Verstappen fan who would not have hated his guts. Christian Horner cried out: “We only need one lap to race.” Everybody would have said that the race was “given” to Lewis, because in that situation Max would have had no chance to overtake him.

That leaves the fact that the cars behind Max were not allowed to overtake the two of them. Well, basically, that would effectively also have led to a race ending behind the safety car, as there was only one good lap left in the race.

Circumstances

Rubber

If we step back just a little, there is the issue of strategies. Suppose both teams had somehow wound up with a strategy that put both Lewis and Max on fresh red rubber… would Max even have had the opportunity to overtake? Seeing how the rest of the race went, probably not; but in the end, they were not both on fresh red rubber. Is this something that should have influenced the decision, applying this rubber fact either way? It should not have, and I am no mind reader.

The crash

In that same light, if Latifi had not crashed, none of this would have happened. No safety car, no issues with cars being left in the queue, and so on.

Everything together

If you look at this race, if you look at the outcome…
If you put this in perspective over the racing season…
If you look at “the purpose of Formula One racing”…

Completely unbiased, looking at Formula One, the best possible decisions were made.
Why?
Because they have brought about emotion and passion in so many people. For years and years, F1 had been dominated by either a single driver or a single team, up to the point where folks just gave up, as you could predict the world champion after the first lap of the first race.
Additionally, car racing is not something you do with a knife and fork. Dangerous? Hell, yes, it is dangerous. And here you could go off on a complete rant of its own around halos and what have you.

If I look at the whole picture, Michael Masi could not have done a better job. Even more people than ever before will be excited to see the 2022 F1 World Championship battle take place, and that, as far as I am concerned (which essentially counts for about zero), is what this is all about.

Marco, I hope this answers your call. I do not really expect you to agree with me, but that is okay. As Morgan Freeman said: “We might not agree, but that doesn’t mean we cannot be friends.” Well, okay, he did not say that, but it is my free interpretation of it…

Why document databases are old news…

We’re going to store data the way it’s stored naturally in the brain.

This is a phrase heard more often today. This blog post was inspired by a short rant Babak Tourani (@2ndhalf_oracle) and I had on Twitter today.

How cool is that!!

This phrase is used by companies like MongoDB or graph database vendors to explain why they choose to store information / data in an unstructured format. It is new, it is cool, hip and happening. All the new compute power and storage techniques make it possible.
How cool is that!!
Well, it is… for the specific use cases that can benefit from such techniques. Think of analytical challenges, where individual bits of information basically have no meaning. If you are analyzing a big bunch of captured data coming from a single source, like a machine, a click-stream, or social media, one single record basically has no meaning. If that is the case, and it is really not very interesting whether you have and retain every individual bit of information, because you are interested in “the bigger picture”, these solutions can really help you!

How cool is it, actually?

When it comes to the other situations, where you want to store and process information and where you do care about the individual records (I mean, who wants to repopulate their shopping cart in a web shop three times before all the items stick in the cart), there is some history you should be aware of.
Back in the day when computers were invented, all information on computers was stored “the way it’s stored naturally in the brain”.
Back in the day when computers were invented, all we had were documents to store information.
This new, cool, hip-and-happening tech is, if anything, not new at all…
Sure, things have changed over the last 30 years, and with all the new compute power and storage techniques, the frayed ends of data processing have significantly improved. This makes executing the kind of data analysis described above so much better!! Really, we can do things to data, using these cool new tools, that we never dreamt possible 30 years ago.
But these things remain the “frayed ends of data processing”.
If you have requirements like filling your shopping cart once and having it work all the way through check-out…
If you have requirements where some kind of “transaction” is required (like buying something, like your bank account, like two actions that are dependent on each other)…
You need transactions…
I know, “transaction” sounds boring, old-fashioned, and seemingly surpassed…
But, I promise you, you will want those things if you actually have to process something in your application in a way that makes real-world sense.
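Here is a minimal sketch of that promise, with invented stock and orders tables: either the whole checkout happens, or none of it does.

```python
# Sketch: an all-or-nothing checkout using a Postgres transaction.
# Table and column names are invented for illustration.
import psycopg2

conn = psycopg2.connect("dbname=shop")
try:
    with conn:  # psycopg2: commit on success, roll back on any exception
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE stock SET quantity = quantity - 1 "
                "WHERE item_id = %s AND quantity > 0", (42,))
            if cur.rowcount == 0:
                raise RuntimeError("out of stock")
            cur.execute(
                "INSERT INTO orders (item_id, status) VALUES (%s, 'paid')",
                (42,))
except RuntimeError:
    pass  # nothing half-done: the stock update was rolled back with the order
finally:
    conn.close()
```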

This was solved ages ago

For that, decades ago (such a long time that most of the cool young dudes and dudettes developing applications today were not even born), relational database theory was invented, precisely to solve the inherent issues that document-based databases bring once you want to introduce transactions to your application.
Document databases brought these issues back in the day… and they bring these issues today!!!
Please believe me, they bring these issues today! This is the reason that, contrary to the messages from non-relational database vendors, application developers find they need to add actual transactional capabilities to their applications, either to work in real life or to bring any kind of scalability to them.
Imagine building an application and actually being successful with it! Isn’t that the dream of every application project? How boring is it, then, to find that you are unable to meet demand? Not because you are understaffed or lack compute resources, but simply because your application, built on these data storage methodologies, cannot keep up? A document database is data storage, not data processing.
For that, you need the likes of PostgreSQL. Postgres is (also) free, it is open source… it is even community open source, how cool is that? And there is no annoying vendor telling fantasy stories about what Postgres can do, unlike MongoDB for instance.

So…

Coming back to the opening phrase: “We’re going to store data the way it’s stored naturally in the brain.”
It is kind of dumb to use a computer to store data the way it would be stored in the brain. The human brain is not designed to process YUGE amounts of data, simply because its structure is not designed to accommodate that. Period.
To process large amounts of data, you need structure, either when you store the data or at the moment you want to start doing stuff with it. Structuring data when you store it is by far the cheapest method. Technologies like JSON data storage add sufficient flexibility to that, and engines like Postgres have no trouble whatsoever processing such data.
Finally, the programs these vendors use to “store data the way it’s stored naturally in the brain” are written in computer code, which is also not “naturally like the brain”. Would we need to revert to medieval clerks to start recording the data in these documents? No, I guess not.
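To back the JSON claim with a sketch (the schema is invented for illustration): a JSONB column keeps the document-style flexibility, while Postgres still indexes and queries it like any other structured data.

```python
# Sketch: document-style flexibility inside relational Postgres via JSONB.
import psycopg2

conn = psycopg2.connect("dbname=shop")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id      bigserial PRIMARY KEY,
        payload jsonb NOT NULL
    )
""")
cur.execute("CREATE INDEX IF NOT EXISTS events_payload_idx "
            "ON events USING gin (payload)")

cur.execute("INSERT INTO events (payload) VALUES (%s)",
            ('{"type": "click", "page": "/cart", "user": 7}',))

# A containment query the GIN index can serve.
cur.execute("""
    SELECT id, payload->>'page'
    FROM events
    WHERE payload @> '{"type": "click"}'
""")
print(cur.fetchall())
conn.commit()
conn.close()
```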
Be smart,
Be modern,
Be hip and happening,
Be efficient and scalable,
Use relational database techniques…

Postgres in the Enterprise: Real-World Reasons for Adoption

The lead-in

Friend and colleague Bruce Momjian shared, in his presentation “Will Postgres live forever?” (https://attendee.gotowebinar.com/register/8793191803818201345), an overview of reasons why companies adopt open source software (OSS) in general. This overview was based on a survey done by Black Duck Software in 2016 (https://www.slideshare.net/mobile/blackducksoftware/2016-future-of-open-source-survey-results). The survey ranks databases second among the technologies that companies are moving to OSS.

In this series of short blog posts I want to highlight some practical cases of Postgres adoption in the enterprise from our day-to-day practice, and see which of the general factors of OSS adoption we find with Postgres.

IBM DB2 to Postgres

The first case deals with a semi-government organization that currently has an application landscape based on IBM DB2, Microsoft SQL Server, and Oracle. Our primary engagement is to support the move of their application landscape from IBM DB2 to Postgres.

The initial driving force behind this decision was the staggering renewal cost this organization was facing for its DB2 licenses. These licenses included IBM’s Deep Compression option (https://www.ibm.com/developerworks/data/library/techarticle/dm-1205db210compression/index.html). This option reduces disk space by as much as 50%; more importantly, it reduces database IOPS enough to meet the performance requirements of the applications. Moving from DB2 on z/OS to DB2 on Linux, the company had faced a performance deficit, which it initially resolved by using this Deep Compression option.

This option was also grounds for worry. Surely an expensive database option known to bring real benefits to DB2 performance would outperform Postgres, right? Would we be able to come near DB2 performance when running on Postgres?
Discussions with our own Postgres experts could not take away these concerns. We would have to test and validate.

The project had already kicked into gear when EnterpriseDB was engaged. Some of the software had already been migrated and converted to fit EDB Postgres Advanced Server (EPAS). For much of the data and schema migration, Full Convert, the software by Spectral Core (https://www.spectralcore.com/fullconvert/), was used.

The outcome

Part of the extensive test program of the migration project was performance testing, which also included a simulated full-pressure practice test with load generators.

The surprise was quite absolute when the final results of the performance tests were released: EPAS was able to keep up quite nicely with the original DB2 setup, and it even improved performance in some specific tests.

Performance tests concluded, June 28, 2018:

  Test    Postgres runtime (% of original DB2 runtime)
  1       87.2%
  2       101.0%
  3       109.4%
  4       79.3%
  5       70.0%
  6       109.5%
  7       106.4%

Analyzing this particular project, we can conclude that the reasons for choosing Postgres were:

  • Primarily #5, cost reduction: annual database running costs can be reduced in excess of 80%.
  • Secondly #2, freedom from vendor lock-in: Postgres is a community open source solution, with no single company controlling it.

More details

I am curious to learn what details you would want to read here in order to help you assess your specific situation! Let me know in the comments.

Swiss pgday 2018

The cool thing about zooming out… is that your world appears to get bigger.
No longer being personally bound to Oracle, and having the opportunity to work with PostgreSQL, also gives me the chance to go new places and explore new possibilities. One of the cooler things: participating in Postgres conferences.

Conference vibe

Where Oracle conferences, although having some deep technical aspects, tend to lean towards the business side of technology, especially today with Cloud first / Cloud only… PostgreSQL conferences tend to lean towards engineering: what things are we, that same Postgres community, building in and around Postgres? What do we think about these developments, and how can we improve them?
Postgres tends to have anywhere from five to ten different directions in which the product is being developed, and lots of people check, test, improve, criticize, and comment on all of these developments.
A significant difference, somehow.
The atmosphere at PostgreSQL conferences, though, is also simply super cool. New people to meet, new ideas to incorporate.

Swiss pgday

I had the opportunity to join and participate in the Swiss pgday (find the program here) in the beautiful town of Rapperswil, at the university of applied sciences (HSR), together with my friend and colleague, Postgres founding core team member Mr. Bruce Momjian.
The Swiss Postgres community booked a nice result with 30% more participants. In two tracks, over 12 talks were delivered by local and international speakers on many aspects of Postgres, from a more business-oriented perspective on Postgres to the new things that come with Postgres 11, which can now be tested by anyone who wants to!

With all these shifting panels, and with this second wave of open source now rolling, more intricate systems, such as relational database management systems, are being offered and adopted.
It makes sense to zoom out, as the opportunities increase so rapidly and in ways never foreseen.

I challenge and invite you all: come on board and ride this wave with us.

A week of PostgreSQL

One of the attractive things about my job is this… just a bit more often than every now and then, you get the opportunity to get out and meet people to talk about Postgres. I don’t mean the kind of talk I do every day, which has more of a commercial touch to it (don’t get me wrong, that is very important too!), but really talking about PostgreSQL, being part of the community, and helping spread the understanding of what open source database technology can do for companies. Running implementations, either small or large, trivial or mission-critical…

This past week was one of those weeks.

I got to travel through Germany together with Mr. Bruce Momjian himself. Bruce is one of the most established and senior community leaders for Postgres. He is also my colleague, and I would like to think I may consider him my friend. My employer, EnterpriseDB, gives us the opportunity to do this: to be an integral part of the PostgreSQL community, to contribute, and to help expand the fame of Postgres, no strings attached; to support the success of the 30,000 to 40,000 engineers creating this most advanced open source RDBMS.

The week started with travel, which brought me to Frankfurt. Frankfurt would be the proving ground for the idea of a pop-up meet-up: not an EDB marketing event or somewhere we sell EnterpriseDB services, but a place where anyone can simply discuss PostgreSQL.
We will be in a city, in a public place, answering questions, discussing things, or just relaxing with some coffee. The purpose is to show anyone interested what the PostgreSQL community is all about!

We spent the first day in Frankfurt at the 25hrs hotel. We had some very interesting discussions on:

  • Postgres vs. Oracle community
  • Changing role of DBA:
    • The demise of the Oracle DBA
    • RDBMS DBA not so much
  • Risk management
  • “Data scientist”
  • Significance of relational growing again

In the afternoon we took the train to Munich, which was a quick and smooth experience. Munich would be the staging ground for a breakfast meeting, a lunch… or just saying hi.

Bruce and I spent the day discussing:

  • How to go from using Postgres as replacement of peripheral Oracle to Postgres as replacement for all Oracle
  • Using Postgres as polyglot data platform bringing new opportunities

After the meet-up we headed to Berlin, training towards the final two events of the week. We spent Thursday teaching the EDB Postgres Bootcamp, having a lot of fun and absolutely not sticking to the program. With Bruce there, and with very interesting questions from the participants, we were able to talk about the past and the future of Postgres and all the awesome stuff that is just around the corner.
Friday morning started with a brisk taxi drive from Berlin to the Müggelsee Hotel. And, if you happen to talk to Bruce, you simply must ask him about this taxi trip 😉

pgconf.de ended up being a superb event with a record-breaking number of visitors and lots of interesting conversations. You will find loads of impressions here!

I got to meet a great number of the specialists that make up the Postgres community:
Andreas ‘Ads’ Scherbaum
Devrim Gündüz
Magnus Hagander
Emre Hasegeli
Oleksii Kliukin
Stefanie Stölting
Ilya Kosmodemiansky
Valentine Gogichashvili

I am already looking forward to the next Postgres events I get to attend… pgconf.de 2019 will in any case take place on the 10th of May in Leipzig.
It would be super cool to see you there; please submit your abstracts using the information on this page!

Getting traction through action!

Weird title, right? Very slick and marketing-speak…
It is also a deviation from the usual topics I discuss here, but it is at least (!) as important.

But it is true, and it is what we truly aim to do at JK-Consult, not just professionally but in everything we stand for.
Of course, as long as these are just words, what do they really mean? Not so much…

Therefore, starting in 2018, we are a sponsor of Senna Rodijk, an up-and-coming Dutch karting talent. Senna has chosen the bold path towards Formula 1 and will make this happen through her focus on karting with Chrono Karting.
Starting this year in a new class, the Minimax karts, we saw her race today in the first race of the new season!

Go Senna!

For the complete story, as she faces some unique challenges, check out her website.
Not only is she trying to establish herself as a champion in a field of mostly boys, and in a time when everybody wants to try their hand at karting to become the second Max Verstappen, which is a challenge in itself…
Senna’s major sponsor and coach, her bonus father Carlo Hoevenaars, has been diagnosed with terminal cancer, and his treatments have recently been ended. This is not pitiful (they don’t want pity); it is an incredible challenge!

Deep, deep respect for Carlo and deep respect for Senna and Suzanne!!

They can always use any (additional) support they can get.
Click here for more details and some words from Senna herself.

We have put our actions where our beliefs are! You only get traction through action.
We stepped up, and JK-Consult is the proud sponsor of Senna’s racing tires for 2018!

At the time of this writing, we have finished the first race of 2018 with an amazing result! Senna finished 5th and 6th in the two heats at the Horensbergdam circuit in Genk!


Who will follow and help us let Senna take the 2018 championship for Carlo!!

Why I picked Postgres over Oracle, part III

This is the final episode of this short series of blog posts on some of my drivers for moving from Oracle to Postgres.
Please do read Part I and Part II of the series if you have not done so. They discussed the topics “History”, “More recently”, “The switch to Postgres”, “Realization”, “Pricing”, “Support”, and “Extensibility”.

In summary:

  • Part one focused on “why not Oracle anymore, so much”
  • Part two discussed the comparison between PostgreSQL and Oracle
  • Part three talks some more about what Postgres actually is

Community

One of the more important things to be really, really aware of is that Postgres is not just “open source”. Postgres is “community open source”.

Now, why would that be important, you might wonder.

We all know what open source stands for. There are many advantages to an open source system; in our case, an open source database.
A number of arguments are given throughout this blog post series. If you take this one step further, though, and realize that Postgres is a community open source project, what are the extra advantages?

A community open source project is not limited, in any way, to any one specific group of developers (let’s call them a company). For example, let’s look at MongoDB. This is an open source database, but it is developed by MongoDB Inc.
It is, in essence, controlled by MongoDB.

Postgres is developed by the Postgres Developer Community coached by the Postgres Core Team.
This makes Postgres incredibly open and independent, and it enables its developers to truly focus on actual business problems that need to be solved. There is no ulterior drive to satisfy commercial goals or meet non-essential requirements.

Development

A very important discriminator, one that only became clear and apparent to me after I dove into Postgres some more, is the development…

The actual development of the database core software is done by this community we’ve just identified.

“Well, yes…” you might say, this is what open source stands for. But the impact of this extends well beyond support, which I mentioned in part II of this series. The ability to be part of where Postgres goes, to have actual influence on its development, is awesome, especially for a database platform in the current “world in flux”.
Postgres users don’t necessarily have to wait until “some company” decides to put something on the road map or develop it at its discretion. Such company decisions are mostly driven by the most viable commercial opportunity, not necessarily the most urgent technical need.

The development of Postgres is more focused on “getting it right”.
One nice example is the Postgres query optimizer. The Postgres community hates bugs. When bugs start to get discussed, it results in many emails within the community, and many emails mean a lot of reading!
Many bugs are therefore fixed very quickly, so that the email storm stops!
Even for a mechanism as complex as the query optimizer, turn-around times (from report to production fix) can be as low as 72 hours.
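If you want to look over the optimizer’s shoulder yourself, EXPLAIN is the front door. A minimal sketch (the orders table is hypothetical); EXPLAIN ANALYZE actually executes the query and reports estimated versus actual behavior.

```python
# Sketch: inspecting the optimizer's plan for a query.
import psycopg2

conn = psycopg2.connect("dbname=shop")
cur = conn.cursor()

cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE status = 'open'")
for (line,) in cur.fetchall():
    print(line)  # plan nodes, estimated vs. actual rows, timings
conn.close()
```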

Invitation

I would like to invite all of you in the Oracle community to take a look at the Postgres query optimizer and share your concerns, worries, bugs, or praise with the Postgres community.
If you want to, you can share them with us on the https://www.postgresql.org/list/pgsql-hackers/ mailing list. We are looking forward to your contributions!

Future

Oracle

I can only speak from what I see, and what I see is that Oracle is becoming an online services company. I see them moving away from core technology like the database and its accompanying functions. Oracle is more and more moving to highly specialized applications aimed at very big companies:
chat-bots, social media interactions, integrated services, and more, delivered from a tightly integrated but also tightly locked set of Oracle-owned and operated data centers, or rather, the Oracle cloud.

Is this useful? Of course; there will be targeted customers of Oracle who will continue to find all of this extremely useful, and to them it will be.
Is this for me? No, not really.

PostgreSQL

In the beginning, Linux was not something anyone wanted for anything serious. I mean, who wanted to run anything mission-critical on anything other than Solaris, HP-UX, VMS, or IBM? No one…
And that was just a few years ago. Imagine!
Today, in any old data center, if you eliminated the Linux-based servers, you would not have much left.
The same thing is now happening in what I guess is the second wave of open source: more complex engines are being replaced by open source, and the ever-present relational database engine is one of them.

Why? Price, extensibility, flexibility, focus, you name it. We have seen it before and we will see it again.

EnterpriseDB

If you permit me just these few words.

I think EnterpriseDB is extremely important for PostgreSQL. We have been fighting at the forefront since the beginning, supporting PostgreSQL’s move into the enterprise. EnterpriseDB has devoted, and will continue to devote, a large amount of its resources to PostgreSQL. We are a PostgreSQL support company; we have just not been very good at patting ourselves on the back…
As a company we are doing extremely well, simply because Postgres is rock solid in all facets and ready to take on the world, even the most daunting tasks, and beyond.
This will continue as Postgres continues in this second wave of open source.

I thank you for your attention.
If you have additional questions or comments, please do not hesitate to contact me.

Why I picked Postgres over Oracle, part II

Continuing this short series of blog posts on some of my drivers for moving from Oracle to Postgres.
Please do read Part I of the series if you have not done so. It discussed the topics “History”, “More recently”, and “The switch to Postgres”.

Realization

Over the last months, discussing Postgres with my Oracle peers and playing with the software and the tooling, I actually quite quickly realized that Postgres is a lot cooler, at least to me. Not so much overly complicated technology, but rather built to be super KISS: the elegance of simplicity, and it still gets the job done.
Postgres handles a lot more complex workloads than many (outsiders) might think. Some pretty serious mission-critical workloads are handled by Postgres today; basically, it has been doing this for many, many years. This is very little known, because who would want to spend good money on marketing for open source software, right? You just spend your time building the stuff and let somebody else take care of the marketing.
Well… we at EnterpriseDB do just those things, …too!

And, please, make no mistake: Postgres is everywhere, from your fridge and video camera, through TV set-top boxes, up to major online banking software, and in many other places you would not expect a database to (be able to) run. Postgres is installed in places that never get touched again. Because of its stability and its low-to-no-touch administrative character, Postgres is ideally suited for these specific implementations. Built on some of the oldest design principles around, Postgres does not need creating the database engine to be easy, as long as it “just works” in the end.
Many years ago, an Oracle sales director included a similar overview in his pitch: all the places where Oracle touches everybody’s lives, every day. This is no different for Postgres; it is just not pitched anywhere, by anyone, as much.

I have the fortunate opportunity to work closely with (for instance) Bruce Momjian (PostgreSQL core team founding member and EnterpriseDB colleague), and I have had the opportunity to learn from him some of the core principles on which Postgres was designed and built. These are fundamentally different from many other software projects I know, and I feel they truly answer some of the core requirements of database projects out there today! There is no real overview of these principles yet, so that is on my to-do list.

Working with PostgreSQL

Pricing

Postgres is open source… it is true open source. It is even a true community open source project, but more about that in the next installment.

Open source software is free to use, but it does not cost nothing!

But wait! Open source does not mean free?! How…, why…, what do you mean??

Well… you need support, right!?
The community can and will help you, answer questions, and solve some of your problems. But they will not come in to install, configure, and run Postgres for you. You will need to select and integrate your specific selection out of the wealth of tools, and you basically have a whole bunch of additional tasks to complete to get your Postgres platform sorted out.
Companies like EnterpriseDB can take these tasks off your hands. This allows you to focus on the things you actually want to achieve using Postgres.

Compared to traditional database vendors, the overall price of your solution will be significantly lower when you use Postgres as your open source database engine.

Support

A significant difference between Oracle (for instance) and open source support services is interchangeability.
In the end, Oracle support can only be provided by Oracle; they are the only ones with access to the software sources who can look up (and hopefully fix) issues. For Postgres, or any true community open source product, different companies can provide support. If you don’t like the company you work with… you switch. This drives these companies to be really good at delivering support! How is that for an eye-opener?

Extensibility

One of the superb advantages of Postgres is its native extensibility. I mean, think about it for a moment… a relational database platform with the strength of Postgres, or the strength of Oracle or Microsoft SQL Server for that matter, that gives you the options to integrate a wealth of data sources, data types, custom operators, and more extensions than you will ever need! The integration into Postgres is so solid that these extensions function like any other function in the core of Postgres.
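One small, hedged example of that extensibility: the contrib extension citext adds a case-insensitive text type that afterwards behaves like any built-in type (table and data are invented for illustration).

```python
# Sketch: extending Postgres with the contrib citext type.
import psycopg2

conn = psycopg2.connect("dbname=shop")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS citext")
cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        email citext PRIMARY KEY  -- comparisons ignore case from here on
    )
""")
cur.execute("INSERT INTO users (email) VALUES ('Ada@Example.com') "
            "ON CONFLICT DO NOTHING")
cur.execute("SELECT email FROM users WHERE email = 'ada@example.com'")
print(cur.fetchone())  # finds the row despite the different casing
conn.commit()
conn.close()
```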
And, rest assured, the chances that you will ever be faced with having to build such an extension yourself are extremely slim. There are 30 to 40 thousand developers working on larger and smaller pieces of Postgres code. Chances are that if you find yourself challenged, somebody else faced and solved that challenge before you. That solution will then be available for you to take and use, solve your challenge, and move on. That, too, is open source for you.

This capability is what makes Postgres ultimately suited for the central role in any polyglot environment we see being built today.
Maximizing the amount of information from data that lives in multiple data silos across an organization is a challenge we see more and more often. Integrating traditional applications such as ERP and CRM with data-warehousing results, combined again with big-data analysis and event-data-capture aggregates, generates additional decision-driving information out of the combination of these silos. Postgres, by design, is ultimately suited for this: it saves you from migrating YUGE amounts of data from one store to another just to make good use of it.
The open source dogma “horses for courses” eliminates double investments and large data migrations or transformations; it simply enables you to combine and learn from what you already have, as the sketch below illustrates.
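As a sketch of combining silos in place (server, credentials, and table names are all invented), postgres_fdw makes a remote CRM database’s tables queryable as if they were local, so they can be joined against local data without migrating anything:

```python
# Sketch: querying a remote silo in place with postgres_fdw.
import psycopg2

conn = psycopg2.connect("dbname=warehouse")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
cur.execute("""
    CREATE SERVER IF NOT EXISTS crm_srv
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'crm.internal', dbname 'crm')
""")
cur.execute("""
    CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
        SERVER crm_srv OPTIONS (user 'report', password 'secret')
""")
cur.execute("IMPORT FOREIGN SCHEMA public FROM SERVER crm_srv INTO public")

# One statement joins the remote CRM customers with local sales facts.
cur.execute("""
    SELECT c.name, sum(s.amount)
    FROM customers c JOIN sales s ON s.customer_id = c.id
    GROUP BY c.name
""")
print(cur.fetchall())
conn.commit()
conn.close()
```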

End of part II

A link to part three of this blog post will be placed here shortly.