
A new North Star has risen


While this post is not going to be about pulsars, black holes or any other astronomical phenomenon…it still is going to cover one of the most exciting and fundamental shifts in the IT industry today.

A tale of two convergences

This is a tale about convergence.
To understand the significance of any convergence, you need to understand the lines that are coming together, so please indulge me while I review them.
The challenging bit is that, inside both of these lines, we see only the very earliest glimpses of this new era. Both communities are still very much busy with the day-to-day operations of open source, or with conquering more of their base realm.

Why Postgres is the answer

I migrated myself from Oracle to Postgres! Moving from a steady path as an Oracle ACE to this—at least for me at that time—brand new world of open source data management. If you want to know how and what, I wrote a trilogy about that here, here and here.
After working in the Oracle realm for almost a quarter of a century, moving from Oracle to Postgres truly felt like following a new North Star. It has proven to provide good guidance.
With the phrase "horses for courses" still in my head: the PostgreSQL Global Development Group focuses exclusively on Postgres. Coming from Oracle, though, it made me wonder who focuses on all the other aspects that vendors build to make their systems work. This "emergence from the red bubble" made me realize that there are much broader challenges, which leads to the second thread of convergence in this story.

I like to think this puts me in a position to survey this part of the spectrum (databases and data management) to a certain degree.

Note: once you see where the data management industry is moving, Postgres is the answer.
Hold that thought!

Brain-breaking challenges

Over the years, much focus has been on infrastructure, simply because it is expensive, tedious, error-prone and has lots and lots of room for improvement.

  • Infrastructure as code – your server is not your pet, you do not tend to it, you replace it.
  • Cloud infrastructure – your server?? It is not your server… you just use it, you are server-less.
  • Many more of these developments have shaken the world since I started in IT.

There are a number of drivers that we can distinguish and that play a part here:

  • Cost needs to go down!! The perennial CIO challenge is: do more with less. And we are succeeding at that, year after year…
  • Speed needs to go up!! We need more features for our applications, we need them sooner, and we need them to work more flawlessly.

Having worked in IT operations at various points in time, I can say these drivers have always given me the hardest, most brain-breaking challenges.

The winner is…Postgres

Really, and undeniably so (or at least in my book): Postgres has won, and where it is still competing, it will win. Full stop.

Why?

There are several reasons why I think—why I know—this is going to happen.

  1. Publicly governed open source, community-driven open source, give it a name. There is no for-profit entity behind the technology…Hence, there are no barriers or boundaries to opportunities and direction.
  2. Relational will never die. Dr. Michael Stonebraker said it himself: there is going to be no post-relational era, and the more data management and processing methodologies we get, the more we will need SQL to make sense of things.
  3. Metcalfe's and Reed's laws: more ideas, more contributors, more firepower for Postgres. Postgres' strong and unwavering foundation grows and evolves, and the foundations laid by the PostgreSQL core team will continue to feed the fire of Postgres for the foreseeable future.
  4. Data warehouse, data lakehouse, graph, NoSQL, NewSQL, distributed, and whatever other mumbo-magic words you can put together. We've seen it in the past and we will see it again…it will all converge back to Postgres. It's inevitable. The latest evidence: Apache AGE (see the sketch right after this list). I rest my case.
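To make that last point tangible, here is a minimal, purely illustrative sketch of Apache AGE bolting graph queries onto a plain Postgres database. It assumes the AGE extension is installed and that psql connects to a database you own; the graph name, labels and properties are made up for illustration.

    # Run graph (Cypher) queries inside plain Postgres via Apache AGE
    psql <<'SQL'
    CREATE EXTENSION IF NOT EXISTS age;
    LOAD 'age';
    SET search_path = ag_catalog, "$user", public;
    SELECT create_graph('demo');
    -- Create two vertices and an edge, Cypher-style
    SELECT * FROM cypher('demo', $$
        CREATE (:Person {name: 'Ada'})-[:KNOWS]->(:Person {name: 'Grace'})
    $$) AS (v agtype);
    -- And query them back with a graph pattern match
    SELECT * FROM cypher('demo', $$
        MATCH (a)-[:KNOWS]->(b) RETURN a.name, b.name
    $$) AS (a agtype, b agtype);
    SQL

Same relational engine, same SQL front door; the graph workload simply converges back into Postgres.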

Stop for a minute…just a brief pause to think about some of the implications of this. Put prejudice aside and just consider: the Power of Postgres, the most transformative tech since Linux.

Kubernetes (K8s)

Enter Kubernetes! If you have not yet done so, I strongly recommend watching the two-part documentary on the origins of K8s here and here.

Once upon a time, at a conference somewhere in Eastern Europe, a keynote speaker was talking about IT operations. At some point in the story, he told us that he was on the phone with his CEO after a failed deployment. He said: "Look, Sir, the application worked on my computer!" and his CEO replied: "Well, that's all good, except that I am not paying you to make it work on your computer; I pay you to make it work on my computer."
In my opinion, this is one of the key problems that container infrastructure solves: immutable infrastructure, as part of running Postgres the Kubernetes way. It is a critical new way of looking at integrating Postgres as a core element for data processing in a truly cloud native manner.
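To give a feel for what "the Kubernetes way" means here, a deliberately minimal sketch follows. Everything in it (names, image tag, storage size, the plain-text password) is an illustrative assumption, not a production recipe; a real setup would use an operator and a Secret. But even this toy manifest shows the idea: you declare the desired state, and K8s makes it so.

    # Declare a single-node Postgres as desired state; K8s reconciles towards it
    kubectl apply -f - <<'YAML'
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: pg
    spec:
      serviceName: pg
      replicas: 1
      selector:
        matchLabels: {app: pg}
      template:
        metadata:
          labels: {app: pg}
        spec:
          containers:
          - name: postgres
            image: postgres:16
            env:
            - name: POSTGRES_PASSWORD
              value: example   # illustration only; use a Secret in real life
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
    YAML

Kill the pod and an identical one comes back, reattached to the same data volume: the server is cattle, the state is declared, and "it worked on my computer" stops being an argument.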

Apart from doing infrastructure very well, K8s turns the world of application development on its head. Monolithic applications become microservices-based solutions, the paradigm that allows for things such as Continuous Integration / Continuous Delivery, and many more of the super-cool practices that DORA describes.

Oh, and did I mention that Kubernetes is also a publicly governed, community-driven open source project?! Check out what the CNCF is all about! I won't go on about Seven of Nine this time, I promise.

Data on Kubernetes

So basically, there you have it!

We have seen AMAZING things from Postgres!
But that community never really focused on deployment and the like; "horses for courses".

We have seen AMAZING things from Kubernetes!
But that community never really focused on data and the like, because developers need to build features!

Under the awe-inspiring guidance of the CNCF, we have an actual Data on Kubernetes Community!
A first, profound and fundamental step on the path of convergence, where Postgres meets Kubernetes and we start enabling a new era; one that might finally bring answers to some age-old challenges (well, age-old as far back as the invention of computers, really) and to some newer ones.

From here it is basically: “Hi-ho, Silver! Away!”
There is no stopping this, or as a colleague once paraphrased Babylon 5: “The avalanche has already started. It is too late for the pebbles to vote.”

This is the new North Star.

In the end, we’re just getting started

  1. Oracle to Postgres—well, that’s done. What has not yet been migrated will probably die out at some point.
  2. Postgres is established, no debate there.
  3. Kubernetes is so strong, so appealing, it answers so many questions that it will be with us for quite some time.

There is, though, this one fundamental gap. However you twist or turn it, the user of your app needs data; otherwise, what's the use of all of your mega-cool features?

Postgres and Kubernetes, the two most powerful technologies of today, answer that question.


Synology backup with CrashPlan 4.3.0

I recently upgraded to CrashPlan 4.3.0, which I use to back up my Synology to a remote location.

On Synology, you can only use CrashPlan in a headless manner, so I am running “the head”, the client, from my MacBook.
After the update to CrashPlan 4.3.0, I was unable to connect to the engine running on my Synology. And that is a pain, as I could no longer control the CrashPlan setup, which I needed in order to make some setup changes.
I thought I'd write it down, as the fix is a combination of two pieces of forum information with a small alteration.

Here's how I fixed it (I took the rigorous way, as I feel a clean start is the best start, and CrashPlan keeps all your settings with your account anyway):
1) remove CrashPlan from Synology (using the package manager)
2) remove CrashPlan from my MacBook
3) install CrashPlan on Synology (using the package manager)
4) install CrashPlan on my MacBook from the CrashPlan website
5) change the client ui.properties to include serviceHost=<your NAS name / IP>
6) replace the client's .ui_info with the one from the Synology NAS (and this was the missing bit):

Synology (server) side of things:
– Edit my.service.xml; mine was located in /volume1/@appstore/CrashPlan/conf/my.service.xml. Change <serviceHost>localhost</serviceHost> to <serviceHost>0.0.0.0</serviceHost>, and keep the default port <servicePort>4243</servicePort>.
– Get the server user id information (check your path…). You could use the command: cat /Library/Application\ Support/CrashPlan/.ui_info ; echo
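Put together, the server side of the fix looks something like this sketch, run over SSH on the NAS. The paths are the ones from this post; verify them on your own box before running, as the .ui_info location in particular varies per install.

    # On the Synology, over SSH -- paths follow the post, verify yours
    CONF=/volume1/@appstore/CrashPlan/conf/my.service.xml
    # Make the engine listen on all interfaces instead of localhost only
    sed -i 's|<serviceHost>localhost</serviceHost>|<serviceHost>0.0.0.0</serviceHost>|' "$CONF"
    # Print the token the client must present (location varies per install)
    cat /Library/Application\ Support/CrashPlan/.ui_info ; echo

Restart the CrashPlan package from the package manager afterwards, so the engine re-reads my.service.xml.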

MacBook (client) side of things:
– Make a backup of the client .ui_info file, just in case: sudo cp /Library/Application\ Support/CrashPlan/.ui_info /Library/Application\ Support/CrashPlan/.ui_info.backup
– Substitute the original client .ui_info content with the .ui_info coming from the server: sudo vi /Library/Application\ Support/CrashPlan/.ui_info
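And the client side as one sketch. The PORT,GUID-FROM-SERVER placeholder stands for the exact string the cat on the server printed; I am simply echoing it into place here instead of pasting it in vi.

    # On the MacBook -- back up the current token, then swap in the server's
    UI="/Library/Application Support/CrashPlan/.ui_info"
    sudo cp "$UI" "$UI.backup"
    # Replace PORT,GUID-FROM-SERVER with the exact output from the server
    printf '%s' 'PORT,GUID-FROM-SERVER' | sudo tee "$UI" >/dev/null

Restart the CrashPlan client afterwards, so it picks up the new token and the serviceHost from ui.properties.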

And, presto, this is what did it for me and my Synology!

Cloud Database Offers On-premise Advantages

We live in times when technologies are abundantly available to help you make the very best of the data you gather from your business processes.

Increasing numbers of businesses choose to host their production database environment in one of the many cloud forms available these days. This example of a smart alternative discusses an additional service you could implement or request when you are dealing with cloud-based databases.

In many organizations there is a BI team responsible for developing company-specific KPIs or composing competitively strategic information from the data gathered during day-to-day business. There are often key management positions with a need for ad hoc queries against live data. In recent years, this intelligence has been recognized as crucial for decision support, and for giving your organization the biggest competitive advantage possible.

Developing or even running these activities on live data gives the sharpest edge. Doing this on a production environment, however, is out of the question: uninterrupted availability and maximum responsiveness for the regular activities of these databases are unquestionably important. How can you combine these factors with the proposition of running your database in the cloud, and still stay smart?

The smart alternatives of Dbvisit enable you to do just this! By leveraging Dbvisit Replicate in a hosted environment, you can create one or many local copies of live production data, with specific local database settings, to do precisely what you need, be it running or developing heavy BI queries or having departmental management look at and analyze data as it is recorded. Having (a subset of) the live data uni-directionally delivered from the cloud to your local (desktop) database creates a safe environment for analysis, and enables knowledge workers to do their job with no holds barred!