IIUG Conference 2015

The IIUG 2015 Conference in San Diego is getting very close. Next week, with more than 80 presentations, keynotes and other events, the IIUG will celebrate its 20th birthday! Following the “conference for users, by users” policy, I will present on some of the tools (mostly open source) and techniques my team uses in our daily work with Informix. It will be a run through different software, from scripts and SQL tools to application development and testing techniques we use in the development, maintenance and support of big information systems. Though some of it will certainly be familiar to part of the audience, I hope everyone gets to hear and learn about something new.

Find the info about the conference and the schedule at www.iiug2015.org. Developers should be especially pleased to see a number of developer-oriented topics – Java, Hibernate, web security, etc. – just follow the D track.

See you at IIUG 2015!




Continuing Strong in 2015

Gartner analysts have responded to a recent report which stated that in 2015, the industry would begin retiring Sybase and Informix products. We’ve already seen so many such reports claiming Informix is dead, or soon will be, that I’m used to ignoring them all, but it’s nice to see Gartner investigate and state the opposite.

Here is the link to the article (the first part covers Sybase, and Informix follows). What gives additional warmth to the heart is the recognition of the IIUG as one of the important things when talking about Informix. Kudos, Donald and Merv!


What Does It Mean for Us?

The big announcement in the world of Informix the other day was IBM signing an agreement with the Chinese company General Data Technology (GBASE), which is, according to its website, a key software enterprise in the state plan. The essence of the agreement (a copy of the news is here) is that IBM shares the Informix code, and GBASE will (in support of the Chinese government’s national technology agenda) modify its security to conform to Chinese government standards. That modified Informix will then be used in future database projects.

So this is obviously big news for Informix, because it guarantees the expansion of the product into a fast-growing market, but what exactly does it mean for us, non-IBMers involved in the Informix business?
Well, I’m looking at this in good faith, so my personal thoughts are positive. This kind of agreement will result in bigger demand for Informix experts, both domestic and foreign. I have to admit I have no idea about the state of affairs regarding Informix experts in China, but I presume there will be a need for database designers and architects, and an even bigger need for DBAs to maintain the big installations we all imagine when thinking of databases in China. So it’s quite possible some of us will find our careers continuing somewhere in east Asia, or telecommuting for a local partner in China. Other than that, new DBAs will need to be properly trained, so that’s another opportunity for all of us involved in teaching and training. This also means the growth of the Informix community, and hopefully more international community members will spring out of these systems.

But most importantly, this means the long-term survival of Informix as a product, which is of course in the best interest of all Informix people. As Mr. Art Kagel said, this is proof Informix is here to stay. And a little dream to share with you at the end… it would be nice to have this kind of commitment in other countries as well. Just saying.



The Book: Data Just Right

I realized there are a number of books covering databases and data handling that I go through or am in touch with, but I never mention any of them or give any credit to their authors, so I’m going to change that starting today.

Recently I came across a book called Data Just Right by Michael Manoochehri, subtitled Introduction to Large-Scale Data & Analytics. Books with this kind of title could hide anything, quite often restricting themselves to one or two technologies, but that is not the case here. It is a review of the current state of data management and analytics, with a good sense of data management history and of the current needs and trends in the field. Reading this book won’t teach you how to use Hadoop, Pig, R or anything like them. It will give you a perspective on the various technologies used today, show some examples, and try to help you find the right tools for your needs.

What I found interesting about it is the breadth of technologies and ideas covered. In the book, especially in the opening chapters, so many products, languages, tools, names and methodologies are mentioned that only a select few data experts could know about all of them: Codd, OLAP, NewSQL, BigQuery, SOX, Tableau, SciPy, to name just a few. For a book of only 200 pages, it has an index of more than 1200 entries. So, in my humble opinion, this is why the book is worth going through – it gives a good perspective on data technologies to any kind of reader: data management novice, expert, CIO, CTO. At the same time, this is a burden for the book, because it will require changes in subsequent editions to stay current in a fast-changing data management and analytics landscape. This first edition is certainly worth reading.

More info about the book on its website: datajustright.com.



Working with JSON Data from SQL

MongoDB support was introduced in 12.10.xC2, bringing many cool things to Informix, one of them being the JSON and BSON data types. Putting the whole NoSQL and MongoDB story aside, these new data types enable us to work with semi-structured data directly from SQL, thanks to several new built-in functions. Of course, you could do the same with XML documents, but it took a while before all the necessary functions became available in Informix, and working with XML is still more complex than working with JSON because of the differences between the two formats.

In order to put data in a JSON column you can use the genBSON function, or simply cast text to the JSON type. Here’s an example – a tourist office database table storing various places people could visit. One table with JSON data could be used to store data on many different places – cities, regions, islands, landmarks etc. So the table could be defined as:

CREATE TABLE places (
  place_id SERIAL,
  numberOfVisitorsPerYear INT,
  place BSON
);

The place column could also be of the JSON type, but if you want to perform more meaningful queries on the table, stick to BSON. There are several built-in BSON functions we can use, and BSON values can be cast to JSON.

Rows could be inserted via plain SQL:

INSERT INTO places VALUES (0, 500000, '{city: "Zagreb"}'::JSON);

Note that the last value needs to be cast to JSON in order to be able to run queries on it with the bson_value functions. Here are some other data rows with various attributes describing places:

INSERT INTO places VALUES (0, 600000, '{city: "Pula", country: "Croatia", population: 57000}'::JSON);
INSERT INTO places VALUES (0, 20000, '{mountain: "Velebit", country: "Croatia", height: 1757}'::JSON);
INSERT INTO places VALUES (0, 1000000, '{national_park: "Plitvice", country: "Croatia"}'::JSON);

The simplest way to find out what is stored in the table is to execute a query like this one:

SELECT place_id, numberOfVisitorsPerYear, place::JSON
FROM places;

which will return these results:

place_id  numberOfVisitorsPerYear  (expression)
1         500000                   {"city":"Zagreb"}
2         600000                   {"city":"Pula","country":"Croatia","population":57000}
3         20000                    {"mountain":"Velebit","country":"Croatia","height":1757}
4         1000000                  {"national_park":"Plitvice","country":"Croatia"}

However, the idea is to be able to search within JSON data. For that purpose, there are some new functions we can use:

  • bson_value_int(column, bson_attribute) – returns an integer value of the bson_attribute stored in a specified column of the row
  • bson_value_bigint(column, bson_attribute) – returns a bigint value of the bson_attribute stored in a specified column of the row
  • bson_value_double(column, bson_attribute) – returns a double value of the bson_attribute stored in a specified column of the row
  • bson_value_lvarchar(column, bson_attribute) – returns an lvarchar value of the bson_attribute stored in a specified column of the row

Here are some query examples:

-- find all destinations in Croatia 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'country') = 'Croatia'; 

-- find all destinations without a specified country 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'country') IS NULL; 

-- find all mountains higher than 1000 meters 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'mountain') IS NOT NULL 
AND bson_value_int (place, 'height') > 1000; 

-- find all national parks in Croatia 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'country') = 'Croatia' 
AND bson_value_lvarchar (place, 'national_park') IS NOT NULL; 
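
If queries filtering on a particular JSON attribute are frequent, an index can be built on the corresponding bson_value function, much like any other functional index. The statement below is only a sketch – the index name is mine, and you should verify functional index support on BSON columns for your Informix version:

-- hypothetical functional index on the 'country' attribute
CREATE INDEX idx_places_country
ON places (bson_value_lvarchar(place, 'country'));

Once created, queries with a WHERE clause on bson_value_lvarchar(place, 'country') may use the index instead of scanning the whole table.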

There is another new function, genBSON, which generates BSON/JSON data from a relational table’s data, depending on a query. It can be used to return JSON directly from a query or to insert JSON data into a column. The Informix Knowledge Center page for this function is informative, with some examples, so I’m not going to repeat it all here. Continuing our example, if a tourist office already has a relational table named cities in its database, then its data could be imported into the places table with a single SQL statement:

-- cities relational table 
CREATE TABLE cities (
  city_id SERIAL,
  city CHAR(30),
  population INT,
  country CHAR(30),
  numberOfVisitorsPerYear INT
);

-- copy the cities data into the places table: 
INSERT INTO places
SELECT 0, numberOfVisitorsPerYear, genBSON(ROW(city, country, population), 0, 1)::JSON
FROM cities; 
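
genBSON can also return JSON directly in a result set, without storing it anywhere. Here’s a small sketch against the same cities table (the column alias is mine):

-- return each larger city as a JSON document, straight from the relational table
SELECT genBSON(ROW(city, country), 0, 1)::JSON AS city_doc
FROM cities
WHERE population > 50000;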

Or, if we don’t want to have a copy of cities in places, a view on structured and semi-structured data could be made (this one returning only JSON data):

CREATE VIEW places_and_cities (place) AS 
  SELECT place::JSON FROM places 
  UNION ALL
  SELECT genBSON(ROW(city, country, population), 0, 1)::JSON FROM cities; 

In conclusion, with JSON capabilities at hand, it’s now pretty simple to mix structured and semi-structured data in a single database. But before we do, we should make sure there is a genuine need to design our data model that way, bearing in mind the numerous advantages of keeping relational data in a relational database.



IBM Bluemix Is Generally Available

I previously mentioned IBM Bluemix as a cool new thing open for beta testing, and as of today it is generally available. Bluemix is a powerful development platform in the cloud, with many services already included – some from IBM, some open source, some third-party. Some of the services can be used for free, while others have a monthly payment plan.

One of these services is our favorite database, Informix, which can be found under the cryptic name of Time Series Database. It can be used as a standard Informix database – there is no restriction to TimeSeries data only.

There is also a one-month free trial, and I encourage developers, DBAs, project managers and decision makers alike to have a look. Find Bluemix at www.bluemix.net and the GA news on the Bluemix blog here.



A Short Recap of the Informix >> 2014 Conference

As previously announced, our Informix >> 2014 (Fast Forward your Data) Conference took place in Zagreb, Croatia on May 22nd, with some great presenters and topics. Jerry Keesee gave a great overview of the current state of Informix technologies, the Informix road map, IBM’s software portfolio and Informix’s role in it. Stuart Litel explained the work the IIUG is doing for all the Informix people and the product. He also talked about the IIUG Board of Directors Award, once again promoting one of this year’s two winners, Adria IUG president Hrvoje Zoković.
Jan Musil gave two great live demo presentations, one about exploiting Genero to create native mobile applications for both Android and iOS devices, and the other about using the new MongoDB capabilities in Informix, including sharding.
Frederick Ho presented the impressive current state of the Informix Warehouse Accelerator, while Jean Georges Perrin showed the potential of employing the JSON capabilities in an existing information system. And finally, yours truly gave a talk about the Internet of Things and its impact on our future.

On the downside, the event attendance was not as expected. However, we hope to get it back on track for the upcoming events.

