Informix for Newbies

Mr. Jan Musil, the man behind some of the best and most complex live Informix demos most of us have ever seen, has put together a great document: Informix for Newbies (part one).

The document is in the form of slides, and it covers topics from download and installation to simple configuration and instance monitoring. It is intended for people who have basic Linux skills and an understanding of how a database engine works.

This kind of material is something the Informix community is always short of, so it’s more than welcome. Please feel free to download, use and share it: https://lnkd.in/dY387zC.

Thanks, Jan. We’re waiting for part two :).


Been a While

The title of my blog is Informix on my mind, but since I haven’t published anything for some time, one could argue that Informix was not on my mind lately :). True, but only to some extent. Most of the time I’ve been busy finishing my doctorate research, which I had procrastinated for too long, and finally defending my PhD thesis. During this time, from my perspective, only a couple of things worth mentioning happened (which doesn’t mean I’m going to stop writing).

First, I gave a presentation at the IIUG 2017 Conference in Raleigh, NC. In it I covered some of the advantages of graph analysis and argued that a graph structure could be used for OLAP purposes. I also presented the algorithm and my open-source implementation of an ETL process which creates a graph OLAP database from an Informix database. Stay tuned – I’ll cover this in more detail, along with links to the code and presentation, in a separate post.

Second, and of much more importance to the global Informix community, was the IBM-HCL partnership deal for Informix development and maintenance. It was announced before the IIUG Conference in April and produced a lot of different reactions, from “IBM finally killed it” to “this is long-expected good news”. I’m not going to elaborate on the details of the deal since there is plenty of information available (let me google it for you). I was among those who were optimistic about this change, since IBM’s work on Informix was slow, especially in the last couple of years. On top of that, many great people who worked on Informix left IBM or were moved off the product.

Now, it has been nearly half a year since the HCL partnership was introduced, and we’re all still waiting for a sign that it works. In this time, HCL has hired some very good, Informix-caring people, and I have high hopes that they will succeed in moving things in the right direction. Yet we have still to see what happens, and I wonder how much longer we should wait.

HCL, the community is here, listening and it’s anxious.

 


Serving Customized Documents from the Database

If your database serves an information system, there is a pretty good chance that the system generates written documents for end users – predefined reports, or simply letters or official documents to be printed. In the case of relatively simple documents, i.e. not complicated multi-level detailed reports, the application has to generate them from a template or some other meta-description, filling it with case-specific data. If the same documents need to be generated from different apps (e.g. a web app and a desktop app), a solution which generates and serves them right from the database might come in handy.

There are several ways to do this, but probably the simplest is to call a stored procedure (or a function) with the right parameters, which returns the document. The document templates should be stored somewhere in the database. The document generation function takes the appropriate template, searches for predefined tokens and replaces them with the correct data, depending on the input parameters. That’s why the template needs to be in some simple text format – RTF (Rich Text Format), for instance, which allows neat formatting, can be prepared in most WYSIWYG text editors, can contain pictures and other advanced formatting, and yet can be viewed as a plain text file, so tokens can be found and replaced easily.

Tokens should be predefined, agreed upon, and unique in a way that they could never be part of normal text. Some examples of tokens might be: REPLACEME_CURRENT_DATE, ##CUSTOMER_NAME## or %customer.id.number%. I suggest defining a single notation and sticking to it. Just make sure the token format does not interfere with the special characters your chosen document format uses. After this, the templates can be prepared in any rich text editor, like LibreOffice Writer or MS Word, and stored in a database table.

Next, the document generating function has to be written. Depending on the size of the template, you might need to use character data types like CLOB or LONGLVARCHAR (a new, still undocumented type), which makes things more complicated. The main reason is the REPLACE function, which doesn’t support the bigger character types. Since we want to replace all tokens of the same kind in the document at once, we can store the file on the server, iterate through all the token types, replace them with the proper values (using sed, for example), pull the file back from the file system and return it to the user. Here is an outline of a function which might do that:

CREATE FUNCTION generateDocument (
   document_id INT, additional parameters...) 
   RETURNING CLOB;
  -- define the variables
  -- get the appropriate template from the table 
  -- and store it on the file system using LOTOFILE function
  -- fetch the values which will replace tokens in the template
  FOR token IN (list of tokens)
      -- replace each token in the file using SYSTEM 'sed...'
  END FOR;
  -- pull the file from the file system using 
  -- FILETOCLOB and return it
END FUNCTION;
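
Filled in, a minimal sketch of the function might look like this. Everything specific here is an assumption: a template table named doc_template(id, body), a single token ##CUSTOMER_NAME##, GNU sed on the server and /tmp as scratch space – adjust to your environment, and sanitize anything passed to the shell:

CREATE FUNCTION generateDocument (document_id INT, customer_name VARCHAR(255))
   RETURNING CLOB;
   DEFINE result  CLOB;
   DEFINE tmpfile VARCHAR(255);

   -- store the template on the server file system;
   -- LOTOFILE returns the name of the file it created
   SELECT LOTOFILE(body, '/tmp/doc_' || document_id || '.rtf', 'server')
     INTO tmpfile
     FROM doc_template
    WHERE id = document_id;

   -- replace all occurrences of the token in the file, in place
   SYSTEM 'sed -i "s/##CUSTOMER_NAME##/' || customer_name || '/g" ' || tmpfile;

   -- pull the file back from the file system and clean up
   LET result = FILETOCLOB(tmpfile, 'server');
   SYSTEM 'rm -f ' || tmpfile;

   RETURN result;
END FUNCTION;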

This solution requires a smart blob space and file system privileges for the user executing it, but there are no other special demands. If you’re a fan of DataBlades, there is also the possibility of using string-manipulating blade functions to make the replacements directly and leave the file system out entirely.


Simple and Meaningful Row Level Auditing

Have you ever been asked about the history of a certain piece of data? Like, who entered it and when? If you have, and you couldn’t figure out the answer, then you probably need some kind of auditing in place. There is a bunch of commercial (and free) products out there, most of them working with various data sources, not just Informix; some are software-only solutions, others are hardware-software combos. One of IBM’s products, quite noticeable lately, is aimed at exactly this purpose. If you know about it, then you’re probably already using it, or will use it at some point.

But what if you don’t need or can’t afford an expensive high-tech solution to keep an eye on just a few tables? Or even an entire database? Well, luckily for us, Informix itself offers an audit facility, free of charge. On second thought, maybe not.

Here’s the thing. Informix comes with an auditing facility, and it’s actually quite good. It can keep track of various events (as many as 161 in version 12.10), some of which should really be enabled on every production system, if nothing else then for logging purposes. It also implements role separation, a very important feature if you need to be confident that no one has tampered with your audit trail. And it enables you to create masks, or profiles, so you can audit different things for different types of users.

Unfortunately, most of the time it can’t help you answer the question about the history of the data. When it comes to row level auditing, it’s mostly useless. There are four events related to row level auditing that can be turned on: INRW (insert row), UPRW (update row), DLRW (delete row) and RDRW (read row). All things considered, row level auditing on a live system should be used with utmost caution, as these events can produce enormous amounts of audit trail in a short time – especially the last one, which writes one row to the audit trail file for each row read by the user. Just think of what would happen if a user carelessly executed a SELECT statement without criteria.

At first there was no way to select which tables would be audited this way, so you either had INRW, UPRW and/or DLRW turned on for all tables or for none. As this wasn’t very useful, it was enhanced soon enough. Now it is possible to specify which tables get audited, provided some of these four events are turned on and the ADTROWS parameter is set to 1. This is done via the WITH AUDIT part of the CREATE TABLE statement:

CREATE TABLE example (a int, b int ...) WITH AUDIT;

If the table is already created, row level auditing for it could be added

ALTER TABLE example ADD AUDIT;

or removed

ALTER TABLE example DROP AUDIT;

at any time.
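
For completeness, switching the facility itself on happens outside SQL, with the onaudit utility. A rough sketch (the user mask below is just an illustration; run this as a user holding the appropriate audit role, informix by default):

onaudit -l 1                                 # turn auditing on (ADTMODE 1)
onaudit -a -u appuser -e +INRW,UPRW,DLRW     # audit row level events for user appuser

ADTROWS itself is set to 1 in the adtcfg file ($INFORMIXDIR/aaodir/adtcfg).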

And now for the unfortunate part. The audit trail is written to audit files. Every audit trail entry has the same structure, as explained in the documentation. While this format is useful or sufficient for some events, that is not the case with the row level events. For each of the row level events, only basic info is logged: tabid, partno and rowid. In the case of an update event there are two rowids, an old and a new one. So basically, there is no way to figure out what the value of a field was before the update, or even which field changed. After a row is deleted, there is no way to find out what data it contained. The only way to know is to dig through the logical logs, as I previously explained, but that’s not how data history should be explored.

Bottom line: if you need to know your data history (inserts, updates and deletes), you should either acquire a tool to help you with that, or try to set something up on your own. So here comes the happy ending. There are several ways to do this; I’m showing the one with triggers and shadow tables. The idea is to have a shadow table for each table you’re auditing, with the same set of fields plus some extra ones: the username, the operation performed on that row and a timestamp. Additionally, a set of triggers on the original table is needed to ensure the shadow table is filled. Informix has supported multiple triggers on a single event since version 10.00 or so, so this is not a problem. If you’re thinking about implementation, there’s more good news – I’ve written and shared the code which will do that for you (a view and a couple of procedures, find it here). All you need to do is call the main procedure, providing the table name, the dbspace name and the extent size of the shadow table, like this:

EXECUTE PROCEDURE createAuditForTable('exams', 'audit_dbs', 128);

It will create a shadow table in the designated dbspace, named like the original table but with _ at the beginning. It will also create three triggers, each named with _ at the beginning, followed by the table name, with the operation name at the end, so they can be identified more easily (in this example, the shadow table will be _exams and the triggers will be named _examsinsert, _examsupdate and _examsdelete). These triggers fire on each insert, update or delete operation on the original table and write the actual row data, the operation performed, the user who performed it (the USER variable) and the time of the operation (the CURRENT variable) to the shadow table. The actual row is the new row (in the for-each-row trigger) in the case of an insert or update operation, and the old row in the case of a delete operation.
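
To illustrate, the generated insert trigger could look roughly like this (the exams columns here are made up – the procedure actually derives them from the system catalog):

CREATE TRIGGER _examsinsert INSERT ON exams
   REFERENCING NEW AS n
   FOR EACH ROW (
      INSERT INTO _exams
      VALUES (n.exam_id, n.student_id, n.grade, 'I', USER, CURRENT)
   );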

So from this moment on, you just have to query the shadow table to get the history of the original table. If a delete occurred, the whole deleted row can be seen in the shadow table. If an update occurred, you can find the previous row with the same primary key and compare the differences. Once you decide that no more auditing is necessary on a table, simply drop the shadow table and its triggers.
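
For example, assuming the extra columns are named op and optime (made-up names – check the generated shadow table for the real ones), a row’s history is one query away:

SELECT *
  FROM _exams
 WHERE exam_id = 42   -- the row in question (hypothetical key column)
 ORDER BY optime;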

Obviously, there is storage space required for this, but the same goes for every other type of auditing, except this one is managed inside the database. It’s easy to govern and explore; only an SQL interface is needed.

There are security issues at play here, since no real role separation can be implemented. Triggers can be disabled or dropped, and shadow tables with the audit trail can be tampered with. Some of these problems can be mitigated by eliminating users with elevated permissions, like RESOURCE or DBA, and others by storing or moving the shadow tables to another server. But still, the database system administrator (DBSA, i.e. the informix user) can override all of this. So if you need to be in compliance with some kind of regulation, this type of auditing is not for you.

Otherwise, if you need to know your data history from an end-user point of view, this simple auditing can help you. All the code needed can be found here; it’s open sourced under the GNU GPL v3 license. Feel free to use it and extend it. After noting that I cannot be held responsible if anything goes wrong :), just put it in your database and execute the procedure with the correct arguments – the tables you want audited.


Moving to GitHub

A while ago Google announced it would be shutting down its Google Code service, so a new home for some of the open-sourced tools had to be found. That’s why I switched to GitHub, and all the links in previous posts and on the software and tools page have been updated to the new location.

On top of that, I’m determined to continue sharing code – my own and that of some other developers, with their permission, of course. As I stated at the IIUG earlier this year, this is something we, the Informix community, should do to help Informix grow and become more accepted: share the knowledge, share the code – the scripts, queries and tools we write and use. No matter how small or insignificant the code may be, there is always a novice out there who’ll find it useful.

In that regard, while migrating away from Google Code, I was happy to see there had been more than 2350 downloads of the ifmx-sql-editor binaries. Hopefully it is as useful to somebody out there as it is to me :-). Some new additions to it are on the way as well. Stay tuned.


IIUG Conference 2015

The IIUG 2015 Conference in San Diego is getting very close. Next week, throughout the conference with more than 80 presentations, keynotes and other events, the IIUG will celebrate its 20th birthday! And following the “conference for users, by users” policy, I will present some of the tools (mostly open-source) and techniques my team uses in daily work with Informix. It will be a run through different software, from scripts and SQL tools to the application development and testing techniques we use in the development, maintenance and support of large information systems. Though some of it will certainly be familiar to part of the audience, I hope everyone gets to hear and learn something new.

Find the info about the conference and the schedule at www.iiug2015.org. Developers should be especially pleased to see a number of developer-oriented topics – Java, Hibernate, web security, etc. – just follow the D track.

See you at IIUG 2015!


Continuing Strong in 2015

The Gartner guys reacted to a recent analyst report which stated that in 2015 the industry would begin retiring Sybase and Informix products. We’ve already seen so many such reports claiming Informix is dead, or soon will be, that I’m used to ignoring them all, but it’s nice to see Gartner investigate and state the opposite.

Here is the link to the article (the part on Sybase comes first, and Informix follows). What gives additional warmth to the heart is the recognition of the IIUG as one of the important factors when talking about Informix. Kudos, Donald and Merv!
