Archive for category Scripts And How-Tos

Serving Customized Documents from the Database

If your database serves an information system, there is a pretty good chance that system generates some written documents for end users – predefined reports, or simply letters or official documents to be printed. In the case of relatively simple documents, i.e. not complicated multi-level detailed reports, the application has to generate the document from a template or some other meta-description of it, and fill it with case-specific data. If the same documents need to be generated from different apps (e.g. a web app and a desktop app), a solution that generates and serves them right from the database might come in handy.

There are several ways to do this, but probably the simplest is to call a stored procedure (or a function) with the right parameters, which will return the document. The document templates should be stored somewhere in the database. The document generation function takes the appropriate template, searches for predefined tokens and replaces them with the correct data, depending on the input parameters. That’s why the template needs to be in some simple text format, for example RTF (rich text format), which allows neat formatting, can be prepared in most WYSIWYG text editors, can contain pictures and other advanced formatting, and yet can be viewed as a plain text file, so tokens can be found and replaced easily.

Tokens should be predefined, agreed upon, and unique in a way that they could never be part of normal text. Some examples of tokens: REPLACEME_CURRENT_DATE or ##CUSTOMER_NAME##. I suggest defining a single notation and sticking to it. Just make sure the token format does not interfere with the special characters your chosen document format uses. After this, templates should be prepared in any rich text editor, like LibreOffice Writer or MS Word, and stored in a database table. Next, the document generating function has to be written. Depending on the size of the template, you might need to use large character data types like CLOB or LONGLVARCHAR (a new, still undocumented type), which make things more complicated. The main reason for this is the REPLACE function, which doesn’t support bigger character types. Since we want to replace all tokens of the same kind in the document at once, we may store the file on the server, iterate through all the token types, replace them with proper values (using sed, for example), pull the file back from the file system and return it to the user. Here is the outline of the function which might do that:

CREATE FUNCTION generateDocument(
    document_id INT /*, additional parameters... */)
  RETURNING CLOB;
  -- define the variables
  -- get the appropriate template from the table
  -- and store it on the file system using the LOTOFILE function
  -- fetch the values which will replace tokens in the template
  FOREACH -- token in the list of tokens
      -- replace each token in the file using SYSTEM 'sed ...'
  END FOREACH;
  -- pull the file from the file system using
  -- FILETOCLOB and return it
END FUNCTION;
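Filling in the outline, a minimal version of such a function might look like the sketch below. The doc_template table, its columns, the fixed file path and the single hard-coded token are all assumptions made for the example (a real function would loop over all token types and generate unique file names), and GNU sed is assumed to be available on the server:

```sql
-- a minimal sketch; table, column and token names are hypothetical
CREATE FUNCTION generateDocument(doc_id INT, customer VARCHAR(100))
  RETURNING CLOB;

  DEFINE fname LVARCHAR(300);
  DEFINE cmd   LVARCHAR(600);

  -- copy the template out of the (hypothetical) doc_template table
  -- to a file on the database server; LOTOFILE returns the file name
  SELECT LOTOFILE(template, '/tmp/doc.rtf', 'server')
    INTO fname
    FROM doc_template
   WHERE id = doc_id;

  -- replace one kind of token in the file
  LET cmd = 'sed -i "s/##CUSTOMER_NAME##/' || customer || '/g" ' || fname;
  SYSTEM cmd;

  -- read the finished document back and return it
  RETURN FILETOCLOB(fname, 'server');

END FUNCTION;
```

Error handling and cleanup of the temporary file are left out of the sketch.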

This solution requires a smart blob space and file system privileges for the user executing it, but there are no other special demands. If you’re a fan of DataBlades, there is also the possibility of using string-manipulating blade functions, which would make the replacements without meddling with the file system at all.


Simple and meaningful row level auditing

Have you ever been asked about the history of a certain piece of data? Like, who entered it and when? If you have, and you couldn’t figure out the answer, then you probably need some kind of auditing in place. There is a bunch of commercial (and free) products out there, most of them working with various data sources, not just Informix; some are software-only solutions, others are hardware-software combos. One of IBM’s products is aimed at this purpose and quite noticeable lately. If you know about it, then you’re probably already using it, or will use it at some point.

But what if you don’t need or can’t afford an expensive high-tech solution to keep an eye on just a few tables? Or even an entire database? Well, luckily for us, Informix itself also offers an audit feature, free of charge. On second thought, maybe not.

Here’s the thing. Informix comes with an auditing facility and it’s actually quite good. It covers various events it can keep track of (as many as 161 in version 12.10), some of which should really be enabled on every production system, if nothing else, for logging purposes. It also implements role separation, a very important feature if you need to be confident that no one has tampered with your audit trail. It also enables you to create masks, or profiles, so you can audit different things for different types of users.

Unfortunately, most of the time it can’t help you answer the question about the history of the data. When it comes to row level auditing, it’s mostly useless. There are four events related to row level auditing that could be turned on: INRW – insert row, UPRW – update row, DLRW – delete row and RDRW – read row. All things considered, row level auditing on a live system should be used with utter caution, as these events can produce enormous amounts of audit trails in a short time, especially the last one, as it writes one row to the audit trail file for each row read by the user. Just think of what would happen if a user carelessly executed a select statement without criteria.

At first, there was no way to select which tables would be audited this way, so you either had INRW, UPRW and/or DLRW turned on for all tables or for none. As this wasn’t very useful, it was enhanced soon enough. Now there’s a possibility to specify which table gets audited this way, if some of these four events are turned on and the ADTROWS parameter is set to 1. This is done via the WITH AUDIT part of the CREATE TABLE statement:

CREATE TABLE example (a int, b int ...) WITH AUDIT;

If the table is already created, row level auditing for it could be added:

ALTER TABLE example ADD AUDIT;

or removed:

ALTER TABLE example DROP AUDIT;

at any time.

And now for the unfortunate part. The audit trail is written to the audit files. Every audit trail entry has the same structure, as explained here (LINK). While this format is useful or sufficient for some events, that is not the case with the row level events. For each of the row level events, only basic info is logged – tabid, partno and rowid. In the case of an update event, there are two rowids, an old and a new one. So basically, there is no way to figure out what the value of a field was before the update, or even which field changed. After a row is deleted, there is no way to find out what data it contained. The only way to know is to dig through the logical logs, as I previously explained, but that’s no way data history should be explored.

Bottom line, if you need to know your data history (inserts, updates and deletes), you should either acquire a tool which will help you with that, or you can try and set something up on your own. So here comes the happy ending. There are several ways to do this; I’m showing the one with triggers and shadow tables. The idea is to have a shadow table for each table you’re auditing, with the same set of fields plus some extra fields, like the username, the operation performed on that row and a timestamp. Additionally, a set of triggers on the original table is needed, which will ensure the shadow table is filled. Informix has supported multiple triggers on a single event since version 10.00 or so, so this is not a problem. If you’re thinking about implementation, there’s more good news – I’ve written and shared the code which will do that for you (a view and a couple of procedures, find it here). All you need to do is call the main procedure, providing the table name, the dbspace name and the extent size of the shadow table, like this:

EXECUTE PROCEDURE createAuditForTable('exams', 'audit_dbs', 128);

It will create a shadow table in the designated dbspace and name it like the original table, but with _ at the beginning. It will also create three triggers, each named with _ at the beginning, followed by the table name and the operation name at the end, so they can be identified more easily (in this example, the shadow table will be _exams and the triggers will be named _examsinsert, _examsupdate and _examsdelete). These triggers will fire on each insert, update or delete operation on the original table and write the actual row data, the operation performed, the user who performed it (USER variable) and the time of the operation (CURRENT variable) into the shadow table. The actual row is considered to be the new row (in the for each row trigger) in the case of an insert or update operation, and the old row in the case of a delete operation.
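To illustrate, for a hypothetical exams table with two columns, the generated objects might look roughly like this. The audit column names here are made up for the example – the actual names are defined in the shared code:

```sql
-- hypothetical source table
CREATE TABLE exams (exam_id INT, grade INT);

-- shadow table: same columns plus audit info (hypothetical column names)
CREATE TABLE _exams (
  exam_id    INT,
  grade      INT,
  audit_user CHAR(32),                 -- who performed the operation
  audit_op   CHAR(1),                  -- 'I', 'U' or 'D'
  audit_ts   DATETIME YEAR TO SECOND   -- when it happened
) IN audit_dbs EXTENT SIZE 128;

-- one of the three triggers; the update and delete ones are analogous,
-- with the delete trigger referencing the OLD row instead
CREATE TRIGGER _examsinsert INSERT ON exams
  REFERENCING NEW AS n
  FOR EACH ROW (
    INSERT INTO _exams
    VALUES (n.exam_id, n.grade, USER, 'I', CURRENT YEAR TO SECOND)
  );
```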

So from this moment on, you just have to query the shadow table in order to get the history of the original table. If a delete occurs, the whole deleted row is stored in the shadow table and can be seen there. If an update occurred, you can find the previous row with the same primary key and compare the differences. Once you decide that no more auditing is necessary on a table, simply drop the shadow table and the triggers.
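For example, assuming the shadow table carries the operation and a timestamp column (the column names here are hypothetical, for illustration only), the history of a single row could be pulled with something like:

```sql
-- full history of one exam record, oldest change first
SELECT audit_ts, audit_user, audit_op, grade
  FROM _exams
 WHERE exam_id = 42
 ORDER BY audit_ts;
```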

Obviously, there is storage space required for this, but the same goes for every other type of auditing, except this one is managed inside the database. It’s easy to govern and explore; only an SQL interface is needed.

There are security issues at play here, since no real role separation can be implemented. Triggers could be disabled or dropped, and shadow tables with the audit trail could be tampered with. Some problems could be mitigated by eliminating users with elevated permissions, like resource or DBA, and others by storing or moving the shadow tables to another server. But still, the database system administrator (DBSA, i.e. the informix user) can override all of this. So if you need to be in compliance with some kind of regulation, this type of auditing is not for you.

Otherwise, if you need to know your data history from an end-user point of view, this simple auditing can help you. All the code needed can be found here; it’s all open sourced under the GNU GPL v3 license. Feel free to use it and extend it. After noting that I cannot be held responsible if anything goes wrong :), just put it in your database, and execute the procedure with the correct arguments – the tables you want audited.


Working with JSON Data from SQL

The MongoDB support was introduced in 12.10xC2, bringing many cool things to Informix, one of them being the JSON and BSON data types. Putting the whole NoSQL and MongoDB story aside, these new data types enable us to work with semi-structured data directly from SQL, thanks to several new built-in functions. Of course, you could do the same with XML documents, but it took a while before all the necessary functions became available in Informix, and working with XML is still more complex than working with JSON, because of the differences between those two data formats.

In order to put data in a JSON column you can use the genBSON function, or simply cast text to a JSON type. Here’s an example – a tourist office database table storing various places people could visit. One table with JSON data could be used to store data about many different places – cities, regions, islands, landmarks etc. So the table could be defined as:

CREATE TABLE places (
  place_id SERIAL,
  numberOfVisitorsPerYear INT,
  place BSON
);

The place column could also be of JSON type, but if you want to perform more meaningful queries on the table, stick to BSON. There are some BSON functions we can use, and BSON can be cast to JSON data.

Rows could be inserted via plain SQL:

INSERT INTO places VALUES (0, 500000, '{city: "Zagreb"}'::JSON);

Note that the last value needs to be cast to JSON in order to be able to run queries on it with the bson_value functions. Here are some other data rows with various attributes describing places:

INSERT INTO places VALUES (0, 600000, '{city: "Pula", country: "Croatia", population: 57000}'::JSON);
INSERT INTO places VALUES (0, 20000, '{mountain: "Velebit", country: "Croatia", height: 1757}'::JSON);
INSERT INTO places VALUES (0, 1000000, '{national_park: "Plitvice", country: "Croatia"}'::JSON);

The simplest way to find out what is stored in the table is to execute a query like this one:

SELECT place_id, numberOfVisitorsPerYear, place::JSON
FROM places

which will return these results:

place_id numberOfVisitorsPerYear (expression)
1 500000 {"city":"Zagreb"}
2 600000 {"city":"Pula","country":"Croatia","population":57000}
3 20000 {"mountain":"Velebit","country":"Croatia","height":1757}
4 1000000 {"national_park":"Plitvice","country":"Croatia"}

However, the idea is to be able to search within JSON data. For that purpose, there are some new functions we can use:

  • bson_value_int(column, bson_attribute) – returns the integer value of the bson_attribute stored in the specified column of the row
  • bson_value_bigint(column, bson_attribute) – returns the bigint value of the bson_attribute stored in the specified column of the row
  • bson_value_double(column, bson_attribute) – returns the double value of the bson_attribute stored in the specified column of the row
  • bson_value_lvarchar(column, bson_attribute) – returns the lvarchar value of the bson_attribute stored in the specified column of the row

Here are some query examples:

-- find all destinations in Croatia 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'country') = 'Croatia'; 

-- find all destinations without a specified country 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'country') IS NULL; 

-- find all mountains higher than 1000 meters 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'mountain') IS NOT NULL 
AND bson_value_int (place, 'height') > 1000; 

-- find all national parks in Croatia 
SELECT *, place::JSON FROM places 
WHERE bson_value_lvarchar (place, 'country') = 'Croatia' 
AND bson_value_lvarchar (place, 'national_park') IS NOT NULL; 

There is another new function, genBSON, which generates BSON/JSON data from a relational table’s data, depending on a query. It can be used to return JSON directly from a query or to insert JSON data into a column. The Informix Knowledge Center page for this function is informative and includes some examples, so I’m not going to repeat it all here. As a continuation of our example, if a tourist office already has a relational table named cities in its database, this data could be imported into the places table with a single SQL statement:

-- cities relational table 
CREATE TABLE cities (
  city_id SERIAL, 
  city CHAR(30), 
  population INT, 
  country CHAR(30), 
  numberOfVisitorsPerYear INT
);

-- copy the cities data into the places table: 
INSERT INTO places
SELECT 0, numberOfVisitorsPerYear, genBSON(ROW(city, country, population), 0, 1)::JSON
FROM cities; 

Or, if we don’t want to have a copy of cities in places, a view on structured and semi-structured data could be made (this one returning only JSON data):

CREATE VIEW places_and_cities (place) AS 
  SELECT place::JSON FROM places 
  UNION ALL
  SELECT genBSON(ROW(city, country, population), 0, 1)::JSON FROM cities; 

In conclusion, with JSON capabilities at hand, it’s now pretty simple to mix structured and semi-structured data in a single database. But before we do it, we should make sure there is a proper need to design our data model that way, bearing in mind there are numerous advantages of having relational data in a relational database.


Reading the Logical Logs

Every now and then (hopefully not often), there is a need to browse through some logical logs and see what’s been going on with your system. In this post, I’ll cover the purpose and basics of logical logs and provide some info on how to read what’s inside them.

In the simplest terms, logical logs provide a way to ensure minimal loss of data in case the system comes to a halt. Every operation and every transaction that is executed is written to one or more logical logs. So if the system crashes for some reason, when it is re-inited you’ll see in the online log that logical recovery is performed. That means the system goes through the logical logs since the last checkpoint and re-executes the operations that were executed before the system halted. Transactions that were started but not completed are rolled back.

Knowing what is written to the logical logs, and when, provides the opportunity to explore the system’s behavior in some cases. For example, it is a way to identify the busiest tables and indexes during some period of time, or to explore in detail what went on during a single transaction in order to detect possible errors in an end-user application modifying the data.

The size and the number of logical logs in your system are determined by the onconfig parameters LOGSIZE and LOGFILES, respectively. When all the logs are used (flag "U" in the following list), the system reuses the first one, and so on, in an endless cycle. Logs that are about to be reused should always be backed up (flag "B" in the following list) using the onbar or ontape utilities. By setting the onconfig parameter LTAPEDEV to /dev/null you can discard the logical logs, but this should never be done in a production environment.

You can see the state of your logs by executing onstat -l. This is an example of the output:

address number flags uniqid begin size used %used
3b2a507e0 992 U-B---- 1066232 2:991053 1000 1000 100.00
3b2a50848 993 U-B---- 1066233 2:992053 1000 1000 100.00
3b2a508b0 994 U---C-L 1066234 2:993053 1000 190 19.00
3b2a50918 995 U-B---- 1064843 2:994053 1000 1000 100.00
3b2a50980 996 U-B---- 1064844 2:995053 1000 1000 100.00

Each logical log is identified by its uniqid (in the list above, logs have uniqids 1066232 – 1066234, and 1064843 – 1064844). The flag “C” indicates that this is the current log being written. The next log has a smaller uniqid than the current – it is the next one to be used (and overwritten in the current logical space). The log with the “L” flag is the log containing the last checkpoint.

The onlog utility is used to display the contents (or part of them) of the logical logs. The main thing you have to know is which logical log(s) you want to see (i.e. their uniqids). Apart from that, there are a number of filters that could be used to reduce the output and make it more readable (as you’ll see, there is a lot of information being written to the logs). The complete syntax of the onlog utility can be found here.

Filtering could be done by username, partition number (often referred to as partnum in Informix) and/or transaction id. The username obviously belongs to the user that executed the statements or commands resulting in data manipulation. The partition number identifies the table or index the operation is executed upon. The easiest way to find out the partnums of tables or indexes is to query the sysmaster:systabnames table. In some outputs it is shown as a decimal number, in others as hexadecimal, so you may need to use the hex() function in your SQL. And finally, the transaction id (denoted here as xid) is a number identifying the transaction – all log entries that belong to the same transaction have the same xid (starting with a BEGIN entry and ending with COMMIT or ROLLBACK). But be careful with this one – it’s not unique in any way. Often you’ll see the same xid being reused for the next transaction of the same user for some time, after which a new xid is assigned to this user, and the previous one may be assigned to some other user.
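For instance, the partition number of a table can be fetched from sysmaster like this (the table name test_table is just a placeholder):

```sql
-- find the partition number of a table, in both notations
SELECT partnum, hex(partnum)
  FROM sysmaster:systabnames
 WHERE tabname = 'test_table';
```

The resulting number can then be used with onlog’s partition filter to show only the entries touching that table.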

If you’re not backing up logical logs, the onlog utility will only be able to display the contents of the logs currently on disk, but if you have the logs backed up, you can fetch and display those as well. If the ontape utility is used for logical log backup, backed up logs can be fetched with onlog too. If the backup is made with onbar, then you need to use onbar -P (with the rest of the options the same as for onlog) to fetch logs sitting in your backup.

Here’s an example of a logical log display (obtained by onlog -n):

addr len type    xid id link
18   172 HINSERT 155 1  3e7714    70013b 414d03 105
c4   320 HINSERT 155 0  18        60007c c64bc02 254
204  56  COMMIT  155 0  c4        11/30/2013 16:22:56

There are 3 entries shown above. There is the addr column, which is the address of the entry (hexadecimal) in the current log. Len is the length of the entry (decimal) in bytes. Next column is type – one of predefined identifiers, specifying the action executed. Xid column is transaction id. Id column mostly has the value 0, but can take some other values if the entry is linked to the previous log. Link is the address of previous entry that belongs to the same transaction. The rest of the data in each row is type-specific. The list of all entry types and corresponding additional data can be found here.

In a regular production environment, it seems reasonable to expect most entries to be of these types:

  • BEGIN, COMMIT, ROLLBACK – transaction start/end
  • HINSERT, HDELETE, HUPDATE, HUPBEF, HUPAFT – insert, delete or update row
  • ADDITEM, DELITEM – add or remove item from index

Reading through the documentation for the additional data of these types, we can see the partition number is always involved, so it’s fairly easy to infer which table/index the action was executed on. But that’s about it. There is no information about which data was written, deleted etc. In order to find that out, we need to look at the long output of onlog, obtained with onlog -l. This option is not documented; the documentation states: "The listing is not intended for casual use." Nevertheless, there are some things you can find out of it. Here is an example of the onlog -l output for a single entry.

Say we have the table test_table with columns col1 of type INTEGER, and col2 of type CHAR(10), and execute this command:

insert into test_table values(123000123, 'dva');

This will be written to the log as 3 entries – BEGIN, HINSERT and COMMIT – provided there are no primary or foreign keys, nor other indexes on that table (remember that when a single SQL statement is executed, it is implicitly executed as a single-operation transaction). Let’s take a look at the HINSERT entry:

addr     len  type     xid      id link
21f050   80   HINSERT  158      0  21f018   200324   103    14 
         00000050 00000028 00000112 00000000 ...P...( ........ 
         00000000 00000000 0000009e 0021f018 ........ .....!.. 
         6b03f454 00200324 00200324 00000103 k..T. .$ . .$.... 
         000e0004 00000000 00000000 8002084b ........ .......K 
         0754d53b 64766120 20202020 20207369 .T.;dva        si

Each byte is represented by two hexadecimal digits. The entire log entry is shown both in hexadecimal and in ASCII view, side by side, which makes it easier to identify text strings. The first part of each entry is the header bytes, and the rest is operation-type dependent.

The header data can be identified directly in the dump (note that some numbers in the header line are decimal, while the byte view is hexadecimal):

addr     len  type     xid      id link
21f050   80   HINSERT  158      0  21f018   200324   103    14 
         00000050 00000028 00000112 00000000 ...P...( ........ 
         00000000 00000000 0000009e 0021f018 ........ .....!.. 
         6b03f454 00200324 00200324 00000103 k..T. .$ . .$.... 
         000e0004 00000000 00000000 8002084b ........ .......K 
         0754d53b 64766120 20202020 20207369 .T.;dva        si

  • entry length: the first word, 00000050 = 80
  • entry xid: 0000009e = 158
  • entry link: 0021f018
  • the entry type, the entry id and the Informix internal clock are encoded in the remaining header words

Additional data for HINSERT are three numbers: the partition number (hexadecimal), the rowid (decimal) and the row size in bytes (decimal). In this example those are 200324, 103 and 14, visible in the byte view as 00200324 (appearing twice in the third row), 00000103 and 000e.

The 14 bytes of row data at the end of the entry are one integer (4 bytes: 0754d53b = 123000123) and one char field defined as char(10): 64766120 20202020 2020 is 'dva' followed by spaces. Remember that, even though we’re inserting a three-letter word, the rest of the char field is filled with spaces (ASCII code 20).

When observing update operations, note that both the before- and after-image are written to the log, but only the changed columns, not all of them. If the amount of data being written is small enough, then a single HUPDATE entry is written to the log; otherwise two entries are written: HUPBEF and HUPAFT.

In conclusion, reading the logical logs can be very useful, and is sometimes the only way to get information about how your system works. But there are a lot of things one should be aware of before trying to read the log structure, the most important being:

  • the structure of the tables and indices
  • how the data are stored (how many bytes are needed for each data type etc.)
  • what’s your system endianness.

Given these notes, and the fact that the logical log structure is not documented, it could be (and very likely is) different not only from one version to another, but also from one platform to another. The examples above were taken on a standalone Solaris SPARC system running Informix 11.70.FC5.



New Utility – Ifmx Table Copy

Recently a friend of mine who meets a lot of Informix customers told me he frequently gets questions about copying a table with all the data in it, and suggested that could be a good idea for a small utility. With the SELECT … INTO permanent_table syntax available in Informix 12.10, there wouldn’t be much sense in creating such a utility, but there’s another request – it would be very nice if the constraints (primary and foreign keys, etc.) were copied as well.

Being a small piece of code, it would be most reasonable if it could be executed in the database itself, with no need for additional software to run it – every DBA should be able to run some SQL statements. So I wrote a couple of stored procedures (well, functions to be exact) and these are actually all you need. This little project is also open-sourced, released under the GNU GPL v3 license and hosted on GitHub as ifmx-table-copy. There are two functions you need to create in your database (the links lead to the SQL files): getIdxColumnsForTableCopy and tableCopy.

When tableCopy is executed, it will create a copy of the table with a new name, populate it with the data from the source table, and create the primary key, foreign keys, check, not null and unique constraints, as well as the indexes that exist on the source table, on the corresponding columns of the target table. The names of the constraints are created from or concatenated with the target table name (see the details on the project page). The default values that exist on source table columns are also stated in the target table, provided they refer to character types. It is possible to specify a different dbspace in which the table will be created. If no dbspace is given (i.e. the corresponding argument is null), the table is created in the default database dbspace. There is also a parameter that prevents the execution of all the statements that would be executed during the target table creation – instead, the statements are returned from the function and can be examined before the "real" execution.

Execute the tableCopy function with the following arguments to see what will actually be executed on the server:

execute function tablecopy('source_table_name', 'target_table_name', null, 'f');

Execute the following to perform actual structure creation and data copy:

execute function tablecopy('source_table_name', 'target_table_name', 'db_space', 't');

This utility uses the SQL syntax available in the Informix servers 11.70 and newer. There are some minor modifications that could be done to achieve backward compatibility. For more info on that, please see the project’s readme file.


Changing Existing Views and Stored Procedures

If you’re a DBA on a changing system, as most of us are, there are times when you need to change some of the objects (views, stored procedures, etc.) in your databases. Since it is not possible to simply re-create an object, you need to drop it and create it again. This wouldn’t be so difficult if you had a relatively simple schema. But if you have multiple objects depending on one another, there are some things you need to be aware of.

The way Informix currently works, when you drop an object, all objects that depend on it (views, synonyms, permissions) are consequently dropped. Which is perfectly OK. But not if you dropped it just to create it again the very next second. In that case, you need to know that all the dependent objects, and all objects depending on those, and so on, recursively, are dropped, so you need to create those as well.

Here is a little example. Let’s say there is a table t1 with many rows in it. Users can work only with a reduced set of data, so reduction views are used to limit the data each user can see. There may be some common criteria to reduce the data, so there is a main reduction view (v1) and several other views that reduce the data in different ways for different kinds of users (v1a, v1b, …). Private synonyms are created for each user and the view he is using, and permissions are granted as well. The schema would look something like this:

create table t1(a int, b int, ...);
create view v1 as select * from t1 where...;
create view v1a as select * from v1 where exists (..some sub-query..);
create view v1b as select * from v1 where exists (..other sub-query..);
create private synonym angie.t1 for v1a;
create private synonym bob.t1 for v1b;
create private synonym john.t1 for v1a;
...more private synonyms for other users...
grant select on v1a to angie;
grant select on v1b to bob;
grant select on v1a to john;
...more permissions for other users...

And now, if there is a change in the v1 schema and you need to do a drop-create on it, all of the objects below v1 will be missing from your database after the drop. You’ll get no warning about this, so it is just one of those things you have to know.
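To make the effect concrete: a single statement quietly takes the whole dependent subtree of the schema above with it.

drop view v1;
-- v1a, v1b, the private synonyms of angie, bob and john,
-- and the select grants are now gone, without any warning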

Until this problem is solved, you need to know what will get dropped so you can recreate it. Here are some helpful queries. Of course, all of these will only work before you drop the initial object.

To find out what you need to recreate when you drop a view named v1:

select tabname from systables where tabid in 
( select dtabid from sysdepend
   start with btabid = (select tabid from systables where tabname = 'v1')
 connect by prior dtabid = btabid);

If you don’t have the code for your views, you can extract it from the database. Example for v1:

select viewtext from sysviews
 where tabid = (select tabid from systables where tabname = 'v1') 
 order by seqno;

Which private and public synonyms exist for a table or view (only first-level synonyms; if you have multiple levels of synonyms, write a hierarchical query on syssyntable):

select 'create private synonym ' || trim(t2.owner) || '.' || trim(t2.tabname)
      || ' for ' || trim(t1.tabname) || ';'
 from systables t1, systables t2, syssyntable syn
 where t1.tabid = syn.btabid and t2.tabid = syn.tabid
 and t2.tabtype = 'P'
 and t1.tabname = 'v1';

select 'create public synonym ' || trim(t2.tabname) || ' for ' || trim(t1.tabname) || ';'
 from systables t1, systables t2, syssyntable syn
 where t1.tabid = syn.btabid and t2.tabid = syn.tabid
 and t2.tabtype = 'S'
 and t1.tabname = 'v1';
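For multi-level synonyms (a synonym on a synonym), the same hierarchical approach used above for sysdepend should work on syssyntable as well. A sketch along those lines (untested, so verify it on your own schema first):

select trim(t.owner) || '.' || trim(t.tabname)
 from systables t
 where t.tabid in
 ( select tabid from syssyntable
    start with btabid = (select tabid from systables where tabname = 'v1')
  connect by prior tabid = btabid);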

Which permissions exist for a table or view:

select * from systabauth where tabid = 
 (select tabid from systables where tabname = 'v1');
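If you’d rather have ready-made grant statements than raw systabauth rows, something like the following sketch can generate them. It checks for the select flag in the tabauth column (an uppercase letter there means the privilege was granted with grant option); repeat the pattern with u, i and d for update, insert and delete:

select 'grant select on ' || trim(t.tabname) || ' to ' || trim(a.grantee) || ';'
 from systables t, systabauth a
 where t.tabid = a.tabid
  and t.tabname = 'v1'
  and a.tabauth matches '*[sS]*';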

Which permissions exist for a procedure or function:

select 'grant execute on ' || procname || ' to ' || trim(grantee) || ';'
 from sysprocedures, sysprocauth 
 where sysprocedures.procid = sysprocauth.procid
 and procname = 'my_proc';

Instead of writing all these queries and digging through code that really shouldn’t be a concern while doing a schema update, we really need a way to recreate an existing object without dropping its dependent objects, something like an ALTER VIEW SQL statement (and, similarly for stored procedures, ALTER PROCEDURE).

There is an RFE posted for this here, so feel free to vote.


Authenticating Internal Users with PAM

User authentication can be a relatively complex issue when it comes to home-grown information systems. You usually need some sort of directory to hold the user data and handle the actual authentication, and you need Informix (or the OS) to somehow be aware of those users. This became much easier with the internal users feature, which was introduced in 11.70.
Assuming that you have a PAM module on your system used to authenticate users, here’s a quick to-do list on how to set up user management using the internal users feature on Unix. The same procedure on Linux differs in the OS PAM configuration and user handling. You’ll need root access in order to create the user and edit the allowed.surrogates and pam.conf files.

First, we’re going to configure internal users. Enable the feature in your onconfig by setting:

USERMAPPING BASIC

Do not set this parameter to ADMIN unless you want to give some of the authenticated users administrative privileges.

Now we need a surrogate user. That’s an OS user whose system properties all authenticated users will take on:

useradd -d /home/ifxsurr -s /bin/sh ifxsurr

Specify that this user can be used as a surrogate for Informix:

mkdir /etc/informix
cd /etc/informix
vi allowed.surrogates

Put this single line in the file:

USERS:ifxsurr

And set the read permissions:

chmod 644 allowed.surrogates

To use PAM authentication, you need a PAM module on your machine. Let’s say the library is /opt/mycompany/lib/

Create a DBSERVERALIAS to be used for PAM authentication. Add the new alias in the onconfig file:

DBSERVERALIASES myserver_pam

Now add a new line in your $INFORMIXSQLHOSTS file (we’re going to use PAM only for password authentication):

myserver_pam   protocol   ip   port   s=4,pam_serv=(ids_pam_service),pamauth=(password)

This service should be added to the PAM configuration file (/etc/pam.conf). Make sure to add both the auth and account facilities (thanks to Dave Desautels for pointing this out):

ids_pam_service auth sufficient /opt/mycompany/lib/
ids_pam_service account sufficient /opt/mycompany/lib/
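As mentioned above, the PAM configuration differs on Linux: instead of editing /etc/pam.conf, you would typically create a per-service file named /etc/pam.d/ids_pam_service, where the service name column is omitted. A sketch of such a file, using the standard pam_unix module purely as an illustration (substitute your own module’s path):

# /etc/pam.d/ids_pam_service
auth    sufficient    pam_unix.so
account sufficient    pam_unix.so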

That’s it for the configuration. The only thing left to do now is to add the users and set the appropriate permissions on the system. For each user (username here stands in for the actual login name):

CREATE USER "username" with properties user "ifxsurr";

Now, imagine that you have a PAM module that can authenticate against some external service (e.g. a Google or Facebook account), and that your business policy would allow it; this seems like a nice way to avoid the user authentication part of the system altogether…
