Archive for category Hints

Serving Customized Documents from the Database

If your database serves an information system, there is a pretty good chance that system generates written documents for end users – predefined reports, letters, or some kind of official documents to be printed. For relatively simple documents, i.e. not complicated multi-level reports, the application has to generate them from a template or some other meta-description, filling it with case-specific data. If the same documents need to be generated from different apps (e.g. a web app and a desktop app), a solution that generates and serves them right from the database might come in handy.

There are several ways to do this, but probably the simplest is to call a stored procedure (or a function) with the right parameters, which returns the document. The document templates should be stored somewhere in the database. The document generation function takes the appropriate template, searches for predefined tokens and replaces them with the correct data, depending on the input parameters. That’s why the template needs to be in a simple text format, for example RTF (rich text format), which allows neat formatting, can be prepared in most WYSIWYG text editors, can contain pictures and other advanced formatting, and yet can be viewed as a plain text file, so tokens can be found and replaced easily.

Tokens should be predefined, agreed upon, and unique in a way that they could never be part of normal text. Some examples of tokens: REPLACEME_CURRENT_DATE, ##CUSTOMER_NAME## or %customer.id.number%. I suggest defining a single notation and sticking to it. Just make sure the token format does not interfere with the special characters your chosen document format uses. After this, the templates should be prepared in a rich text editor, like LibreOffice Writer or MS Word, and stored in a database table. Next, the document generating function has to be written. Depending on the size of the template, you might need to use large character data types like CLOB or LONGLVARCHAR (a new, still undocumented type), which make things more complicated. The main reason is the REPLACE function, which doesn’t support the bigger character types. Since we want to replace all tokens of the same kind in the document at once, we may store the file on the server, iterate through all the token types, replace them with proper values (using sed, for example), pull the file back from the file system and return it to the user. Here is the outline of a function which might do that:

-- outline only: the template table (doc_template) and its columns
-- are illustrative, and sed is just one way to do the replacement
CREATE FUNCTION generateDocument(
   document_id INT)  -- ...plus additional parameters as needed
   RETURNING CLOB;

  DEFINE tmp_path LVARCHAR(300);
  DEFINE result CLOB;

  -- get the appropriate template and put it on the file system
  SELECT LOTOFILE(template_body, '/tmp/doc!', 'server')
    INTO tmp_path
    FROM doc_template
   WHERE template_id = document_id;

  -- fetch the values which will replace the tokens, then replace
  -- each token type in the file, e.g.:
  -- SYSTEM 'sed -i "s/##CUSTOMER_NAME##/' || cust_name || '/g" ' || tmp_path;

  -- pull the file back from the file system and return it
  SELECT FILETOCLOB(tmp_path, 'server')
    INTO result
    FROM systables WHERE tabid = 1;  -- any single-row source

  RETURN result;
END FUNCTION;
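Any app connected to the database could then fetch a finished document with a single call (the id here is made up):

EXECUTE FUNCTION generateDocument(1234);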

This solution requires a smart blob space and file system privileges for the user executing it, but there are no other special demands. If you’re a fan of DataBlades, there is also the possibility of using string manipulation blade functions, which would make the replacements in place and leave the file system out of it.



Reading the Logical Logs

Every now and then (hopefully not often), there is a need to browse through some logical logs and see what’s been going on in your system. In this post, I’ll cover the purpose and basics of logical logs and provide some info on how to read what’s inside them.

In the simplest terms, logical logs provide a way to ensure minimal loss of data in case the system comes to a halt. Every operation and every transaction that is executed is written to one or more logical logs. So if the system crashes for some reason, when it is re-inited you’ll see in the online log that a logical recovery is performed. That means the system is going through the logical logs since the last checkpoint and re-executing the operations that were executed until the system halted. Transactions that were started but not completed are rolled back.

Knowing what is written to the logical logs, and when, provides an opportunity to explore the system’s behavior. For example, it is a way to identify the busiest tables and indexes during some period of time, or to explore in detail what’s been going on during a single transaction in order to detect possible errors in an end-user application while it modifies the data.

The size and the number of logical logs in your system are determined by the onconfig parameters LOGSIZE and LOGFILES, respectively. When all the logs are used (flag “U” in the following list), the system reuses the first one, and so on, in an endless cycle. Logs that are about to be reused should always be backed up (flag “B” in the following list) using the onbar or ontape utilities. By setting the onconfig parameter LTAPEDEV to /dev/null you can discard the logical logs, but this should never be done in a production environment.
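Just for illustration, the relevant lines of an onconfig file might look like this (the values are examples, not recommendations):

LOGSIZE 10000          # size of each logical log, in KB
LOGFILES 20            # number of logical logs
LTAPEDEV /dev/tapedev  # log backup device; never /dev/null in production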

You can see the state of your logs by executing onstat -l. This is an example of the output:

address number flags uniqid begin size used %used
...
3b2a507e0 992 U-B---- 1066232 2:991053 1000 1000 100.00
3b2a50848 993 U-B---- 1066233 2:992053 1000 1000 100.00
3b2a508b0 994 U---C-L 1066234 2:993053 1000 190 19.00
3b2a50918 995 U-B---- 1064843 2:994053 1000 1000 100.00
3b2a50980 996 U-B---- 1064844 2:995053 1000 1000 100.00
...

Each logical log is identified by its uniqid (in the list above, the logs have uniqids 1066232 – 1066234 and 1064843 – 1064844). The flag “C” marks the current log, the one being written to. The next log has a smaller uniqid than the current one – it is the next to be used (and overwritten, in the current logical space). The log with the “L” flag is the log containing the last checkpoint.

The onlog utility is used to display the contents (or part of the contents) of the logical logs. The main thing you have to know is which logical log(s) you want to see (i.e. their uniqids). Apart from that, there are a number of filters that can be used to reduce the output and make it more readable (as you’ll see, there is a lot of information written in the logs). The complete syntax of the onlog utility can be found here.

Filtering can be done by username, partition number (often referred to as partnum in Informix) and/or transaction id. The username is that of the user who executed the statements or commands resulting in data manipulation. The partition number identifies the table or index the operation is executed upon. The easiest way to find out the partnums of tables or indexes is to query the sysmaster:systabnames table. In some outputs it is shown as a decimal, in others as a hexadecimal number, so you may need to use the hex() function in your SQL, as shown below. And finally, the transaction id (denoted here as xid) is a number identifying the transaction – all log entries that belong to the same transaction have the same xid (starting with the BEGIN entry and ending with COMMIT or ROLLBACK). But be careful about this one – it’s not unique in any way. Often you’ll see the same xid being used again for the next transaction of the same user for some time, and after that a new xid is assigned to this user, while the previous one may be assigned to some other user.
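For example, a query along these lines returns both forms of the partnum (the database and table names are placeholders):

SELECT tabname, partnum, hex(partnum)
  FROM sysmaster:systabnames
 WHERE dbsname = 'mydb'
   AND tabname = 'test_table';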

If you’re not backing up logical logs, the onlog utility will only be able to display the contents of the logs currently on disk, but if you have the logs backed up, you can also fetch and display those. If the ontape utility is used for the logical log backup, backed-up logs can be fetched with onlog as well. If the backup is made with onbar, you need to use onbar -P (with the rest of the options the same as you’d use with onlog) to fetch the logs sitting in your backup.
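A couple of hypothetical invocations – the first reads a log from disk, the second fetches it from an onbar backup (the uniqid and partnum come from the examples in this post, the username is made up):

onlog -n 1066234 -u johnd -t 0x200324
onbar -P -n 1066234 -u johnd -t 0x200324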

Here’s an example of a logical log display (obtained by onlog -n):

addr len type    xid id link
18   172 HINSERT 155 1  3e7714    70013b 414d03 105
c4   320 HINSERT 155 0  18        60007c c64bc02 254
204  56  COMMIT  155 0  c4        11/30/2013 16:22:56

There are 3 entries shown above. The addr column is the (hexadecimal) address of the entry in the current log. Len is the (decimal) length of the entry in bytes. The next column is the type – one of the predefined identifiers specifying the action executed. The xid column is the transaction id. The id column mostly has the value 0, but can take other values if the entry is linked to the previous log. Link is the address of the previous entry that belongs to the same transaction. The rest of the data in each row is type-specific. The list of all entry types and the corresponding additional data can be found here.

In a regular production environment, it seems reasonable to expect most entries to be of these types:

  • BEGIN, COMMIT, ROLLBACK – transaction start/end
  • HINSERT, HDELETE, HUPDATE, HUPBEF, HUPAFT – insert, delete or update row
  • ADDITEM, DELITEM – add or remove item from index

Reading through the documentation on the additional data for these types, we can see that the partition number is always involved, so it’s fairly easy to infer which table/index the action is executed on. But that’s about it. There is no information about which data was written, deleted, etc. In order to find that out, we need to look at the long output of onlog, obtained with onlog -l. This option is not documented; the documentation only states: “The listing is not intended for casual use.” Nevertheless, there are some things you can find out of it. Here is an example of the onlog -l output, for a single entry.

Say we have the table test_table with columns col1 of type INTEGER and col2 of type CHAR(10), and execute this command:

insert into test_table values(123000123, 'dva');

This will be written to the log as 3 entries – BEGIN, HINSERT and COMMIT – provided there are no primary or foreign keys, nor other indexes on the table (remember that, when a single SQL statement is executed, it is implicitly executed as a single-statement transaction). Let’s take a look at the HINSERT entry:

addr     len  type     xid      id link
21f050   80   HINSERT  158      0  21f018   200324   103    14 
         00000050 00000028 00000112 00000000 ...P...( ........ 
         00000000 00000000 0000009e 0021f018 ........ .....!.. 
         6b03f454 00200324 00200324 00000103 k..T. .$ . .$.... 
         000e0004 00000000 00000000 8002084b ........ .......K 
         0754d53b 64766120 20202020 20207369 .T.;dva        si

Each byte is represented by two hexadecimal digits. The entire log entry is shown in hexadecimal and in ASCII view, side by side, which makes it easier to identify text strings. The first part of each entry is the header bytes; the rest is operation-type dependent.

The header fields from the first line can be matched against the hex dump (note that some numbers in the header line are decimal, while the dump is hexadecimal):

  • entry length: 80 = 0x00000050, the first word of the dump
  • entry type: HINSERT, presumably encoded in the second word (0x00000028 = 40)
  • entry id: 0, in one of the header words that follow
  • xid: 158 = 0x0000009e, the third word of the second row
  • link: 21f018, visible as 0x0021f018 at the end of the second row
  • the Informix internal clock, most likely the word 6b03f454 opening the third row

The additional data for HINSERT are three numbers: the partition number, the rowid, and the row size in bytes – in this example 200324, 103 and 14. All three can be spotted in the dump: the partition number (0x00200324) appears twice in the third row, followed by the rowid (0x00000103), while the row size (0x000e = 14) opens the fourth row. Note that the partition number and the rowid are displayed as hexadecimal, while the row size is decimal.

The 14 bytes of data in this example are one integer (4 bytes) and one CHAR(10) field. Remember that, even though we’re inserting a three-letter word, the rest of the CHAR field is filled with spaces (ASCII 0x20). Both values are clearly visible at the end of the dump: the inserted integer 123000123 is stored as 0x0754d53b, and 'dva' followed by seven spaces as 64766120 20202020 2020 (the ASCII view on the right shows the string directly).

When observing update operations, note that both the before- and the after-image are written to the log, but only the changed columns are written, not all of them. If the amount of data being written is small enough, a single HUPDATE entry is written to the log; otherwise, two entries are written: HUPBEF and HUPAFT.
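Sticking with the example table, an update that touches only col2 should therefore produce a compact entry containing just that column’s before- and after-image:

update test_table set col2 = 'tri' where col1 = 123000123;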

In conclusion, reading the logical logs can be very useful, and is sometimes the only way to get some information about how your system works. But there are a lot of things one should be aware of before trying to read the log structure, the most important being:

  • the structure of the tables and indices
  • how the data are stored (how many bytes are needed for each data type etc.)
  • your system’s endianness.

Given these notes, and the fact that the logical log structure is not documented, it could be (and very likely is) different not only from one version to another, but also from one platform to another. The examples above were taken on a standalone Solaris SPARC system running Informix 11.70.FC5.



New Utility – Ifmx Table Copy

Recently a friend of mine who meets a lot of Informix customers told me he frequently gets asked about copying a table with all the data in it, and suggested that could be a good idea for a small utility. With the SELECT … INTO permanent_table syntax available in Informix 12.10, there wouldn’t be much sense in creating such a utility, but there’s another request – it would be very nice if the constraints (primary and foreign keys, etc.) were copied as well.
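For reference, the 12.10 syntax copies the structure and the data, but no constraints – something along these lines (table names are placeholders):

SELECT * FROM source_table INTO STANDARD target_table;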

Being a small piece of code, it seemed most reasonable for it to be executable in the database itself, with no need for additional software to run it – every DBA should be able to run some SQL statements. So I wrote a couple of stored procedures (well, functions, to be exact) and these are actually all you need to run it. This little project is also open-sourced, released under the GNU GPL v3 license and hosted on GitHub as ifmx-table-copy. There are two functions you need to create in your database (links lead to SQL files): getIdxColumnsForTableCopy and tableCopy.

When tableCopy is executed, it creates a copy of the table with a new name, populates it with the data from the source table, and creates the primary key, foreign keys, check, not null and unique constraints, as well as the indexes that exist on the source table, on the corresponding columns of the target table. The names of the constraints are created from, or concatenated with, the target table name (see the details on the project page). The default values that exist on the source table columns are also stated in the target table, provided they refer to character types. It is possible to state a different dbspace in which the table will be created. If no dbspace is given (i.e. the corresponding argument is null), the table is created in the default database dbspace. There is also a parameter that prevents the execution of all the statements that would be executed during the target table creation – instead, the statements are returned from the function and can be examined before the “real” execution.

Execute the tableCopy function with the following arguments to see what will actually be executed on the server:

execute function tablecopy('source_table_name', 'target_table_name', null, 'f');

Execute the following to perform actual structure creation and data copy:

execute function tablecopy('source_table_name', 'target_table_name', 'db_space', 't');

This utility uses the SQL syntax available in the Informix servers 11.70 and newer. There are some minor modifications that could be done to achieve backward compatibility. For more info on that, please see the project’s readme file.



Changing Existing Views and Stored Procedures

If you’re a DBA on a changing system, as most of us are, there are times when you need to change some of the objects (views, stored procedures, etc.) in your databases. Since it is not possible to simply re-create an object, you need to drop it and create it again. This wouldn’t be so difficult if you have a relatively simple schema. But if you have multiple objects depending on one another, there are some things you need to be aware of.

The way Informix currently works, when you drop an object, all objects that depend on it (views, synonyms, permissions) are dropped along with it. Which is perfectly OK – but not if you dropped it just to create it again the very next second. In that case, you need to know that all the dependent objects, and all objects depending on those, and so on, are recursively dropped, so you need to create those as well.

Here is a little example. Let’s say there is a table t1 with many rows in it. Each user may work only with a reduced set of the data, so reduction views are used to limit the data each user can see. There may be some common criteria to reduce the data, so there is a main reduction view (v1) and several other views that reduce the data in different ways for different kinds of users (v1a, v1b, …). Private synonyms are created for each user and the view he is using, and permissions are granted as well. The schema would look something like this:

create table t1(a int, b int, ...);
create view v1 as select * from t1 where...;
create view v1a as select * from v1 where exists (..some sub-query..);
create view v1b as select * from v1 where exists (..other sub-query..);
create private synonym angie.t1 for v1a;
create private synonym bob.t1 for v1b;
create private synonym john.t1 for v1a;
...more private synonyms for other users...
grant select on v1a to angie;
grant select on v1b to bob;
grant select on v1a to john;
...more permissions for other users...

And now, if there is a change in the v1 schema and you need to do a drop-create on it, all of the objects below v1 will be missing from your database after v1 is dropped. You’ll get no warning about it, so this is just one of the things you have to know.

Until this problem is solved, you need to know what will get dropped, so you can recreate it. Here are some helpful queries. Of course, all of these will only work before you drop the initial object.

To find out what you need to recreate when you drop a view named v1:

select tabname from systables where tabid in 
( select dtabid from sysdepend
   start with btabid = (select tabid from systables where tabname = 'v1')
 connect by prior dtabid = btabid);

If you don’t have the code for your views, you can extract it from the database. Example for v1:

select viewtext from sysviews
 where tabid = (select tabid from systables where tabname = 'v1') 
 order by seqno;

Which private and public synonyms exist for a table or view (only first-level synonyms; if you have multi-level synonyms, write a hierarchical query on syssyntable, as sketched after the two queries below):

select 'create private synonym ' || trim(t2.owner) || '.' || trim(t2.tabname)
      || ' for ' || trim(t1.tabname) || ';'
 from systables t1, systables t2, syssyntable syn
 where t1.tabid = syn.btabid and t2.tabid = syn.tabid
 and t2.tabtype = 'P'
 and t1.tabname = 'v1';

select 'create public synonym ' || trim(t2.tabname) || ' for ' || trim(t1.tabname) || ';'
 from systables t1, systables t2, syssyntable syn
 where t1.tabid = syn.btabid and t2.tabid = syn.tabid
 and t2.tabtype = 'S'
 and t1.tabname = 'v1';
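For multi-level synonyms, a hierarchical variant could be built the same way as the sysdepend query above (an untested sketch):

select tabname from systables where tabid in
 ( select tabid from syssyntable
    start with btabid = (select tabid from systables where tabname = 'v1')
  connect by prior tabid = btabid);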

Which permissions exist for a table or view:

select * from systabauth where tabid = 
 (select tabid from systables where tabname = 'v1');
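The raw systabauth rows take some decoding: the tabauth column is a flag string (s, u, i, d, x, a, r for select, update, insert, delete, index, alter and references; uppercase means WITH GRANT OPTION). A slightly friendlier listing might be:

select trim(t.tabname) || ': ' || a.tabauth || ' -> ' || trim(a.grantee)
  from systables t, systabauth a
 where t.tabid = a.tabid
   and t.tabname = 'v1';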

Which permissions exist for a procedure or function:

select 'grant execute on ' || procname || ' to ' || trim(grantee) || ';'
 from sysprocedures, sysprocauth 
 where sysprocedures.procid = sysprocauth.procid
 and procname = 'my_proc';

Instead of writing all these queries and digging through code that really shouldn’t be a concern when doing a schema update, what we really need is a way to recreate an existing object without dropping its dependent objects – something like an ALTER VIEW SQL statement (and, similarly, ALTER PROCEDURE for stored procedures).

There is an RFE posted for this here, so feel free to vote.



PDQ Priority in Innovator-C Edition with AUS

If you recently downloaded and installed Innovator-C, and haven’t messed with any of the default AUS (auto update statistics) settings, you’ve probably noticed a bunch of warnings in your online log that appear on Saturdays and Sundays, looking like this:

Warning: PDQ functionality is not supported in IDS Innovator-C Edition.
The value of PDQPRIORITY cannot be set to 10. It is reset to 0.

The reason for this is that the auto update statistics refresher is obviously trying to set the PDQ priority to 10, which is not allowed in the Innovator-C edition. You can change this by editing the sysadmin:ph_config table (as user informix):

SELECT * FROM sysadmin:ph_config WHERE name = 'AUS_PDQ';
UPDATE sysadmin:ph_config SET value = 0 WHERE name = 'AUS_PDQ';

This value is actually inserted while the sysadmin database is created (see the file $INFORMIXDIR/etc/sysadmin/sch_aus.sql). Since each Informix edition is packaged separately, the guys at IBM will hopefully change this file, so people who are new to Informix don’t get the wrong idea about the product.


IIUG Conference 2012

With the IIUG conference less than a week away, it’s time to take a look at the conference schedule. Here are some of my own recommendations on what to attend and who to listen to. Please bear in mind these are based only on my preferences and knowledge of the presenters, so if I missed anybody, it doesn’t mean a thing :). For those of you who unfortunately are not attending the conference, I suggest logging on to the iiug.org member area and searching for the presentations as soon as they become available.

The keynotes should definitely be attended – I’m hoping to find out more about IBM’s future plans for Informix and the next release. Also on this topic, Mahesh Dashora will speak about the roadmap for 2012 and beyond (F09).

For all of you (us) interested in utilizing mobile platforms, there are two sessions on Android (B03. Informix and Android: dbaccess, OAT, and more; and B15. Mobilize your data with Informix & Android), one on iOS (C05. Informix Applications Uncovered on iOS) and one on development with Genero (G12. Developing a mobile application in Genero).

There is never enough talk about security, so keep an eye on the sessions by Tom Beebe (A03. Application Security For Informix Developers) and Jonathan Leffler (H03 and H15. Demo of IDS Security features).

There will also be a number of sessions on smart metering and time series data (C15, D11, E02, E03, E12, H06) – I hope to see some interesting new examples of time series usage, especially ones that could also be applied in other areas of work.

If you are in the warehouse business, attending the sessions (co)authored by Frederick Ho should be mandatory: A02. How To Achieve Leaps in Warehouse Query Performance with IDS; A17. Deploying Solutions with Informix Warehouse Accelerator – Customer Experiences; E17. Informix Warehouse Accelerator – A look at the First Year and Beyond; F01. Data Warehousing Technology Trends – What You Need to Know.

And last, but certainly not least, there are some guys who know and understand Informix way better than the vast majority of us. I’d suggest attending the sessions by (in alphabetical order): Art Kagel, Lester Knutsen, Jonathan Leffler, John Miller III, Madison Pruet.



MSG_DATE

As of Informix version 11.50, there is a simple but cool onconfig parameter – MSG_DATE. It puts a date at the beginning of each online log message. That’s a real relief for all of us parsing our online logs to get some kind of daily report or similar. Before MSG_DATE was introduced, it was much harder to determine where the messages of the current day start; now you can simply grep the log for the date.
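For example, something like this pulls out today’s messages (the log path is a placeholder):

grep "^$(date +%m/%d/%y)" /path/to/online.log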

The only not-so-great thing about it is that the date is always in MM/DD/YY format – it doesn’t depend on the localization settings. Date formats in log messages can also differ from one log to another (e.g. the online log and the onbar activity log), which makes parsing several logs more demanding (or should I say, kind of a pain in the a..). Nevertheless, it’s a very useful parameter which, IMHO, should default to 1. It can be changed online using onmode -wf or -wm.
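E.g., to turn it on both in memory and in the onconfig file:

onmode -wf MSG_DATE=1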

