Understanding Data Change Events
Using data change event listeners, application-level Java code (e.g. the Space Inventory application workflow rules) can register callbacks with the datasource at startup. The core invokes the callback when (see the sketch after this list):
- A datasource inserts, updates, or deletes a record in the specified table.
- A datasource performs a bulk statement (e.g. UPDATE … WHERE …) against that table.
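The exact registration and callback API depends on the Archibus version; the following is a minimal sketch in which DataChangeListener, DataEventRegistry, and the Map-based parameter shapes are hypothetical stand-ins used only to illustrate the contract described above.

```java
// Hypothetical illustration only: DataChangeListener and DataEventRegistry are
// stand-in names, not the actual Archibus core API.
import java.util.Date;
import java.util.Map;

public interface DataChangeListener {

    /** Called when a datasource inserts, updates, or deletes a single record in a registered table. */
    void onRecordChange(String tableName,
                        Map<String, Object> primaryKey,
                        Map<String, Object> newValues,
                        Map<String, Object> oldValues,
                        String userName,
                        Date serverTime);

    /** Called when a datasource runs a bulk statement (e.g. UPDATE ... WHERE ...) against a registered table. */
    void onBulkChange(String tableName,  // may be null if the core cannot determine the table
                      String sqlText,
                      String userName,
                      Date serverTime);
}

// Registration at application startup, e.g. from a Space Inventory startup rule
// (the registry call is likewise a hypothetical stand-in):
//   DataEventRegistry.register("rm", new RoomTransactionDataEventListener());
```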
Single-record Changes
For individual records, the core invokes the callback with these parameters (see the sketch after this list):
- The table being changed (in case the same WFR handles multiple tables).
- The primary key of the changed record.
- The new field values for that record.
- The old values for the changed fields of that record if the user changed a primary key value and did not use direct SQL.
- The user name (afm_users.name) of the account whose session requested the change.
- The date and time on the application server of when the change was made.
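To make the parameter list concrete, here is a sketch of a single-record handler; the signature and the Map-based parameter shapes are assumptions carried over from the sketch above, not the literal Archibus API.

```java
import java.util.Date;
import java.util.Map;

public class SingleRecordChangeSketch {

    // Illustrative handler: names and signature are assumptions, not the Archibus API.
    public void onRecordChange(String tableName,
                               Map<String, Object> primaryKey,
                               Map<String, Object> newValues,
                               Map<String, Object> oldValues,
                               String userName,
                               Date serverTime) {
        if (!"rm".equals(tableName)) {
            return; // the same handler may be registered for several tables
        }
        if (oldValues == null || oldValues.isEmpty()) {
            return; // old values may be absent, e.g. for direct-SQL changes
        }

        // Per the contract above, oldValues holds only the changed fields, so its
        // key set identifies exactly what changed.
        for (Map.Entry<String, Object> entry : oldValues.entrySet()) {
            String field = entry.getKey();
            System.out.println(userName + " changed " + tableName + " " + primaryKey
                    + " at " + serverTime + ": " + field + " "
                    + entry.getValue() + " -> " + newValues.get(field));
        }
    }
}
```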
Bulk Changes
For bulk changes, the core invokes the callback with these parameters:
- The table that the bulk change affects (e.g. “wr”). This may be null if the core does not know which table was affected.
- The content of the SQL statement that made the change (e.g. DELETE FROM wo WHERE description = 'desc' AND wo_id = '1234433').
- The user name, date, and time, as above.
The callback can log this change or inspect the SQL statement textually to determine whether it needs to take any other action, as in the sketch below.
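A bulk-change handler typically just logs the statement or performs a cheap textual check on the SQL before deciding whether to act. A sketch, again with an assumed signature:

```java
import java.util.Date;
import java.util.Locale;

public class BulkChangeSketch {

    // Illustrative handler: the signature is an assumption, not the Archibus API.
    public void onBulkChange(String tableName, String sqlText, String userName, Date serverTime) {
        // tableName may be null if the core could not determine the affected table.
        String sql = (sqlText == null) ? "" : sqlText.trim().toUpperCase(Locale.ROOT);

        // Crude textual inspection: react only to bulk deletes against the work request table.
        boolean deletesWorkRequests = sql.startsWith("DELETE") && sql.contains("FROM WR ");
        if (deletesWorkRequests || "wr".equals(tableName)) {
            System.out.println(serverTime + " " + userName + " ran bulk statement: " + sqlText);
        }
    }
}
```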
Other Conditions
Some other conditions are:
- Multiple applications can register callbacks on the same table. The core calls them in sequence, but not in a particular order.
- The callback cannot cancel the triggering change.
Transactions and Commits
If you are writing application-level logic, such as the logic that synchronizes the Workspace Transactions table with the Rooms table, whether to commit at any point is an application-level decision. Some application-level rules, such as those that create autonumbered records, must commit as a part of the logic.
If you are writing a callback for audit-logging purposes, do not commit. If the transaction is canceled or one of the database transactions fails, the originating change, along with any records you wrote to the log, is rolled back with it.
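For an audit-logging callback, the key point is to write the log record through the same transaction and never commit. A minimal sketch using plain JDBC, in which the db_event_log table name and the idea of receiving the transaction’s Connection are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.util.Date;

public class AuditLogSketch {

    // Writes one row to a hypothetical event-log table using the connection that
    // carries the originating change. Deliberately no commit: if the parent
    // transaction rolls back, this log row rolls back with it.
    public void logChange(Connection connection, String tableName, String description,
                          String userName, Date serverTime) throws SQLException {
        String sql = "INSERT INTO db_event_log (table_name, description, user_name, event_time) "
                   + "VALUES (?, ?, ?, ?)";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, tableName);
            stmt.setString(2, description);
            stmt.setString(3, userName);
            stmt.setTimestamp(4, new Timestamp(serverTime.getTime()));
            stmt.executeUpdate();
            // No connection.commit() here; the workflow rule container commits or rolls back.
        }
    }
}
```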
Cascading Updates
Some user actions trigger cascading updates to the database. For instance, if you delete a room, the system clears all dependent references to that room in the other tables, e.g. the equipment, employee, work order, etc. tables. If you rename a room, the system updates all dependent tables.
The data change event notification fires only on the root record of the cascading change. For instance, in the case above, the data change event fires when the system updates the room record, and your data change service will get a notification that a particular primary key record in the room table was deleted or updated. However, your handler will not receive notification that the equipment, employee, work order, etc. tables have changed.
To take another example, suppose you delete a building record. This also deletes all floor and room records as well as clears all references to that building, those floors, and those rooms throughout the database. However, your handler will receive notice only that the root building with a particular primary key was deleted.
This is the correct notification choice for two reasons. First, the root record is the one with the most precise information. In the case of cascading updates, the system knows which particular table and key is being changed. The application knows of the relations between tables and can react to the change as needed. Second, this behavior is consistent across all databases.
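For example, a handler that keeps application-level data in sync can use its own knowledge of the building/floor/room (bl, fl, rm) hierarchy to expand the single root notification itself. In the sketch below, onBuildingDeleted and archiveTransactionsForBuilding are hypothetical names; only the bl_id column is standard inventory naming.

```java
import java.util.Map;

public class CascadeAwareSketch {

    // Illustrative only: the core notifies us about the root "bl" record; it does not
    // send separate events for the cascaded fl/rm changes, so we expand the cascade here.
    public void onBuildingDeleted(Map<String, Object> primaryKey) {
        String blId = (String) primaryKey.get("bl_id");

        // The application knows the bl -> fl -> rm relationship, so it can clean up
        // its own bookkeeping for every floor and room of the deleted building.
        archiveTransactionsForBuilding(blId);
    }

    private void archiveTransactionsForBuilding(String blId) {
        // Hypothetical helper, e.g. "UPDATE <transactions table> SET status = 'Archived' WHERE bl_id = ?".
        System.out.println("Archiving workspace transactions for building " + blId);
    }
}
```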
Use Cases
Changing a Room’s Owner-of-Record
A user changes the Division and Department in the Rooms table to change the department that is the “owner of record”. The space domain provides a callback workflow rule that updates the Workspace Transactions table to audit the change.
On being notified of a single-record update to a Rooms (rm) table record, this WFR (sketched after this list):
- Takes the appropriate action in the Workspace Transactions table based on whether the change was an insert, update, or delete. The WFR uses the primary key of the record (bl, fl, rm) and the new values (dv and dp) to do so.
- Records who changed the record (i.e. the afm_users.name) and when (date and time on the app server).
- Does not need to know the previous dv and dp values, as these are already in the Workspace Transactions table.
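A simplified sketch of the single-record path just described; the signature mirrors the hypothetical listener interface from earlier, and insertTransaction is a placeholder for the actual Workspace Transactions update rather than the real RoomTransactionDataEventListener code.

```java
import java.util.Date;
import java.util.Map;

public class RoomOwnerChangeSketch {

    // Illustrative handler for a single-record change on the Rooms (rm) table.
    public void onRecordChange(String tableName,
                               Map<String, Object> primaryKey,
                               Map<String, Object> newValues,
                               Map<String, Object> oldValues,
                               String userName,
                               Date serverTime) {
        if (!"rm".equals(tableName)) {
            return;
        }

        // Primary key of the changed room and the new owning division/department.
        String blId = (String) primaryKey.get("bl_id");
        String flId = (String) primaryKey.get("fl_id");
        String rmId = (String) primaryKey.get("rm_id");
        String dvId = (String) newValues.get("dv_id");
        String dpId = (String) newValues.get("dp_id");

        // Write the audit row; the previous dv/dp values are already the latest
        // entry in the Workspace Transactions table, so they are not needed here.
        insertTransaction(blId, flId, rmId, dvId, dpId, userName, serverTime);
    }

    private void insertTransaction(String blId, String flId, String rmId,
                                   String dvId, String dpId, String userName, Date when) {
        // Hypothetical helper standing in for the actual Workspace Transactions update.
        System.out.println(userName + " @ " + when + ": " + blId + "-" + flId + "-" + rmId
                + " now owned by " + dvId + "/" + dpId);
    }
}
```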
On being notified of a bulk update to the Rooms table, this WFR (see the sketch after this list):
- Logs the details of the bulk change to the database event log (the table, user, date, time and query).
- Sets the resynchRoomPercentagesTable activity parameter to Yes. Key reports and actions that depend on Space Transactions check this activity parameter before running and alert the user if the table needs to be resynched.
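A sketch of the bulk path: log the statement and raise the resynch flag. The setActivityParameter helper is a hypothetical stand-in for however the deployment stores the resynchRoomPercentagesTable activity parameter.

```java
import java.util.Date;

public class RoomBulkChangeSketch {

    // Illustrative bulk handler for the Rooms (rm) table.
    public void onBulkChange(String tableName, String sqlText, String userName, Date serverTime) {
        // Log the details of the bulk change (table, user, date/time, and query).
        System.out.println(serverTime + " " + userName + " bulk change on " + tableName + ": " + sqlText);

        // Flag that the Workspace Transactions data may be out of sync with the Rooms table.
        setActivityParameter("resynchRoomPercentagesTable", "Yes");
    }

    private void setActivityParameter(String name, String value) {
        // Hypothetical helper; in practice this updates the activity parameter record.
        System.out.println("Set activity parameter " + name + " = " + value);
    }
}
```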
Only CRE/FM department personnel make bulk changes, such as when they use Replace Column in the Smart Client or Edit Data Multiple in the Extension for AutoCAD. They also tend to make bulk changes on just the portion of the inventory that is new and not yet managed using Space Transactions. As such, it is appropriate to ask these users to resynch after making extensive changes to the Room inventory.
The use case to change a room’s Category and Type and flow the appropriate change down to the Workspace Transactions table involves the same process.
The use case to change an Employee’s Primary-Room assignment in the Employees (em) table also follows the same process.
See RoomTransactionDataEventListener
Deleting a Room Record
This use case illustrates the need for the notification before the deletion occurs.
If a user deletes a room record, the data change event needs to archive the historical information for that room that is currently in the Workspace Transactions table before the room is deleted. Otherwise, the deletion cascades to the Workspace Transactions table and removes the very information that needs to be archived.
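A sketch of the ordering requirement: the handler must copy or mark the room’s Workspace Transactions history before the delete cascades. The beforeRoomDeleted callback and archiveHistory helper are assumptions used only to illustrate the sequence.

```java
import java.util.Map;

public class RoomDeleteArchiveSketch {

    // Illustrative: called before the room record is deleted, while its dependent
    // Workspace Transactions rows still exist.
    public void beforeRoomDeleted(Map<String, Object> primaryKey) {
        String blId = (String) primaryKey.get("bl_id");
        String flId = (String) primaryKey.get("fl_id");
        String rmId = (String) primaryKey.get("rm_id");

        // Copy the room's transaction history to an archive table (or mark it archived)
        // now; once the delete cascades, these rows would be gone.
        archiveHistory(blId, flId, rmId);
    }

    private void archiveHistory(String blId, String flId, String rmId) {
        // Hypothetical helper standing in for the actual archive step.
        System.out.println("Archived workspace transaction history for " + blId + "-" + flId + "-" + rmId);
    }
}
```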
See RoomTransactionDataEventListener
Changing an Archibus User’s Information
The system activity supplies a logging callback workflow rule that the core calls whenever a user changes a User, Role, or Process Assignment record.
A user uses the Smart Client, the Web Central Archibus Users form, or the My Profile form to change an afm_users record.
The callback workflow rule logs the single-record or bulk update in the database event log by recording the table, the name of the user making the change, the date and time, and either (see the sketch after this list):
- the SQL that made the bulk update, or
- the primary key and changed fields for a single-record update.
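A sketch of this branching, with an assumed logEvent helper standing in for the actual database event log write:

```java
import java.util.Date;
import java.util.Map;

public class UserChangeLogSketch {

    // Illustrative logging callback for afm_users / role / process-assignment changes.
    public void onSingleRecordChange(String tableName, Map<String, Object> primaryKey,
                                     Map<String, Object> changedFields,
                                     String userName, Date serverTime) {
        logEvent(tableName, userName, serverTime,
                "key=" + primaryKey + " changed=" + changedFields);
    }

    public void onBulkChange(String tableName, String sqlText, String userName, Date serverTime) {
        logEvent(tableName, userName, serverTime, "sql=" + sqlText);
    }

    private void logEvent(String tableName, String userName, Date when, String details) {
        // Hypothetical helper writing one row to the database event log.
        System.out.println(when + " " + userName + " " + tableName + " " + details);
    }
}
```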
In the case of an inquiry, the system administrator can review the log.
Reporting Exceptions
There are a number of cases in which the site can bypass the change events, for instance if users at the site update the database using direct SQL or by loading a database extract file.
In such a case, use the database transaction log or database triggers.
However, sites may wish to have an application-level exception report that detects when this kind of change has taken place.
For instance, consider the use case in which the Space Inventory application has a workflow rule that flows Room table changes to the Workspace Transactions table. Suppose a site then imports a set of room records using direct SQL statements that bypass the Archibus core. The activity can create a workflow rule that finds exceptions by comparing the values in the Rooms table to the most recent record in the Workspace Transactions table (see the sketch below). If this exception report finds a difference, it can set the resynchRoomPercentagesTable activity parameter to Yes. Key reports and actions that depend on Space Transactions check this activity parameter before running and alert the user if the table needs to be resynched.
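One way to implement such a check is a single comparison query. In the sketch below, rm_trans and its date_end column are placeholder names for the Workspace Transactions table, not necessarily the real schema; a mismatch means the tables need a resynch, and the rule would then set resynchRoomPercentagesTable to Yes as described above.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RoomSyncExceptionSketch {

    // Illustrative exception check: compares the current owner in rm with the owner
    // recorded by the most recent Workspace Transactions row.
    public boolean roomsOutOfSync(Connection connection) throws SQLException {
        String sql =
            "SELECT COUNT(*) FROM rm "
          + "WHERE NOT EXISTS ("
          + "  SELECT 1 FROM rm_trans t "
          + "  WHERE t.bl_id = rm.bl_id AND t.fl_id = rm.fl_id AND t.rm_id = rm.rm_id "
          + "    AND t.dv_id = rm.dv_id AND t.dp_id = rm.dp_id "
          + "    AND t.date_end IS NULL)";  // placeholder for "most recent/open transaction"
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            rs.next();
            return rs.getInt(1) > 0;   // any mismatch means the tables are out of sync
        }
    }
}
```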
Usage Notes
- Intended Use: The intent behind the data change services is to do targeted cleanup after users change inventory tables or to do auditing. The intent is not to have the handlers implement large amounts of application-level logic.
- Performance: If you write a data change service, make certain it executes very fast. Do not query for data dictionary information or other heavy information. Keep the list of tables that your handler acts on as small as possible.
- Threads: Do not create new threads of execution in workflow rules, such as a long-running job. This rule goes double for data change services.
- Commits: Avoid committing inside data change services; in general, there is no reason to do so, because the workflow rule container commits in any case when the workflow rule ends. If the parent workflow rule encounters an error and rolls back, the concomitant data change service changes will roll back as well.
- The handler cannot assume that the parent workflow rule does not commit. The parent workflow rule may be updating a large number of records and will need to commit for performance reasons. Or the parent workflow rule may need to commit to propagate some changes, such as when a workflow rule commits after the creation of auto-numbered records in Oracle so that later portions of the workflow rule code can refer to those particular auto-numbered records.
- Keep in mind that if the parent workflow rule throws an error after the data change service fires, the data change service logging will not be in the log. If you need to track not just all changes but all attempts to change, write a custom handler for that exception.