Database Design for New Capabilities
Over the last few years we've been inundated with new capabilities in our databases that we hardly get to use. For example, physical files now have a capability for making sure the data in a field is correct.
If you have Operations Navigator, go to the database folder and find a file. Right-click on the file and look at Properties. One of the tabs you see is for constraints, both key and referential. Seeing that, I thought, “Great, there are some nagging problems with some files, and this would make sure the data is right!” So I started to hook it up, and then, oops! You can’t do it if you don’t journal the file.
This may not seem like a big deal for larger shops with Operations departments, but many smaller shops have no clue what a journal is! So here is this wonderful tool that is limited by its implementation.
I thought, what’s wrong with using journaling over all the critical files? So I looked into it, and I found some very interesting information that impacts the way we build our database tables. On some clients with lots of activity, the journals were huge: over a gig every few hours. Then I analyzed what was in the journal. Everything that was changed was in there. That was good, but some very large files, mostly masters, showed up in the journal hundreds of times. These files have one or two fields that are updated with some count or value. Each time, a new journal record was written with all the data that was not changing.
What this highlights is a problem with the way we have built tables in the past. It has nothing to do with relational normal form; it has to do with data activity. If we redesigned our tables, putting data that does not change much in a different table from the data that does, the journal would be substantially smaller. This would make the use of journaling much simpler and give us the ability to use the constraint and referential integrity capabilities.
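To sketch the idea, here is a hypothetical item master split in two, using Python's built-in sqlite3 module as a stand-in for any SQL database (the table and column names are invented for illustration, not from any real 400 file):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Descriptive fields that rarely change stay in ITEM_MASTER; the
# counters that are updated constantly live in the narrow
# ITEM_ACTIVITY table, so a journal (or any change log) only has to
# record the small, busy row.
cur.execute("""
    CREATE TABLE ITEM_MASTER (
        ITEM_NO     INTEGER PRIMARY KEY,
        DESCRIPTION TEXT NOT NULL,
        UNIT_COST   REAL
    )""")
cur.execute("""
    CREATE TABLE ITEM_ACTIVITY (
        ITEM_NO     INTEGER PRIMARY KEY
                    REFERENCES ITEM_MASTER(ITEM_NO),
        QTY_ON_HAND INTEGER NOT NULL DEFAULT 0,
        QTY_SOLD    INTEGER NOT NULL DEFAULT 0
    )""")

cur.execute("INSERT INTO ITEM_MASTER VALUES (1001, 'Widget, large', 4.25)")
cur.execute("INSERT INTO ITEM_ACTIVITY (ITEM_NO) VALUES (1001)")

# The high-frequency update now touches only the two-counter row,
# never the wide descriptive record.
cur.execute(
    "UPDATE ITEM_ACTIVITY SET QTY_SOLD = QTY_SOLD + 5 WHERE ITEM_NO = 1001")
con.commit()
```

The same split can be expressed in DDS or SQL DDL on the 400; the point is only that the journal entries for the busy table carry a few bytes instead of the whole master record.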
This does require a redesign of the database, and that may not always be practical. However, if we think about the new files we are always adding, then we can start to use these things slowly and learn the pros and cons at a smaller cost.
Some Examples
Key constraint—makes sure the key field contains a value (not null or blank on the 400).
Referential constraint—links all the records for an order to its order header, so that if you delete the order header, all the records pointing to this order are also removed. You have the option of removing the dependent records or leaving them alone, but the system will watch it for you.
Index constraint—where the index from this file must match certain criteria in another one, such as, “You can’t have an order detail record without an order header.”
Have you ever wanted to make sure that as someone deletes a record, all associated records get deleted too? You could write a program, but it would be really nice if the system did it for you. How nice would it be to build all that logic into the database itself, and not have to worry about the programming required, or that someone used DFU or SQL to mess you up?
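Here is a small sketch of exactly that, again using Python's sqlite3 module as a convenient stand-in for a SQL database (the order tables and numbers are made up for the example). The detail file is tied to the header with a referential constraint whose delete rule is CASCADE, so the database rejects orphans and cleans up details on its own:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
cur = con.cursor()

cur.execute("""
    CREATE TABLE ORDER_HEADER (
        ORDER_NO INTEGER PRIMARY KEY   -- key constraint: present and unique
    )""")
cur.execute("""
    CREATE TABLE ORDER_DETAIL (
        ORDER_NO INTEGER NOT NULL
            REFERENCES ORDER_HEADER(ORDER_NO) ON DELETE CASCADE,
        LINE_NO  INTEGER NOT NULL,
        ITEM_NO  INTEGER,
        PRIMARY KEY (ORDER_NO, LINE_NO)
    )""")

cur.execute("INSERT INTO ORDER_HEADER VALUES (1)")
cur.executemany("INSERT INTO ORDER_DETAIL VALUES (1, ?, 500)",
                [(1,), (2,), (3,)])

# A detail record with no header is rejected by the database itself...
try:
    cur.execute("INSERT INTO ORDER_DETAIL VALUES (99, 1, 500)")
except sqlite3.IntegrityError:
    pass  # no order header 99, so the insert fails

# ...and deleting the header removes its detail records automatically,
# no matter which program (or ad hoc SQL) issued the delete.
cur.execute("DELETE FROM ORDER_HEADER WHERE ORDER_NO = 1")
remaining = cur.execute("SELECT COUNT(*) FROM ORDER_DETAIL").fetchone()[0]
```

After the delete, `remaining` is zero: no program wrote that cleanup logic, the constraint did. The same rule on the 400 protects the files even from someone poking at them with DFU.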
In case you missed it, the idea here is to build a system where the database is smarter about itself. Rather than having a lot of this logic in programs (code), it’s in the database. Yes, there is a cost to all this magic: a performance cost, and a cost that hurts more. You have to think about how you are laying out the tables before you just start creating them. Thinking is always more painful than just slinging it out there. Of course, if you think about it a little, it can pay very large dividends.
Many of us are laying out new databases for new projects, or for our new Internet projects. Maybe now would be a good time to see if the new tools we have can help us make better systems quicker. I think they can if we give them a chance.
(John Bussert is president of Swift Technologies (Marengo, Ill.), a company specializing in AS/400 and Windows NT software. email@example.com)