The Sybase support is going to be restored by CFBam 2.0 for the 2.1 projects, because I've remembered how to deal with the mutable complex join issue in Sybase ASE. It'll take a while to implement -- it's been a while since I had code doing what I'm doing with the ClearDeps and the DelDeps. The Chain support relies on atomic record operations, so it never encounters the immutable cursor issue.
The attributes SchemaDef.JXMsgRqstSchemaXsdSpec and SchemaDef.JXMsgRspnSchemaXsdSpec are needed to implement the custom JavaXxx expansions in MSS Code Factory 2.0. The bindings are already present in 1.11, so there is no need to back-port to 1.10 and remanufacture 1.11.
MSS Code Factory CFBam 2.1 and 2.2 are also affected and have had their models edited accordingly in the 2.0 and 2.1 code bases.
Each of the databases needs to allow for a JavaXxxSchemaObjImport element.
The missing attribute has been added to the model.
The Java* attributes of the SchemaDef and Table specifications have been renamed J* so that they don't collide with the custom verbs in MSS Code Factory 2.0.
The manufactured bindings were not properly overloading getValueObject(), which was causing the formatters to fail at runtime for MSS Code Factory 2.0. This has been corrected, and MSS Code Factory 2.0 now processes the CFSecurity 2.1 model without throwing exceptions or displaying error messages to the console. That's not to say the code produced is valid yet, but it runs properly for the migrated Java rules.
There was a bug with creating contexts in CFCore 1.11.12399 which has been corrected by CFCore 1.11.12594.
The genDef wasn't getting properly set when building a reference context under one condition that apparently had never been encountered in the 1.11 code, but which has cropped up in the 2.0 code.
The CFUniverse 2.0 code was too big to build again, so the schema references to CFAsterisk and CFFreeswitch have been removed. I'm seriously considering dropping the CFUniverse 2.0 project entirely due to the limitations imposed by Java itself. Were I using C++ on a 64-bit platform, I wouldn't be encountering 32-bit address space limitations (even with 64-bit JVMs, the Java specs use 32-bit sizing on things like the string constant spaces.)
It turned out I had two XSD component relationships from the SchemaDef to the Table and Atom specifications, which resulted in two sets of the elements being produced for the XSD. I've decided not only to eliminate the duplicates, but also to rename the survivors. I'll have to modify the 2.0 parser accordingly after I've manufactured the code -- I don't like the duplicate functionality of having more than one method set for accessing an element list. Fortunately, it's not a problem with the engine, so the other models aren't affected by this nasty little bug.
There will, of course, be old code elements for CFBam 2.0 that need to be removed after the manufacturing run is complete.
The MSS Code Factory 2.0 CLI now successfully parses the CFDbTest 2.1 model, so the parser for 2.0 is done until I need to add new features or discover bugs that need to be fixed. Instead of the parser, I'll now be focusing on migrating the Java rules to 2.0, and migrating any custom verbs from 1.11 that I can't emulate through GEL rules.
The changes to the rule base will be dramatic and widespread -- 2.0 is emphatically not compatible with 1.11, even though the same XML format and GEL syntax are used.
There were a couple of minor tweaks to the CFBam 2.0 model required by the testing of the SchemaRef processing in MSS Code Factory 2.0, and I also decided to make the SchemaDef.PublishURI attribute required.
The new CLI for MSS Code Factory 2.0 now builds with all of the features migrated (but untested) from 1.11. There might be some additional changes required in the future, but for now I don't expect to be making any further modifications to CFBam 2.0's model.
There were some missing Java* attributes for the SchemaDef, as well as other modelling corrections that had to be made throughout the day. With this update, I believe that I'll have a version of the new parser that will accept the entire CFSecurity 2.1 model.
The CFBam 2.0 model has been tweaked and enhanced a little bit more. At this point, its objects are being successfully parsed by CFCli 2.0 up to the TableRelations specification. Rather than create variations on TableRelations for each of the object types that can be added to a table, I think I'll rename that artificial element TableAddendums and cluster all of the possible additional elements underneath it.
There is no *functional* change since SP7; only modelling changes have been made.
The CFBam 2.0 model mistakenly specified that SchemaDef.PublishURI was required, but did not provide an initial value. The attribute has been made optional.
Things are going well with the 2.0 model parser -- parsing now proceeds down to the SchemaDef, which fails because of this bug. The general structure of the new MSSBam XSD has been sketched out and matched to the pruned version of the SAX Parser. I'll need to re-migrate the SchemaDef handler once the remanufacturing is done, but that's no big deal.
Note that I'm only going to manufacture the layers I need for the parser work at this time (java and java+msscf), so I won't be publishing a CFBam 2.0 update until I have the parser working and it's worthwhile to invest the time in doing a full manufacturing run.
The CFBam 2.0 model has been enhanced to support a couple of features that had been added to 1.11 but not brought forward to 2.0 (specifically, the LoaderBehaviour object and attributes). I've also eliminated the "Author" and "User" constructs in favour of specifying a CopyrightHolder as an attribute of a SchemaDef.
CFCli 2.0 is the 2.0 version of MSS Code Factory's command line interface and engine. At this point, it's only an untested shell, but it does bring together the CFBam jars, CFLib, CFCore, and the beginnings of what will be a heavily customized version of the CFBam parser (for MSSBam 2.0 models.)
The only thing I could think of that needed any more work in SP6 was the support for date-time-timestamp values and their timezone-attributed variants. I waited a day or two to think about what else I might have forgotten to take care of, came up with nothing, so here is the fixed version of the date handling for all of the databases.
Most of the changes actually happened in the CFLib 1.11.12561 update release, but there are code changes to the JDBC layers and to the X(ml)Msg layer that need to be remanufactured. You do not need to remanufacture your database scripts, though you should recreate and reload your database information after refreshing your project build.
It's worth noting that with the fixes to the TZTime handling, MySQL now passes all of the CFDbTest 2.0 tests. Part of the fixes applied included making the "zero point" of TZTime values 2000.01.01 instead of 0001.01.01. If you're only referencing the time attributes of the resulting Calendar values (as you should be), no changes should be required to your code. But if you've been accessing the date attributes, you'll need to adjust your code accordingly.
I do not expect to release any further MSS Code Factory updates in the foreseeable future. If I work on anything, it will be fleshing out one of the 2.0 projects to turn it into a "real application" in order to discover what limitations and constraints I may have neglected to consider and need to address. But at this point, I've implemented everything I originally intended to 18 years ago and an awful lot more.
Note that login security was always intended to be handled by an external agent; therefore the implementations of the Swing prototype GUI just let you choose a user identification without providing any form of security other than the database login password. You should replace those screens and functionality with appropriate security checks, and then invoke the "security" login functions to assign that authenticated user to an appropriate Cluster/Tenant/SecUser identity for runtime processing.
With this update, I pronounce 1.11 "done". I can't think of a single thing to work on that isn't a wishlist-extra-nice-to-have-bonus.
The CFLibXmlUtil date/time methods have all been debugged and corrected. In addition, a CFLibDbUtil has been added which holds on to a global database server timezone and provides calendar conversion routines for UTC and database server timezone values from generic value calendars. These routines will be used by updated versions of the database JDBC code to "normalize" the TZ values as date-time columns in the database, persisted in the local timezone of the database (seeing as Oracle is the only database that actually lets you *specify* a timezone, I'm not going to rely on that functionality for portability's sake.)
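To illustrate the kind of "normalization" being described, here is a minimal sketch; the class and method names are assumptions for illustration, not the actual CFLibDbUtil signatures. Converting a value calendar to the database server timezone only re-labels the instant -- the underlying millisecond value is unchanged:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class DbTimeSketch {
    // Hypothetical stand-in for the CFLibDbUtil conversion routines:
    // re-express the same instant in the database server's timezone.
    public static Calendar toServerTimezone(Calendar value, TimeZone serverTz) {
        Calendar converted = new GregorianCalendar(serverTz);
        converted.setTimeInMillis(value.getTimeInMillis());
        return converted;
    }

    public static void main(String[] args) {
        Calendar utcNoon = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        utcNoon.clear();
        utcNoon.set(2014, Calendar.OCTOBER, 21, 12, 0, 0);
        Calendar server = toServerTimezone(utcNoon, TimeZone.getTimeZone("GMT-05:00"));
        // Same instant, different wall-clock label in the server's zone
        System.out.println(server.get(Calendar.HOUR_OF_DAY));
        System.out.println(server.getTimeInMillis() == utcNoon.getTimeInMillis());
    }
}
```

Round-tripping through the server timezone this way loses the original zone label but not accuracy, which is the portability trade-off described above.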
In order to bypass the date range limits of MySQL, the base time for a TZTime is now 2000-01-01, so that negative timezones can underflow into 1999-12-31.
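A quick sketch of why the 2000-01-01 base keeps negative-offset values in range (illustrative only; the actual TZTime representation in CFLib may differ):

```java
import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class TZTimeBaseSketch {
    public static void main(String[] args) {
        // A TZTime stored as a date-time anchored on the 2000-01-01 base date
        Calendar utc = new GregorianCalendar(TimeZone.getTimeZone("UTC"));
        utc.clear();
        utc.set(2000, Calendar.JANUARY, 1, 0, 30, 0);

        // Viewed from a negative-offset zone, the wall clock underflows
        // into 1999-12-31 -- still comfortably inside MySQL's supported
        // date range, whereas a 0001-01-01 base would underflow below year 1.
        Calendar minusFive = new GregorianCalendar(TimeZone.getTimeZone("GMT-05:00"));
        minusFive.setTimeInMillis(utc.getTimeInMillis());
        System.out.println(minusFive.get(Calendar.YEAR));
        System.out.println(minusFive.get(Calendar.MONTH) == Calendar.DECEMBER);
        System.out.println(minusFive.get(Calendar.DAY_OF_MONTH));
    }
}
```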
Fortunately I was already using timestamp columns to persist TZ date-time values, so I don't need to remanufacture the database scripts. The SAX parsers will automatically receive the corrected behaviour from CFLib, so the only thing I need to do yet is modify the JDBC layers to convert date-time values to the database server timezone, and to convert them back to local time (regular) or UTC (TZ) when reading the data.
Easy-peasy. I should be done today.
I've been thinking about what I might have forgotten to do since I released Service Pack 6, and so far only one thing has come to mind: fixing the TZ date/time/timestamp support. What I want to do is have the SAX parsers convert TZ data to UTC internal values, and then when persisting that to the database, convert it to the database time zone. So you'll lose the timezone specification in the process, but the values will be accurate. Seeing as the only database that lets you *specify* a timezone is Oracle, there is no way I can think of other than "normalizing" the data that would work portably and consistently across the databases.
I'm going to give it a few more days before I do any work, though. I want to see if there is anything else I can remember that has to be done before I release a 1.11-FINAL. (Yes, I would do further releases with bug fixes if anyone were to report bugs that need correction, but FINAL would be the final feature set; I wouldn't add any more functionality later. Sooner or later one has to fish instead of cutting bait.)
Service Pack 6 provides move up/down functionality for the Chains for all of the supported databases. Note that the RAM storage does not support Chains or complex object deletes at all -- it's intended for high volume read/update/delete data, such as the call record information for an Asterisk or FreeSwitch PBX system, or the internals of MSS Code Factory itself.
There are some critical bugs fixed with Service Pack 6, including cache integrity bugs that were discovered during testing of the move up/down functionality.
With this release, I think I'm pretty much done with MSS Code Factory 1.11. I can't think of any more functions I'd want to add that I have experience with. Sure I could implement proper login security with hashing algorithms, a JEE server to receive and respond to X(ml)Msg requests, and polish the prototype GUI some more, but that's really not my forte. I spent 30 years as a back end database programmer, tuning servers and wringing every last bit of performance out of database engines that I could.
MSS Code Factory 1.11 now incorporates everything I ever learned about making an RDBMS sing and dance. It provides all the functionality points that I was ever asked to deliver to a front end application programming team, and does it all automagically from a Business Application Model.
It's been 18 years of long hours working on this project to get to this point. The idea was around even longer (I came up with the concept way back in 1987, before I'd even had any experience with data modelling tools.)
Service Pack 6 is, in essence, my life's work. My magnum opus. I have climbed my mountain, and the view is great.
MySQL now supports the full set of Chain functionality, including the MoveUp/MoveDown operations.
The JDBC for SQL Server also clean compiles with its changes, but the SQL Server scripts haven't been manufactured and tested yet, so don't download this release if you're using SQL Server. Wait for me to finish testing it, after which I'll be releasing Service Pack 6 later today.
The Oracle migration of the MoveUp/MoveDown support was about as pain-free as one could hope for while banging on a keyboard. I got it done in under three hours, including writing all the rules, manufacturing the code, installing the database, and running the tests.
It turns out that PostgreSQL exhibits the same problem if 0034-CreateComplexObjects is rerun. The problem was actually with the XML SAX parser, not the stored procedures for DB/2 LUW or PostgreSQL. There was no checking in place for the Chain relationships, so the second time a SchemaDef was loaded for update, the parser dutifully noted that the values for Prev/Next were null and clobbered the Chain links. That has been corrected.
Both PostgreSQL and DB/2 LUW now pass all their Chain data tests.
I've borrowed the style of code from sp_moveup and sp_movedown for identifying the chain links in sp_delete, along with a reworking of the variable names. Due to some apparent file corruption, I'm not confident that the 12548 release is valid; this is mostly an untested repackaging. As with the previous one, don't download it unless you're a serious glutton for punishment and untested code.
I've confirmed that the DB/2 LUW sp_create properly links the data in the valdef table during testing, and that MoveUp and MoveDown function correctly as well. However, the sp_deletes do not properly unlink data, but leave it all as nulls after the next sp_create invocation. I suspect the root problem is that the deletes are leaving more than one row with a null prev/next link (a null link indicates the head or tail of the doubly-linked list that forms a Chain), which then causes the queries for the head and tail during sp_create to populate the fetch buffers with nulls. The documentation suggests that if more than one row is returned by a select...into... statement, nulls are fetched because the query doesn't know which row to use; PostgreSQL, on the other hand, simply uses the first row returned.
It looks like I'll be investigating for a while yet this morning.
Along the way to getting the PostgreSQL MoveUp/MoveDown code working, I discovered a rather serious bug in the caching objects. When the buffer was replaced in an object, there were stale singleton references left behind in the object. So unless you did a forced-read of all of the singleton references for the object, you got stale data references. This also affected update code -- the EditObj would have the new values from the settings, but the original Obj would exhibit stale references.
You really, really, REALLY should download this release and remanufacture all your projects, even though only PostgreSQL supports the Chain implementation completely at this point.
The Swing prototype GUI has been debugged and the selection listeners completely reworked so that the MoveUp/MoveDown menu items are now properly enabled and disabled according to the row selected in the data lists.
The X(ml)Msg layer has been implemented and now invokes the PostgreSQL MoveUp/MoveDown stored procedures properly from the prototype GUI. Unfortunately, the stored procedures are still bug-ridden and corrupt the Chain links, though they have been debugged to the point that they run without throwing errors from the PostgreSQL database engine.
The X(ml)Msg Request package has modified all of the read() routines it invokes to do forced reads, on the assumption that if the client is requesting a read, it's decided the cache is stale. As the server-side cache is expected to be in sync with the client, the server cache has to be presumed to be stale as well.
The answer to Life, The Universe, and Everything!
Well, not quite, but a lot of work was done and tested for this release, so it's at least worth downloading.
The new Relation.IsChainRelation binding returns "yes" if the table which contains the relationship in question defines a Chain, and the Chain references this relationship as either the Prev or Next link relationship.
This new binding was used to implement rules to hide the Chain relationships in the object list panels, and to ensure that the reference widgets in the attribute panels are always disabled so that a user can't break the Prev/Next links, but can still follow them to the previous and next objects with the view buttons of the reference widgets (i.e. only the pickers are permanently disabled for Chain relationships in the attribute panels.)
A couple of bugs in the Swing prototype GUI were also corrected, which were causing tracebacks when chasing the prev/next links from the View/Edit windows. Similar problems would have occurred with any lookups that referenced class hierarchy objects instead of directly instantiable classes.
The list boxes in the Swing prototype GUI are now sorted by either the Chain or alphabetically by the qualified names of the displayed objects. I still haven't taken care of hiding the Prev/Next chain references; I should do that sooner rather than later before I forget to do it. (I don't want people messing with the chain link data -- it could create a corrupted database. Or maybe I'll leave the references in place, but make them read-only even for objects that are being edited.)
If you install a CFDbTest 2.0 PostgreSQL database and run the CFDbTestRunPgSqlTests script, the TableB values for the P0035 schema show data being sorted by Chains, while the list of tables for the schema shows sorting by qualified name.
The PostgreSQL JDBC for the MoveUp/MoveDown functions has been coded and clean compiles, but has not been tested.
Please note that I've decided *not* to enhance the RAM storage with the Chain support. It already doesn't support the DelDeps for deletions, and its primary purpose is to provide the in-memory image of the rule base and BAM for the engine itself. Unless I were to modify the engine to *expect* chains instead of sorting by the primary key of the data as it is sequentially loaded from the XML specifications, there would be no need for Chains. The only reason for supporting Chains in the BAM would be to allow for *directly* executing the engine against a database version of the model, and my vision is rather to provide import/export support from the eventual GUI. So you'd export the model and run that through the engine, rather than locking down the whole database for the duration of a run.
The Object layer code for the new MoveUp/MoveDown functionality for Chains has been implemented, but the database JDBC implementations, RAM storage, and X(ml)Msg Client layers are all just "Not Implemented Yet" stubs for now. But the code all clean compiles, so it's a good cut point for a release.
The stored procedure sp_movedown_dbtablename has been created and test installed for PostgreSQL, but not actually run.
Next I need to take a step back from the PostgreSQL code, and implement the Chain functionality for the RAM storage. During that implementation, I'll be defining the new JDBC table interface methods and adding Not Implemented Yet stubs to the various database JDBC layers.
I will not be testing the new moveup/movedown functionality using the RAM implementation, however. As long as it clean compiles, I'm going to assume that it'll work for now, because I don't want to create a GUI that runs a SAX loader to populate a RAM instance for subsequent manipulation by the GUI. Someday I'll do something like that as an editor for Business Application Models for those who prefer flat-file storage to using a database, but that's very far down the road.
The stored procedure sp_moveup_dbtablename has been added to PostgreSQL for each table which defines or inherits a Chain specification. The procedure installs cleanly to the database engine, but there are probably still runtime errors in it.
Next I'll need to code the corresponding sp_movedown_dbtablename stored procedure, and then add the new Table accessors to the JDBC layer for PostgreSQL. Once that clean compiles, I'll propagate the signatures to the Table interface, and code "Not Implemented Yet" JDBC stubs for the Ram storage and the other databases.
With that low level code in place, I can proceed to implement the TableObj accessors and propagate those to its interface. As this is purely a client-defined functionality, no changes to the SAX object parser will be required, but I'll have to add the new messages and parsers to the X(ml)Msg layers.
Finally I'll be ready to implement the GUI functionality, and will be able to eventually test the PostgreSQL template code. Only after I've tested the template code will I propagate it to the other databases and implement their JDBC layers.
Which should bring me to Service Pack 6.
The StripPrevNextColumnName binding will be used to work around an issue I have with correlating a Prev chain relation with a Next chain relation. By stripping Prev/Next from the names, I can get them down to the same names as the Primary Key Columns that they reference, so that the stored procedure code always references the prev/next attributes by PKey column name instead of the decorated Prev/Next columns actually used by the relationships. This of course means that your Prev/Next relationship and index columns *must* have the same names as the primary key, but decorated with Prev and Next, respectively. No other naming convention will produce valid chain support code.
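For illustration, the name-stripping amounts to something like the following minimal sketch. The real StripPrevNextColumnName is a rule-base binding, not a Java helper, and the prefix convention shown here is an assumption based on the naming restriction described above:

```java
public class StripPrevNextSketch {
    // Hypothetical helper mirroring the StripPrevNextColumnName binding:
    // remove the Prev or Next decoration to recover the referenced
    // primary key column name.
    public static String stripPrevNext(String columnName) {
        if (columnName.startsWith("Prev")) {
            return columnName.substring("Prev".length());
        }
        if (columnName.startsWith("Next")) {
            return columnName.substring("Next".length());
        }
        return columnName;
    }

    public static void main(String[] args) {
        // Decorated relationship columns map back to the PKey column name
        System.out.println(stripPrevNext("PrevId"));
        System.out.println(stripPrevNext("NextId"));
        System.out.println(stripPrevNext("Id"));
    }
}
```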
Yes, it's a hack. An ugly one. But it'll work.
Due to the number of existing bugs I corrected while working on the Chain support to date, I've decided to issue Service Pack 5 at this time. I'll inject another service pack with the remaining Chain implementation at some point in the future. Not that I expect it to take very long to complete the remaining tasks, but this release is 100% backwards compatible with SP4 and lets you specify Chain support without breaking the old functionality.
All of the databases now pass the full suite of CFDbTest 2.0 tests, with the exception of MySQL, which fails some tests due to it not supporting the full date-time range that Java allows.
The sizes of some of the CFSecurity 2.0 strings have been reduced so that there are no longer errors produced when installing a MySQL database server schema. I don't think it's an unreasonable restriction to limit things like JEE mount points, user email addresses, and cluster addresses to 192 characters.
Worth noting is that SQL Server now passes the delete security tests that it has been failing for a few months. It turned out I wasn't checking for errors properly for delete operations in the JDBC code. This additional error checking is one of the key reasons I decided to issue this current Service Pack 5 release.
When create or delete operations are performed with chained data, the previous and next objects in the chain are refreshed for atomic operations, but not for delete-by-index operations. However, seeing as you have to do a beginEdit() which forces a read from the database before allowing you to manipulate data, most code should be safe even if chains are being used. Otherwise after doing a delete-by-index you should refresh the chain members using the index-by-container-key forced read to ensure that the cached chain information is in sync. Alternatively, you could simply flush the entire cache for the affected object(s) and let them be re-read if you're not sure which container keys have been affected by the operation.
The SQL Server stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. SQL Server now passes the "Replace Complex Objects" test with CFDbTest 2.0.
SQL Server's JDBC code now properly detects errors and exceptions for the various sp_delete() stored procedures (by primary key and by index.) As a result, it now passes the security check testing that it used to fail at for the delete permission denied test.
It turned out the problem I was having with the sp_delete() implementations for SQL Server was that I hadn't declared my cursors to be local, so they were defaulting to global with my database installation. As cursors get created and bound when they are declared, that meant the deallocate for the unused cursor was not being invoked, so on the next invocation of the function declaring the cursors, an exception would get thrown because the cursor already existed. Specifying local causes the cursors to be deallocated when the stored procedure exits, preventing the problem.
The MySQL stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. MySQL now passes the "Replace Complex Objects" test with CFDbTest 2.0.
The Oracle stored procedure del_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. Oracle now passes the "Replace Complex Objects" test with CFDbTest 2.0.
The DB/2 LUW stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. DB/2 LUW now passes the "Replace Complex Objects" test with CFDbTest 2.0.
The PostgreSQL stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. PostgreSQL now passes the "Replace Complex Objects" test. Tomorrow/later today I'll start porting the new code to the other databases.
Next up is the unlinking of objects prior to a deletion. Identifying the affected records requires a select before any of the dependencies are cleared or deleted. Note that I have to assume that the PKey columns in an entire hierarchy use the same names, because there is no way to correlate the joins the way I normally do for subclass/superclass relationships (at least none that I can think of at the moment; something may come to me, but it's a small restriction that is followed by all the sample models save for CFCore 2.0, which doesn't define chains.)
SELECT prvbase.classcode AS prvclasscode,
       this.prevone AS prevone, this.prevtwo AS prevtwo,
       nxtbase.classcode AS nxtclasscode,
       this.nextone AS nextone, this.nexttwo AS nexttwo
  FROM chaintable this
  LEFT OUTER JOIN chaintable prvbase
    ON this.prevone = prvbase.pkeyone AND this.prevtwo = prvbase.pkeytwo
  LEFT OUTER JOIN chaintable nxtbase
    ON this.nextone = nxtbase.pkeyone AND this.nexttwo = nxtbase.pkeytwo
 WHERE this.pkeyone = argOne AND this.pkeytwo = argTwo
If there is no classcode, then there is no need for the outer joins, because we don't need to retrieve the classcodes for use in auditing the changes.
Updating the objects gets a little bit hairy but is manageable. I'd thought I'd have a problem correlating the columns to be updated, but then I realized I'm updating the next link of the previous object to the next link of this object, so the column names correlate and are predictable. Which is a relief, because for an hour or so there I was dreading having to move the update code to a sub-stored-proc in order to alias the column names predictably.
The update of the prev object in the sp_create() code will be used as a template for the updates of the prev and next linked objects. Note that these updates have to be performed *before* any clearing of references or deletion of sub-objects, much less the deletion of this object. It's not safe to delete anything until the links are updated to unbind this object from the chain.
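The ordering being described is the classic doubly-linked-list unlink. A minimal Java sketch follows (an illustrative node class, not the actual CFBam object model or stored procedure code):

```java
public class ChainUnlinkSketch {
    // Illustrative doubly-linked node; the real Chain links live in
    // Prev/Next relationship columns, not Java references.
    static final class Node {
        Node prev, next;
        final String name;
        Node(String name) { this.name = name; }
    }

    // Rebind the neighbours BEFORE clearing references or deleting
    // sub-objects of n -- deleting first would orphan the chain.
    static void unlink(Node n) {
        if (n.prev != null) { n.prev.next = n.next; }
        if (n.next != null) { n.next.prev = n.prev; }
        n.prev = null;
        n.next = null;
    }

    public static void main(String[] args) {
        Node a = new Node("a"), b = new Node("b"), c = new Node("c");
        a.next = b; b.prev = a; b.next = c; c.prev = b;
        unlink(b);
        // a and c are now directly linked; b is safely detached
        System.out.println(a.next == c);
        System.out.println(c.prev == a);
    }
}
```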
If you have no chains defined in your model, I highly recommend downloading this release and remanufacturing your entire project. In fact, it's got such important changes for SQL Server that I'm going to make it the default download for MSS Code Factory instead of SP4. Just don't try to model chains yet.
SQL Server now establishes prev/next links for chains. I've also modified the SQL Server JDBC to properly report errors thrown by the database engine, so it now passes the delete permission check tests.
What's odd is that instead of throwing an exception about an integrity constraint, SQL Server instead seems to continue on with processing after the delete fails, and then gets an error because a cursor is still in existence when the next iteration of the delete-by-index loop tries to remove an instance. I'm not sure how to address that problem -- I kind of count on a stored procedure throwing an exception and stopping execution when an exception is raised. Maybe I need to modify the stored procedures to explicitly check to see that an instance has been deleted (by analyzing the SQL status variable), and manually raise an exception to get the processing to stop. I'm not going to worry about it right now -- I'll deal with such changes when I'm doing the modifications to the delete processing for breaking the chain links.
The CFSecurity 2.0 model has been updated to reduce certain string columns to 192 characters so that they can be properly indexed by MySQL. I didn't like the idea of the most fundamental of the models causing errors during the installation of any of the databases. So you're now restricted to 192 character names for email addresses (user id strings), cluster addresses, tenant names, and JEE mount points.
MySQL now implements chain links in sp_create().
DB/2 LUW was not properly establishing the prev/next links after all. That has been corrected and verified through database inspections.
Oracle now implements chain links in sp_create(), and has been verified through database inspection after running the CFDbTest 2.0 test suite.
The JDBC client layers for Microsoft SQL Server, Oracle, and PostgreSQL have been updated to re-read the created instance when client-side code is required for BLOB or TEXT attributes. The changes applied by the stored procedures could not be counted on to be consistently applied without a re-read. DB/2 LUW and MySQL allow TEXT and BLOB parameters to their stored procedures, so did not require JDBC changes.
The PostgreSQL sp_create() enhancements have been brought forward to DB/2 LUW, installed, and tested. As with PostgreSQL, the "Replace Complex Objects" test now fails because the links aren't being untied by sp_delete().
The Prev links were not being properly established so this release has been pulled. An updated release with support for both DB/2 LUW and Oracle will be issued once revalidation is complete.
The PostgreSQL sp_create() rules have been updated to produce support for establishing Chain links, including auditing of the changes made to the previous objects in the Chain as they're appended to the tail. Created objects are always appended to the tail; you'll need to use the future MoveUp/MoveDown support to change the order after creating an object if necessary.
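Conceptually, the append-to-tail behaviour looks like this minimal sketch (illustrative only; the real work happens in the manufactured sp_create() stored procedures, with auditing of the former tail's updated Next link):

```java
public class ChainAppendSketch {
    // Illustrative doubly-linked node standing in for Prev/Next columns
    static final class Node {
        Node prev, next;
    }

    // Created objects are always appended at the tail of the Chain; the
    // former tail's Next link is the only neighbour that changes, which
    // is why that one update is what gets audited.
    static Node appendToTail(Node tail, Node created) {
        if (tail != null) {
            tail.next = created;
            created.prev = tail;
        }
        created.next = null;
        return created; // the new tail
    }

    public static void main(String[] args) {
        Node head = new Node();
        Node tail = appendToTail(head, new Node());
        System.out.println(head.next == tail);
        System.out.println(tail.prev == head);
    }
}
```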
The "Replace Complex Objects" test now fails for PostgreSQL, and will continue to do so until I implement the sp_delete() changes that are required to break the links of a Chain.
Before I do that, I'll be propagating the sp_create() changes to the other databases and testing their new code.
In order to perform the prev/next link substitutions in the sp_create() implementations, one needs to be able to determine which columns participate in the prev/next relations and therefore need to be specified on the fly instead of from the arguments that were passed into the stored procedure.
The Chain verbs have been reworked as a single inherited object dangling from a Table or its inheritance tree, with a HasChain binding and a Chain reference from a Table object. The old iterator "TableChains" has been removed. The iterator version was from a time when I envisioned allowing multiple user-controlled orderings for an object, but as I've thought that through over the past year or so I realized it would be far too much of a challenge to implement.
Objects which define a Chain now latch and unlatch their container object by incrementing and decrementing the revision of the container's base table prior to performing the body of the sp_create() or sp_delete() functionality. All of the databases have been manufactured and tested with the updated stored procedures.
Note that even for models incorporating Chains, this release is "safe" because all it does is pin the container of an object before a Create or Delete operation in preparation for manipulating the Chain reference links.
The client cache was not getting properly flushed when deleting by an index. The code to correct this has been implemented and clean-builds, but has not been tested. Download and use at your own risk. If there are problems, you'll have to revert to SP4: I'm embarking on the Chain support from here on in, so it will be a long time before there is another usable build for the public, though I'll be posting updates as I go along for those who want to keep track of my progress on Chain support.
Before I dive into chains, I *think* I need to modify the client side code so that when a delete-by-index client routine is invoked, it probes the cache for the index key after the delete and does a cascading forget of the instances. This is only a memory leak issue, so I'm not going to re-issue SP4 after doing so. *Done 2014.10.21*
Chains require a number of changes to support them, and I'll be posting builds as each step of the process is completed. But until all the steps are completed, the builds will produce "broken" code that results in object deletion problems or worse, so I won't be making those the default downloads. You should stick with SP4 until SP5 is released.
Note that only one chain can be defined for an object hierarchy. You may NOT have multiple chains for the same object. While I've coded it as an iterable sub-object definition, you can't actually specify more than one. I should probably update the parser to check for that.
A table which defines a Chain may not specify or inherit a Text or Blob column, because doing so would require implementing the chain linking code on the client side for some databases. That's a rather hideously ugly and difficult prospect to actually code, so it's just not supported at all.
Before any manipulation of a chain is done, the container object has to be latched; otherwise you end up with race conditions. After the chain manipulation is complete, the latch has to be undone prior to the commit. The easy way to do this is to increment the Revision of the latched object before the manipulation, and decrement it after the manipulation is done. (That way you don't end up with stale data in the client, which would be a real pain, seeing as you have to edit the container before you're allowed to manipulate the sub-objects.) *Done 2014.10.24, affects sp_create and sp_delete*
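The latch semantics can be sketched as an optimistic-revision guard. This is a hypothetical in-memory Java model (class and method names invented for illustration), not the actual stored-procedure code: bumping the revision makes any concurrent optimistic update fail, and restoring it before commit leaves the visible revision unchanged.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of latching a chain's container via its Revision.
public class RevisionLatch {
    private final AtomicInteger revision = new AtomicInteger(1);

    public int revision() { return revision.get(); }

    // A client-side optimistic update succeeds only if the caller saw the
    // current revision value.
    public boolean tryUpdate(int expectedRevision) {
        return revision.compareAndSet(expectedRevision, expectedRevision + 1);
    }

    // Latch: increment the container's revision before manipulating the chain...
    public void latch() { revision.incrementAndGet(); }

    // ...and decrement it after the manipulation, prior to the commit, so the
    // visible revision is unchanged and client caches don't go stale.
    public void unlatch() { revision.decrementAndGet(); }
}
```

While the latch is held, any editor who read the container before the chain manipulation began will fail its revision check; once unlatched, the original revision is visible again.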
I considered modifying argument lists for sp_create() and sp_update() to omit the columns which appear in chain relationships. Chains are to be maintained entirely by the stored procedures, so I don't want client-side edits stomping on the chain links. But then I realized that I can just omit the columns for the insert and leave them getting passed in rather than tweaking code all over the place, and for updates, the revision attribute will detect an edit conflict if a chained object has been modified by the creation or deletion of another object in the chain, or a movement up or down in the chain. It is a little prone to conflicts, but there really is no way around that. Besides, I have to assume that only one user is going to be editing a given complex object at any given time; one can only go so far with the granularity of shared edits. Rather than skipping the columns on the create, they'll get overridden with the tail key (prev) and nulls (next) during the insert.
sp_create_table() has to be modified to establish the chain linkage. Before the instance is created, it needs to identify the tail of the chain by selecting the class code and primary key where the container id matches, but the next reference link is null. Then it uses that selected key to set the prev reference link of the inserted object instead of the arguments that were passed in (this requires some new verbs, at a minimum an IsPrevChainKey and IsNextChainKey for a Value.) After the insertion has been performed and audited, sp_create() needs to check if the selected tail was null or not, and if not null, update the tail to reference the newly inserted object. This gets ugly because you have to handle cases where the link references are in the base table vs. split between the base table and a subclass table. Once the prev object is updated, it needs to be audited, which requires flipping between inserts into the different history tables which form subclasses of the chain object. Note that this means a *lot* of code gets duplicated through the various sp_create() routines in a class hierarchy; each sp_create() in the hierarchy has to allow for every possible subclass of the chain table to be audited. Fortunately I at least know that the subclasses which can appear are all subclasses of the chain table, or they wouldn't be referenced by the prev/next links.
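The tail-append logic above can be sketched as an in-memory model. This is hypothetical Java for illustration only; the real implementation is stored-procedure SQL and also performs the audit inserts and base-vs-subclass table handling described.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the sp_create() tail-append: find the tail, insert the
// new instance with overridden prev/next links, then relink the old tail.
public class ChainCreate {
    static final int NULL_KEY = -1;
    final Map<Integer, int[]> links = new HashMap<>(); // id -> { prev, next }

    public void append(int id) {
        // SELECT the tail: the instance whose next reference link is null.
        int tail = NULL_KEY;
        for (Map.Entry<Integer, int[]> e : links.entrySet())
            if (e.getValue()[1] == NULL_KEY) tail = e.getKey();
        // INSERT the new instance, overriding prev with the tail key and next with null.
        links.put(id, new int[] { tail, NULL_KEY });
        // UPDATE (and, in the real code, audit) the old tail to reference the new object.
        if (tail != NULL_KEY) links.get(tail)[1] = id;
    }
}
```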
sp_delete_table() needs to be modified so that it breaks the established prev and next links before deleting the object instance. This is going to be even uglier than the sp_create() changes. After the instance had been locked and the delete audit record created, the prev and next links of the chain for the current object have to be selected. They don't need to be updated to null, because the worry is the prev and next objects back-chaining to the deleted object, not whether the deleted object references other objects in the chain. Once the audit is complete, the prev object gets updated to reference the next link from the deleted object, and the next object gets updated to reference the prev link from the deleted object (with appropriate checks for null keys, of course.) After each of them is updated, they need to be audited. Deletes are going to be *huge* stored procedures after this enhancement is made.
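A minimal in-memory sketch of the unlinking follows. Again this is hypothetical Java, not the real Transact-SQL; the actual routines also write the delete and update audit records and cope with links split across base and subclass tables.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of sp_delete() chain unlinking: relink the neighbours
// around the deleted instance so the chain stays intact.
public class ChainDelete {
    static final int NULL_KEY = -1;
    final Map<Integer, int[]> links = new HashMap<>(); // id -> { prev, next }

    public void delete(int id) {
        int[] gone = links.remove(id);
        // Null-checked relinks, as in the stored procedures; each update is
        // followed by an audit insert there.
        if (gone[0] != NULL_KEY) links.get(gone[0])[1] = gone[1]; // prev.next = deleted.next
        if (gone[1] != NULL_KEY) links.get(gone[1])[0] = gone[0]; // next.prev = deleted.prev
    }
}
```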
The next things I need are sp_moveup_table_by_suffix() and sp_movedown_table_by_suffix() routines, which relink the object either one instance backwards or forwards in the chain. Up to four objects get updated by these routines, so they're going to be relatively complex. I thought about providing sp_movebefore_table() and sp_moveafter_table() implementations that would let an instance "jump" in the chain, but I've decided against that for the sake of "simplicity" (not that chains are going to be simple.) Maybe someday I'll add those routines as well; I may even change my mind before SP5 is released. Who knows what urges will strike me. Whenever I do add such code, specifying a null as the referenced object for before/after operations will allow for moving to the head/tail of the chain.
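The relinking for a move-up can be sketched as follows. This is a hypothetical in-memory Java model; as noted above, up to four instances get rewritten (the moved object, its old predecessor, the object before that, and its old successor).

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the move-up relinking: swap an instance with its
// predecessor in the chain.
public class ChainMove {
    static final int NULL_KEY = -1;
    final Map<Integer, int[]> links = new HashMap<>(); // id -> { prev, next }

    public void moveUp(int id) {
        int prev = links.get(id)[0];
        if (prev == NULL_KEY) return;             // already at the head
        int prevPrev = links.get(prev)[0];
        int next = links.get(id)[1];
        if (prevPrev != NULL_KEY) links.get(prevPrev)[1] = id; // before-prev points at moved
        links.put(id, new int[] { prevPrev, prev });           // moved now precedes old prev
        links.put(prev, new int[] { id, next });               // old prev follows moved
        if (next != NULL_KEY) links.get(next)[0] = prev;       // old next back-links to old prev
    }
}
```

A move-down would be the mirror image, swapping the instance with its successor.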
With the database changes done, the client side cache code needs to be updated such that after a create operation the referenced prev-instance is force-read to refresh it from the database. The client-side update code for the move operations will be a bit nastier, because it needs to retrieve the prev/next links of the moving object *before* it is moved, and both of those linked objects have to be refreshed as well as the objects referenced after the update. Up to three objects need to be refreshed after a move. Note that updates don't need to do anything special other than leaving the chain link references alone; you can't move an object by updating its links.
The client side caching code for deletes also needs to track the prev/next links prior to the deletion of an instance, and force-read those objects after the deletion completes.
Note that I'm counting on the revisions to detect update conflicts so that the prev/next links to be re-read are sane after the stored procedure returns. No such presumptions can be made about deletions, because the deletes intentionally ignore revision conflict detection as future-proofing for the chain code I expected to be working on some day. Otherwise the values of the result sets cursored by a delete-by-index could potentially be stale with some databases and incorporate outdated revision values. Rather than let such a possibility creep up, I'd long ago decided to ignore revision checks in the delete-instance code of the stored procedures.
New client-side interface and implementation methods for moving instances prev/next in the chain have to be added as well, along with the table methods for invoking the stored procedures.
With the database stored procedures and the client side object and cache changes made, I'll finally be ready to tackle the Swing GUI prototype changes. The first thing to be done is to add code that sorts the instances of a sub-element selection set by the chain links instead of the default primary index key sorting that is done by the existing code. Otherwise there wouldn't be much point to allowing user-established instance ordering through chains, now would there?
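Sorting a selection set by the chain links rather than the primary index amounts to walking the list from the head. A hypothetical sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical model of ordering a sub-element selection set by chain links.
public class ChainOrder {
    // Walk the prev/next links from the head instead of sorting by the
    // primary index key; a null prev link marks the head of the chain.
    public static List<Integer> inChainOrder(Map<Integer, Integer[]> links) {
        Integer head = null;
        for (Map.Entry<Integer, Integer[]> e : links.entrySet())
            if (e.getValue()[0] == null) head = e.getKey(); // prev == null => head
        List<Integer> ordered = new ArrayList<>();
        for (Integer id = head; id != null; id = links.get(id)[1])
            ordered.add(id);
        return ordered;
    }
}
```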
The menus for the element lists need to have Move Selected Up/Move Selected Down menu items and actions added to them.
With the chains established and modifiable through the GUI, it will be time to tweak the GUI a little further so that the prev/next references of an instance no longer show up in its Attribute panels as reference lookups. I don't want people to be able to mess with the linkages manually, because they could corrupt the chain data if they did so.
I think that's about it. At that point I should be ready for Service Pack 5.
Service Pack 4 eliminates the SAP/Sybase ASE support due to a lack of critically necessary features (you cannot modify the result set of a multi-table join without the cursor being closed in the stored procedure.)
The delete code for all of the database stored procedures has been fleshed out to incorporate all of the features necessary to deal with all of the 2.0 business application models defined to date, including support for clearing sub-object references automatically or explicitly, and cascading deletes of data hierarchy sub-objects in a specified order.
The client side cache is now cleaned up as best I can after an object is removed by the cascading deletes in the database, though I can still think of ways to leave stale data in the cache (for example, if you navigate to an object cluster via a Children or Details relationship and then delete the Component object which contains those references, there will be stale data in the cache, because the cleanup follows only the container hierarchy.)
Client side code to allow for dependent object picking/selection in the Swing prototype GUI has also been added, so it's quite easy to define a dependency such as the Province/State picker being dependent on the selection of the Country picker.
Off the top of my head, the only remaining feature I want to add to the 1.11 series is the support for Chains, which I'd originally been thinking about putting off until the 2.0 code series. But if I get the Chains supported by 1.11, I'll have a relatively functional GUI for 2.0 instead of having to wait for 2.1. I could see that being beneficial, as crude as that GUI might be.
So download, play, and enjoy.
The Table.SuperClassDef attributes and relationship have been completely removed as they were obsolete 1.6 code features.
I've regression tested the refreshed build by remanufacturing the Microsoft SQL Server database scripts, and there are no differences in the files produced, as I'd expected.
The custom code no longer relies on the old 1.6 get/set of the Optional Lookup Superclass Table attribute of a Table object. There were only a couple of vestigial cases of that code left. It *does* work because I maintain that linkage during the parse and merge processes, but it's not the way things *should* be done so I tidied it up. It might slow down processing a tad, but it's *correct* this way. And for 2.0, the vestigial table reference will be removed, so it'll be easier to port this code than what had been used.
Apparently a variable name got edited to another valid variable name in the code that imports super class relationships for Table instances. How this could have happened *now* after working for months and months and months is beyond me, but there you have it. And the bug was so serious that manufacturing CFUniverse 2.0 froze up on an infinite loop.
Bit rot, right?
The ClearDep support for SQL Server has passed testing via CFDbTest 2.0's "Replace Complex Objects" test with Relation.Narrowed specifications present.
Support for SAP/Sybase ASE has been dropped from MSS Code Factory 1.11 due to the following errors being produced by sp_delete_schemadef() under isql. This is an error that cannot be worked around, and I'm only willing to go so far to adapt the generic template code to any given database. It's a shame, really. I liked Sybase during the years I used it. But I must admit, the code I'm dealing with now is far more complex than any I've ever delivered to production for ASE.
1> exec sp_delete_schemadef 1, '654dbba0-eda7-11e1-aff1-0800200c9a66', '0432b621-0e44
Msg 582, Level 16, State 3:
Server 'TUNDRA', Procedure 'sp_delete_tbldef', Line 275:
Cursor 'cursDelTableColumns' was closed implicitly because the current cursor
position was deleted due to an update or a delete. The cursor scan position
could not be recovered. This happens for cursors which reference more than one
Msg 559, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_tbldef', Line 279:
Attempt to use a cursor 'cursDelTableColumns' which is not open. Use the system
stored procedure sp_cursorinfo for more information.
Msg 547, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_tbldef', Line 281:
Dependent foreign key constraint violation in a referential integrity
constraint. dbname = 'cfdbtst20', table name = 'cfdbtst20..tbldef', constraint
name = 'tblcol_table'.
Command has been aborted.
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_tbldef', Line 289:
sp_delete_tbldef() Data collision detected
Msg 547, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_tbldef', Line 291:
Dependent foreign key constraint violation in a referential integrity
constraint. dbname = 'cfdbtst20', table name = 'cfdbtst20..scopedef',
constraint name = 'table_super'.
Command has been aborted.
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_tbldef', Line 300:
sp_delete_tbldef() Data collision detected
Msg 17000, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_strtyp', Line 82:
sp_delete_strtyp() Data collision detected
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_strtyp', Line 163:
sp_delete_strtyp() Data collision detected
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_strtyp', Line 173:
sp_delete_strtyp() Data collision detected
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_strtyp', Line 183:
sp_delete_strtyp() Data collision detected
Msg 547, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_strtyp', Line 185:
Dependent foreign key constraint violation in a referential integrity
constraint. dbname = 'cfdbtst20', table name = 'cfdbtst20..valdef', constraint
name = 'tblcol_datatype'.
Command has been aborted.
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_strtyp', Line 194:
sp_delete_strtyp() Data collision detected
Msg 547, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_schemadef', Line 267:
Dependent foreign key constraint violation in a referential integrity
constraint. dbname = 'cfdbtst20', table name = 'cfdbtst20..schemadef',
constraint name = 'table_schemadef'.
Command has been aborted.
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_schemadef', Line 275:
sp_delete_schemadef() Data collision detected
Msg 547, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_schemadef', Line 277:
Dependent foreign key constraint violation in a referential integrity
constraint. dbname = 'cfdbtst20', table name = 'cfdbtst20..scopedef',
constraint name = 'schemadef_super'.
Command has been aborted.
Msg 17004, Level 16, State 1:
Server 'TUNDRA', Procedure 'sp_delete_schemadef', Line 286:
sp_delete_schemadef() Data collision detected
(return status = -6)
The Sybase ASE ClearDep support has been coded and installs cleanly to the database, but as Sybase does not successfully run the "Replace Complex Objects" test due to an unreported/undetected sp_delete_schemadef() problem, it fails the testing.
The ClearDep support for MySQL has passed testing via CFDbTest 2.0's "Replace Complex Objects" test with Relation.Narrowed specifications present.
The ClearDep support for DB/2 LUW has passed testing via CFDbTest 2.0's "Replace Complex Objects" test with Relation.Narrowed specifications present.
The ClearDep support for Oracle has passed testing via CFDbTest 2.0's "Replace Complex Objects" test with Relation.Narrowed specifications present.
The ClearDep support for PostgreSQL has passed testing via CFDbTest 2.0's "Replace Complex Objects" test with Relation.Narrowed specifications present.
The ClearDep objects are now properly merged when a schema is referenced/imported by a Business Application Model as well.
I modified the CFDbTest 2.0 model to specify Relation.Narrowed with supporting ClearDep requirements from the Tenant and the SchemaDef. Note that you don't want to auto-clear the relationship when deleting a Table, because you *want* integrity violation checks to occur during edits of a model.
Once the build is done, I'll modify the CFDbTest complex object tests to make use of the new Narrowed specification, and verify that the "Replace Complex Objects" test fails due to integrity violations.
After that I'll start working on the PostgreSQL rule support for ClearDep.
The CFBam 2.0 specification has also been modified to define ClearDep requirements.
The DelDeps have been corrected for all of the databases. They all install cleanly now.
All of the databases pass the sub-object reference clearing tests (CFDbTest 2.0's "Replace Complex Objects") except for SAP/Sybase ASE, which already had problems with the test in question before the code changes were made.
Next up: Creating a test case in CFDbTest 2.0 for the ClearDep specifications, and coding the initial PostgreSQL template of support for that new feature. Once all of the databases have been tested with that new feature, Service Pack 4 will be released.
Once again, there were problems with the DelDep code. I cannot fathom how I could have possibly failed to examine the error logs for every single database. Ah well. It's getting tested now. As with MySQL, the sub-object reference clearing worked on the first run after the DelDep issues were fixed.
The SAP/Sybase ASE stored procedures installed successfully on the first attempt, but as previously noted, SAP/Sybase ASE was not passing the "Replace Complex Objects" test already so it doesn't pass the sub-object reference clearing tests.
As with DB/2 LUW, I found errors in the DelDep code. The sub-object reference clearing worked on the first test run once that was fixed.
I don't get it. I *tested* the code. It *worked*. Where are these errors creeping in from? Was I so tired I missed an error message when doing a "grep ERROR *.log"?
Along the way through the DB/2 LUW database installation, I discovered a couple of syntax errors and typos for the DelDeps, and an erroneous DelDep from SecForm to SecApp, which caused a circular/recursive call that DB/2 LUW rejected. I've corrected the CFSecurity 2.0 model accordingly, but I'll have to remanufacture the database layers for all the projects when I'm doing the next release; I'd hoped to get away with just remanufacturing the Java layers for most of them. Sadly, such is not to be.
Next up: MySQL testing. Note that while I've done a quick migration of the PostgreSQL rules for sub-object reference clearing for all of the databases, only PostgreSQL and DB/2 LUW have been tested so far. I don't even know if the manufactured database schema scripts will install without errors, much less function correctly. I do know that Sybase ASE was failing the "Replace Complex Objects" test already, so it won't pass this new test, either.
The ClearDep entries are essentially clones of the DelDep support, but they'll be used differently in the rules. Specifically, the last entry in the ClearDepChain isn't used to identify a "ToTable", but to identify the relationship of the containing table that needs to be cleared.
There were also errors in the rules for the clearing of sub-object references from tables in the PostgreSQL code. While the ClearDep support is a superset of the sub-object reference clearing, the ClearDep support has to be explicitly stated in the BAM, while the sub-object reference clearing is *inferred* from the model automatically and therefore should be relied upon wherever possible to keep things simple for the BAM author.
I will therefore continue on with applying the PostgreSQL rule changes to the other databases rather than working on the use of the ClearDep specifications at this time. The PostgreSQL version of that code passes the CFDbTest 2.0 "Replace Complex Objects" test.
I just realized I've got another delete case to allow for: the "Narrowed" relationships in a CFBam. Those won't be detected by the sub-object clearing. In fact, the only way I can think of to specify that such relationships need to be cleared is to add something similar to the DelDepChain to the engine, maybe a ClearDepChain, except that instead of the last dependency referencing a ToTable to be deleted, it identifies the relationship of the narrowest table identified by the chain so far which needs to be cleared.
Thus you'd have something like (for a CFBam Tenant):
<ClearDep Name="ClearTableRelationNarrows" ClearDepChain="TenantSchema.SchemaDefTable.TableRelation.NarrowedRelation" />
I'll have to enhance the CFDbTest model with test data as well.
The CFDbTest 2.0 model has been updated to incorporate a test case for the dependency on child object definitions (by adding a "PIndex" reference to the Table object that references the component TableIndex specifications.)
CFBam 2.0 has been updated to specify a PopDepChain for the PrimaryIndex relationship it already specified. The PrimaryIndex relationship already could have been used as a test case for the new code, were it not for the fact that it takes so long to manufacture CFBam.
There have been no changes to the rules or engine with this release; its sole purpose is to produce a *failing* case of CFDbTest's "Replace Complex Objects" test.
The verbs Relation.IsSubObjectLookup and Table.HasSubObjectLookup have been added.
IsSubObjectLookup determines if the current relationship is a Lookup referencing a table that is targeted by a Components, Children, or Details relationship inherited by the FromTable of the relationship. If a relationship IsSubObjectLookup, the relationship will have to be cleared by the stored procedure delete code for the object before invoking the deletes specified by the DelDeps of the table object. Otherwise a relationship loop could exist that would prevent deletion of the sub-objects.
HasSubObjectLookup determines if the current table owns a relationship where IsSubObjectLookup is true. It does *not* check the inheritance hierarchy of the table object, because this verb is used to determine whether the current table being evaluated by the rules needs to be updated to clear any sub-object lookups. The rules themselves have to chase the inheritance hierarchy of the table object being deleted.
The rules have not been updated to use these new verbs yet.
The cached sub-objects are now forgotten when an object is deleted, keeping the cache in sync with the cascading deletes that are performed by the stored procedures. This is a low-priority fix, as it only corrects a memory leak rather than altering the essential functionality of the code.
Note that forget() methods of the objects and the table objects have been overloaded to accept an optional boolean "forgetSubObjects" parameter, which is only set to true by the table object's "delete" implementation. As this is isolated to "internal use only" code, no changes should be required for application level custom code.
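A rough model of the overloaded forget() is sketched below. The names are hypothetical and the real cache tracks typed object references rather than bare keys; the point is that the boolean variant cascades through the cached containment hierarchy, mirroring the database's cascading deletes, while the no-argument overload keeps the old single-instance behaviour.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical model of the overloaded forget() with cascading sub-object cleanup.
public class ObjCache {
    final Map<Integer, Integer> containerOf = new HashMap<>(); // child id -> container id
    final Set<Integer> cached = new HashSet<>();

    // Original signature: unchanged behaviour for application-level code.
    public void forget(int id) { forget(id, false); }

    // New overload: only the table object's delete implementation passes true.
    public void forget(int id, boolean forgetSubObjects) {
        if (forgetSubObjects)
            for (Integer child : new ArrayList<>(containerOf.keySet())) {
                Integer container = containerOf.get(child);
                if (container != null && container == id) forget(child, true);
            }
        cached.remove(id);
        containerOf.remove(id);
    }
}
```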
The next thing I want to work on is properly flushing the cache after a delete of an object. Right now, only the object itself is forgotten by the cache. I want to rework the "forget" code such that it checks for sub-objects of the forgotten object in the cache and forgets about those instances as well. That should take care of synchronizing with the deletion of sub-objects by the stored procedures in the database, and will make it easier to code the eventual syncsets between client nodes. (What, you think I was going to leave everything in pure client-server mode forever? Perish the thought -- I've done plenty of work on distributed systems synchronization over the years; I just haven't gotten that far with the code yet.)
After that, I think I need to do some changes to the engine and to the delete code. In the engine, I need to add a Table verb "ReferencesSubObjects" that checks all the relationships (non-inherited) of an object/table for Lookup relationships that reference sub-objects (Components, Children, or Details.) If this verb returns "yes", then the delete stored proc needs to clear those references after it's audited the delete record to the history. I'll also need a verb for the Relationships that checks "IsSubObjectReference" so I know which relationships to clear in the delete code. When such relationships exist, they are always nullable because the Container/Parent/Master has to be created in the database before the Components/Children/Details, with the reference being set later by an Update of the object (e.g. the PrimaryIndex reference of a Table.) The only reason I can delete a CFDbTest 2.0 Table object is that it's incomplete and doesn't actually make a PrimaryIndex reference like a CFBam 2.0 Table does (this was intentional because I knew I couldn't handle it yet.)
Once that's done, I'll give it a hard think as to whether I want to release those two changes as a Service Pack 4, or whether I want to include the Chain implementation in Service Pack 4. I have to admit I'm doing my best to postpone working on Chains because I know how difficult they are and how much work they're going to be. I'm thinking I'll probably release the Chain support as Service Pack 5 instead of delaying Service Pack 4 -- there are too many supportable cases where the two changes I've outlined would fix problems that I *know* exist in the manufactured code. I'd rather people get those fixes early than delay for a few weeks or months to implement the Chains. (Yes, I really do expect Chain support to take that long. It's an ugly beast to deal with when coding manually, never mind trying to automate the process.)
Service Pack 3 deals with refreshing the data as cached by the Obj layer of the manufactured code. All of the Obj.getRelation(), Obj.read(), and TableObj.read() methods have been overloaded with a new signature taking a "forceRead" boolean parameter. If this parameter is true, the cached information is refreshed from the database backing store. If false, the data is only read from the backing store if it's not already in the cache.
The original methods that did not have this extra argument have been modified to simply call the new variations with a forceRead value of false.
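The overload pattern might look like this sketch (hypothetical names; the real methods return manufactured Obj instances rather than strings, and the backing store is the JDBC layer):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical model of the forceRead overload: true refreshes the cache from
// the backing store; the legacy signature simply delegates with false.
public class CachedReader {
    final Map<Integer, String> cache = new HashMap<>();
    final Function<Integer, String> backingStore;

    public CachedReader(Function<Integer, String> backingStore) {
        this.backingStore = backingStore;
    }

    public String read(int id) { return read(id, false); } // original signature

    public String read(int id, boolean forceRead) {
        if (forceRead || !cache.containsKey(id))
            cache.put(id, backingStore.apply(id)); // refresh from the database
        return cache.get(id);
    }
}
```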
The Swing prototype GUI has been updated so that reads are forced when you open a Finder window or a ViewEdit window, including all component objects displayed by the element tab of the ViewEdit. In the unlikely event that an object is deleted by another user, the ViewEdit window will come up with blank fields. In that case, you should close the ViewEdit and the window that was used to bring it up, then re-open that launching window. The parent of the ViewEdit will refresh itself from the database when launched, and you'll no longer have visible references to the stale data, although the instance will probably still exist in the cache.
Stale cache data can also occur when deleting complex objects: the hierarchy chasing is done entirely at the database end, so the sub-objects aren't forgotten by the client-side cache when the main container object is deleted, although the reference to the container itself will be forgotten after the delete. However, with the forced read of sub-elements when displaying other objects that used to refer to the stale data, those objects are refreshed and the stale references are not used. So while this results in some memory leakage, the user interface should behave properly.
Service Pack 2B upgrades the support for SAP/Sybase ASE from 15.7 to 16.0. The testing of the upgrade exposed an error in the CFSecurity 2.0 model, which has been corrected.
The database scripts for all of the projects are being remanufactured to capture the CFSecurity correction, but there will be no changes to the Java or JDBC code as a result of the correction -- it was a DelDep error that only affects the stored procedures.
SAP/Sybase ASE 16.0 still exhibits problems with checking Delete permissions on object tables, and does not properly execute the "Replace Complex Objects" test in CFDbTest 2.0, indicating that the sp_delete procedures are not being executed properly. However, it does not return any errors to the JDBC layer so I'm at a loss as to how to debug the problem.
With the shift to ASE 16.0 from ASE 15.7, the Sybase rules had to be changed to leverage case-sensitive naming. There were only a few cases in the DelDep rules that weren't already using case-sensitive names, fortunately.
Unfortunately, installing to ASE 16.0 also highlighted another error in the CFSecurity 2.0 model, which has been corrected. All of the projects will have to have their database layers remanufactured to correct the error. While they accepted the erroneous scripts when installing CFDbTest 2.0, it is entirely possible that they would generate runtime errors trying to process those invalid join statements when deleting a Cluster.
The most regrettable thing is that the ASE problems with Delete permissions and with the sp_delete() not working properly for the Replace Complex Objects test remain. As I've mentioned before, the problem is not with the Transact-SQL code -- migrated to Microsoft SQL Server, virtually identical code runs just fine (there are some differences in the specific syntax for the cursor fetch loops, but that's about the only difference between the two.)
The corrections to SAP/Sybase ASE have been propagated to MS SQL Server as well.
Service Pack 2A does not modify any of the rules used by SP2, but it corrects some defects in the CFBam model that were causing its build to fail, and removes CFGui from the support list. Aside from the fact that I didn't see any real future for CFGui, the inclusion of that model by CFUniverse was causing the CFUniverse build to blow Java-imposed limits on the number of constants in any one class (the main X(ml)Msg Request parser.)
I really should have just waited for the builds to finish and fixed the problems before releasing SP2, but such is life.
There turned out to be build errors in the CFBam and CFUniverse models that I hadn't caught. Seeing as I have to remanufacture those projects, I also made a couple of other changes, such as tying the Chains into the DelDeps.
The CFGui 2.0 project has been removed from the support list and will no longer be manufactured. Nor is it included by CFUniverse 2.0 any more. I'd delete the GIT repository for the project, but I can't see any way of doing so. Apparently it's a create-and-update-only service, which is fine by me.
I'll be remanufacturing CFUniverse 2.0 without CFGui, rebuilding, and repackaging it over the next couple of days. CFUniverse had to shrink anyhow -- it's blowing Java limits on the number of constants supported by any one class, and I've already "trimmed" the class in question as much as I can to deal with previous build issues caused by blowing Java limits.
Service Pack 2 addresses a critical area of functionality: being able to control the order in which objects are deleted. This is particularly important for objects which maintain complex relationships throughout their hierarchy, such as defined by CFBam 2.0 and CFDbTest 2.0.
The way I've chosen to address the issue is to introduce the concept of a "DelDep" element of a table definition (they can be specified as part of TableRelations virtual elements in the model as well, allowing you to specify them at the end of a model when all the relationships have already been defined.) Other than a Name, a DelDep has a "DelDepChain" attribute which allows you to specify up to four relationships to be "chased" in order to resolve the objects to be deleted. In practice, this turns into up to a 5-way table join to select the primary keys of the deleted objects, which are iterated through invoking their stored procedures for deletion in order to ensure that the history of the objects is maintained.
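To make the chain-to-join translation concrete, here is a minimal Java sketch of how a DelDepChain of relationship "hops" could resolve into a key-selection join. The class, method, table, and column names are hypothetical stand-ins for illustration, not the actual manufactured code; the real engine emits stored procedures rather than ad-hoc SQL.

```java
import java.util.List;

// Hypothetical sketch: resolving a DelDepChain into the join that selects
// the primary keys of the rows to be deleted.
public class DelDepSketch {

    /** One hop in the chain: join child.fkColumn to the parent's primary key. */
    public record Hop(String childTable, String fkColumn) {}

    /**
     * Builds the key-selection query for a chain of up to four hops rooted at
     * rootTable.  With n hops this is an (n+1)-way join, so a full four-hop
     * chain yields the 5-way join described above.
     */
    public static String buildKeySelect(String rootTable, String rootPk, List<Hop> chain) {
        if (chain.isEmpty() || chain.size() > 4) {
            throw new IllegalArgumentException("DelDepChain must name 1..4 relationships");
        }
        StringBuilder sql = new StringBuilder(
            "SELECT t" + chain.size() + ".Id FROM " + rootTable + " t0");
        String prevPk = rootPk;
        for (int i = 0; i < chain.size(); i++) {
            Hop hop = chain.get(i);
            sql.append(" JOIN ").append(hop.childTable()).append(" t").append(i + 1)
               .append(" ON t").append(i + 1).append('.').append(hop.fkColumn())
               .append(" = t").append(i).append('.').append(prevPk);
            prevPk = "Id"; // assume each joined table exposes an Id primary key
        }
        return sql.toString();
    }
}
```

The keys selected by such a query would then be iterated, invoking the target table's delete-by-primary-key stored procedure for each row so that object history is preserved.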
At some point in the future I'll probably add an explicit check as to whether the deletion target HasHistory, and perform an optimal delete-join instead of iterating, but for now most of the data I worry about has history so I'm dealing with that as being the general case. It's slower, but it *does* work.
Service Pack 2 also adds the concept of a PopDepChain to relationship specifications, which specifies up to 4 relationships to be chased in order to locate the selection set used by the Picker window for a widget that expresses the relationship in the GUI. There are no database or object side changes for PopDepChains -- they only affect the prototype GUI. But I needed them in order to properly do things like present the user with a list of index columns to pick from for the ToIndex of a Relation while creating the RelationCol elements. The variation on the concept I tried in Service Pack 1 was only able to deal with relationships that chained up the inheritance hierarchy of the object; PopDepChain is a more general solution and therefore more powerful.
There are also a number of errors corrected for the database creation scripts. It turns out I was not looking for errors in the Microsoft SQL Server and Sybase ASE logs correctly, so there were a number of unresolved errors in Service Pack 1. Both of those databases now install cleanly for CFDbTest 2.0. In addition, I made a couple of boo-boos while editing the .bash script rules for PostgreSQL with Service Pack 1 that I only discovered recently while doing some regression testing. While the errors were easily corrected manually, it's far better to download this latest release and have the scripts manufactured *correctly* in the first place.
The PopDeps now work for all models, including CFUniverse.
The DelDep and PopDep imports have been debugged and now function as they were designed to do. The models have all been refreshed to implement the DelDep specifications such that you should no longer rely on implicit deletion of components, children, or details in your models -- you should explicitly use DelDeps to control the sequence of deletion as appropriate for your model.
This is a candidate for Service Pack 2. It depends on how testing goes as to what changes might be needed before SP2 is released. But the bulk of the work has been accomplished.
I've realized that switching wholesale to DelDeps won't fix the problem. The CRM model, for example, is imported before the Accounting model is defined, so the rules for DelDeps would be in place for the CRM objects under the Tenant first. As a result, the Tenant deletion would try to delete the CRM objects that Accounting relies on *first*, resulting in tracebacks.
I had to modify both the engine and the models to reverse the order in which DelDep rules are specified and interpreted. So when manufacturing code, the Engine now navigates through the *newest* DelDeps first, and the models need to specify the *top* objects before the dependent ones, so that they process correctly when evaluated by the engine.
I have yet to modify the remaining models to support DelDeps. I won't be ready to release Service Pack 2 until that's done and the code has all been refreshed, built cleanly, and ready for check-in from the Windows laptop after the manufacturing runs. But I'm getting close -- I can see light at the end of the tunnel now. I sure hope it's not a train. :P
Yeah. I'm pretty sure that will fix the problem.
The CFBam 2.0 and CFAcc 2.0 models have been reworked to rely on DelDeps and PopDep specifications. However, while working on the CFAcc 2.0 model I realized I may have to switch wholesale to DelDep specifications, because in order to delete an Account you need to be able to delete all the references to it, which means that during a Tenant deletion it needs to obey the sub-element deletion rules for an Account, whereas currently that wouldn't happen and it would fail to delete the Tenant.
I *might* take the time to make those changes before Service Pack 2 is released; for now I'm doing a test manufacturing of CFAcc, CFGCash, CFBam, and CFUniverse as they are all affected by the changes I made today. Note that full remanufacturings are required because everything changes right down to the database when using DelDeps.
There were more fixes required to correct late-night mis-edits I'd made to the PostgreSQL rule base, thinking I was editing MySQL or SQL Server files. :P
I also updated the Complex Object tests to use the new attributes I added. I had to remove the RelationCol specifications because those don't load properly for some reason and I don't feel like debugging it right now.
Apparently I got overzealous replacing dollar signs with percent signs in the rules while working on DOS script support for Windows. Sorry 'bout that.
The CFDbTest 2.0 model has been enhanced with a full set of inter-object relationships, DelDeps, and PopDeps as the complex schema objects will require for the CFBam 2.0 model. This was done with CFDbTest 2.0 first as a test. Now that I know I can make all the changes I need to, I'll proceed with modifying the CFBam 2.0 model. I didn't want to tackle that one straight off because of how long it takes to manufacture it.
SQL Server shares the problem with Delete$TableName$ group membership checks, but passes all other tests for CFDbTest 2.0. It does take quite a long time to install the database -- about as long as it takes to manufacture it.
But to me, it indicates that the problem with the Sybase ASE instance is the Sybase back-end server, not the code's logic. There are newer releases of Sybase ASE out commercially, but I don't believe they've posted a developer's release of the latest installers for Sybase ASE yet.
Now to flesh out the other 2.0 models to use the DelDep and PopDep specifications. While the core objects need to be free to have additional components, children, and details dangling from them and remove them via the Cascade support, the internal objects of each subject area should specify their dependencies explicitly.
Just don't add DelDeps and PopDeps to the core objects defined by CFSecurity, CFInternet, or CFCrm.
The Sybase ASE changes have been made for the crsp_delete_$dbtablename$.isql scripts, but at run time it seems the deletes aren't doing their job the same way that they do on other platforms. I'm not sure I can remember how to debug a stored procedure in Sybase, so I'm just going to leave it as is for now.
At least I fixed the other installation errors I hadn't noticed before spotting the error messages during a detailed scan of the .log file.
The CFDbTest 2.0 evaluation of the "Replace Complex Objects" test FAILS.
Next up: Microsoft SQL Server gets bootstrapped with a copy of the Sybase ASE installation changes. There usually aren't any significant differences, as far as I can recall, because both rely on TransactSQL interpreters and shells.
The MySQL crsp_delete_$dbtablename$.mysql scripts now support DelDep the same way that PostgreSQL does, except that it has to rely on local variables instead of a record cursor construct. The CFDbTest 2.0 evaluation of the "Replace Complex Objects" test now passes for MySQL.
Next up: Sybase ASE and then Microsoft SQL Server gets bootstrapped with a copy of the ASE installation changes. There usually aren't any significant differences, as far as I can recall, because both rely on TransactSQL interpreters and shells.
Oracle crdl_$dbtablename$.plsql scripts now support DelDep the same way that PostgreSQL does. The CFDbTest 2.0 evaluation of the "Replace Complex Objects" test now passes for Oracle.
The code was not merging the DelDeps and PopDeps when applying a resolved schema reference. It's been coded, but is completely untested. It should work, though. I probably won't know until I enhance the CFBam 2.0 model with DelDeps and PopDeps and end up importing that to CFUniverse 2.0. It's the only project where I really *must* have those constructs working to proceed much further.
Remember, the overall goal is 2.1. At least for the foreseeable future.
DB/2 LUW sp_delete_$dbtablename$.sql scripts now support DelDep the same way that PostgreSQL does. The "Replace Complex Objects" test from CFDbTest 2.0 now passes for DB/2 LUW.
The "Replace Complex Objects" test for CFDbTest 2.0 works again for PostgreSQL. Now that the initial code for implementing deletion dependencies has been proven, the changes can be propagated to the other databases.
DelDep specifications are *not* inherited. You need to explicitly state them for each table in a hierarchy if you use them. This allows subclasses to "tune" the cascading deletes that are produced. Also, if a table specifies DelDep specifications, the old cascading delete code for component/detail/child objects is *not* produced. So if you are using DelDep specifications to circumvent a problem with the default cascading delete behaviour, you must replicate and enhance the DelDep specifications for every subclass table of the table which was originally causing a problem.
It is also worth noting that the DelDep implementation is not as efficient as cascading deletes, because it invokes the delete-by-primary-key stored procedure for the target of the dependency, which adds an extra stored procedure call compared to the way cascading deletes are implemented. On the other hand, cascading deletes repeatedly invoke the whole delete tree of stored procedures, so there could be cases where it's actually more efficient to specify DelDeps.
I hope to avoid implementing DelDeps for the three core models (CFSecurity, CFInternet, and CFCrm) because if any of the tables in those models specify DelDeps, it would force importers of those definitions to specify additional DelDeps for "hook" objects like the Cluster and Tenant.
Samples of the DelPop usage have been added to the CFDbTest 2.0 model and used to exercise the SAX Parser. Unfortunately there were a couple of minor defects in the code that were causing crashes, but it loads the specifications correctly now.
You can't expect further releases until I manage to get the PostgreSQL stored procedures for deletes reworked to rely on the DelPop specifications. In doing so, I may have to modify the specifications of the three core models (CFSecurity, CFInternet, and CFCrm) such that the DelPop specifications take over for the implied deletions entirely. I am leaning in that general direction, but first I want to get some core coding and testing done to see if I can get the "Replace Complex Objects" test working without resorting to extreme model changes.
Then again, maybe I *will* just "bite the bullet" and get 'er done before the next release. You'll probably know about the same time I do. :P
The population of the Swing prototype GUI Picker windows has been reworked and tested with CFDbTest 2.0 to rely on the new Relation.PopDepChain specifications. In practice, this means that you can now define relationships and populations such that, for example, a list of appropriate Province/State entries is presented after a Country has been selected, complete with the requirement to select a Country before allowing the Picker to be displayed for the Province/State. See the configuration of the "Relation" object in CFDbTest 2.0 for examples.
In addition, the selection of a Picker value now populates both the invoking window's CFJReference widget and the focused edit object, so that the values used to present the sub-selection Picker window are properly dependent on the most recently edited values rather than the value(s) which were originally present in the focused object. So in the Country-Province example, if you change the Country, the list of Provinces will be changed when you next select a Province Picker. However, the values are not forced to depend on each other, so changing the Country does *not* automatically clear the Province or select any sort of default that might be appropriate. It's up to the user to make sure they select values for both fields.
Maybe some day I'll deal with niceties like automatically clearing dependent values, but I'm not even sure how I could go about chasing the dependencies with the current BAM data. It certainly wouldn't be easy -- I'd need some way of identifying the selecting value reference as affecting a particular secondary reference based on the PopDepChain specified for the secondary reference. While mathematically possible, wiring such a construct into the GEL syntax would be "challenging" to say the least.
The attributes Table.DelDepChain and Relation.PopDepChain have been added to the 1.11 BAM specification, and the obsolete PickerPop attributes removed from the Relation specifications. The engine itself has been tested to load the new specifications through an updated version of the CFDbTest 2.0 model, but the rules have not been updated to implement the code for these new attributes yet.
In fact, if you try manufacturing CFDbTest 2.0 at this point, you will get errors in the resulting code (if not tracebacks during the manufacturing process) because the rules still rely on the removed PickerPop GEL verbs.
Do not download and use this release. It is a snapshot of work in progress and is intended for internal use only.
Service Pack 1 was pretty much a seat-of-the-pants development of new functionality with additional testing of that functionality. I'd always had the goal of using 1.11 to create some sort of GUI for CFBam 2.0, and this was a big step forward on the route to that goal.
Service Pack 2 will follow along in the same vein, enhancing the Swing prototype GUI until it is capable of properly editing a CFBam 2.0 specification model, including support for Chains that are used to enable user-determined ordering of objects in the sub-element lists.
There are a few constructs I'm going to need to add to 1.11 in order to support that functionality.
The 1.11 model needs to enhance the Relation specification with a RelationDependency, which is an optional chain of relationships extending the PickerPopRelation behaviour by allowing the specification of a relation chain. For example, the CFBam 2.0 specification of a Relation has to include a ToTable and a ToIndex. The ToTable can be specified by a PickerPopRelation -- it's simply a selection of the Table instances owned by a SchemaDef, so it identifies the relationship of the SchemaDef which identifies the tables owned by the SchemaDef. But the ToIndex specification has to depend on the ToTable specification, so it needs to identify the ToTable of the Relation first, and then the relation of the TableDef that identifies an Index of the Table -- i.e. a chain or join of relationships, which has to be resolved dynamically at runtime and must reject its own resolution if any of the relations in the chain have not been established.
This construct provides a superset of the PickerPop capabilities, so those constructs are being removed from the 1.11 BAM.
A second enhancement that is required is similar. I need to specify a DeleteDependency of a Table which follows a chain of relationships similar to the RelationDependency, but this time with the goal of identifying sub-objects of a Table n levels down which have to be deleted before an instance of the Table itself can be. Instead of enhancing the GUI, this will require an enhancement of the stored procedures used to delete Table instances. I don't expect there to be any changes to the object interfaces as a result of this enhancement -- I think it can be handled entirely through the stored procedures. That will enable the deletion of generic complex objects, which currently can fail on cases like the Relation of a Table identifying an Index, preventing the deletion of the Index before the Relations are broken by the delete process in the current CFDbTest 2.0 "Replace Complex Objects" test. Once this issue is addressed, the "Replace Complex Objects" test will pass again.
At this point, the only other piece of functionality I think I'll need is to code the rules around the Chain specifications that I added to a Table a long time ago. The Chain specifications will add code to identify the head and tail of the chain as the Prev/Next links owned by the Container of the object which have a null reference. There is a good chance I'll have to enhance the model specifications to properly establish those virtual relationships. The simpler part of implementing Chains is automatically establishing the Prev/Next relationships in the SAX XML parser code; I've done something similar in the hand-coded 1.11 SAX parser, and the code required isn't very complex, though it does update the Prev object during the load process.
Deletes will require significant enhancements once Chains are established by that "phase 1" implementation of Chain support. If an individual object is being deleted, the chain has to be broken and mended between the Prev/Next objects referenced by the deleted object, so that you can delete the instance using a single operation. This will result in stale references in the cache, so I may need to create a new delete result set that returns the Prev/Next objects automatically instead of just a success flag, such that the client can automatically update the stale references. That should suffice for deleting an object from the CFBam 2.0 GUI prototype.
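The break-and-mend step above can be sketched in a few lines of Java. The types here (ChainNode, the id map, the Mended result) are hypothetical stand-ins for the real cached objects, but they show the relink and the echoed Prev/Next neighbours that would let a client refresh its stale cache entries:

```java
import java.util.Map;

// Sketch of deleting one element of a Prev/Next chain: remove the victim,
// mend the chain between its neighbours, and report the mended neighbours.
public class ChainDelete {

    /** Stand-in for a cached chained object; a null id means "no neighbour". */
    public static class ChainNode {
        public Integer prevId, nextId;
        public ChainNode(Integer prevId, Integer nextId) {
            this.prevId = prevId;
            this.nextId = nextId;
        }
    }

    /** Echoes the mended neighbours so a client cache can refresh its stale copies. */
    public record Mended(Integer prevId, Integer nextId) {}

    public static Mended delete(Map<Integer, ChainNode> byId, int id) {
        ChainNode victim = byId.remove(id);
        if (victim == null) {
            throw new IllegalArgumentException("no such node: " + id);
        }
        if (victim.prevId != null) {
            byId.get(victim.prevId).nextId = victim.nextId; // mend forward link
        }
        if (victim.nextId != null) {
            byId.get(victim.nextId).prevId = victim.prevId; // mend backward link
        }
        return new Mended(victim.prevId, victim.nextId);
    }
}
```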
When deleting by the index that identifies the container of a chained object, the delete-by-index will act a little differently to reduce the number of updates performed in the database and speed up the process a little. What that process will do is clear the Prev/Next links of the sub-objects with a multi-record update, and then invoke the individual deletes. So if you have n sub-objects, it will require n+2 database operations to perform the delete, whereas if you were to rely on repeatedly deleting the head (or tail) element of the chain iteratively, you would be performing (n*2)-1 operations.
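The two operation counts can be written down as simple arithmetic; this sketch assumes the two extra operations in the bulk case are the key-selection read plus the single multi-row link-clearing update, which is my reading of the description above:

```java
// Illustrative cost model for the two chained-delete strategies.
public class ChainDeleteCost {
    /** Bulk delete-by-index: 1 key read + 1 multi-row link-clearing
     *  update + n individual deletes = n + 2 operations (assumed split). */
    public static int bulkOps(int n) { return n + 2; }

    /** Iteratively deleting the head: n deletes, each but the last also
     *  mending the new head's Prev link, hence n + (n - 1) = 2n - 1. */
    public static int iterativeOps(int n) { return (n * 2) - 1; }
}
```

For any container with four or more sub-objects the bulk form comes out ahead, and the gap widens linearly with n.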
Ideally I'd like to enhance the implementation rules of the DeleteDependency specifications to rely on those chained delete operations, but to perform them on entire groups of containers at a time. I haven't given much thought to how I'd do that, but if I can, it should speed up the delete of complex objects by a rather large amount -- probably in the 90% reduction range for complex objects with any significant number of sub-elements.
Note that for Service Pack 2, I'd only be using CFDbTest 2.0 enhancements using these new constructs to exercise the resulting code. It takes too much time to manufacture CFBam 2.0 to use it for testing. Ideally near the end of Service Pack 2 development I'll use those new enhancements to flesh out the CFBam 2.0 model, producing a GUI that can be used to fully edit a business application model specification. Once that capability is realized, I'll consider Service Pack 2 to be ready for release.
Which brings me to Service Pack 3, which will not see significant enhancements to the 1.11 specifications or rules, but rather to the 2.0 models. I need to bring up the 2.0 models to the point where I can *use* them with CFBam 2.0 to code a replacement for 1.11. I'll also want to code a migration process for bringing the 1.11 rules forward to the new 2.0 specification syntax, rather than trying to hand-migrate the rules.
After that process is complete, I'll work on customizing the CFBam 2.0 GUI to provide support for importing and exporting models from the database, invoking the manufacturing engine, and creating log files of the resulting manufacturing process.
My final task for Service Pack 3 will be to customize the CFCore 2.0 code such that the CFBam 2.0 rule base can produce CFBam 2.1, which will run on CFCore 2.0 instead of CFCore 1.11 as CFBam 2.0 will.
That means that Service Pack 3 will be the end of the 1.11 development and enhancement process, capable of producing generic complex object applications; a goal which I haven't quite achieved to date.
When Service Pack 3 is released, I'll be shifting to the development and enhancement of the 2.0 code base with the goal of producing 2.1, which will run on CFCore 2.0/2.1 instead of 1.11 as the 2.0 models currently do.
When 2.0 is ready for release, I'll have come full circle to manufacturing a full engine from the 1.11 rule base, whereas the 1.10 release only manufactured the *core* of the 1.11 engine, leaving me with over two years of custom coding to be done.
Initially 2.0 will be relying on the manufactured GUI for its implementation. Another goal I have for 2.1 is to create a greatly enhanced custom *graphical* GUI, potentially shifting from Swing to some other technology. Perhaps Android. I'd love to be able to use a pen-oriented device such as Samsung's tablets to edit a BAM with a sketch-and-scribble interface. I've wanted to have such a toy for a long time, suitable for "taking notes" in business design meetings and capturing the details on the fly.
Hey, a guy can dream, can't he? Maybe by then I'll be able to *afford* a Samsung tablet. My budget is kind of tight for such things right now, but I figure it'll be at *least* a year or two before I'm at that point so I'll be able to save up. :)
With the changes described below, Service Pack 1 is finally ready for release and use by the general design and programming communities.
The Swing GUI prototype has been tested with all six databases. There were a few problems with the MySQL code, but the other databases were working as of 1.11.12414.
For starters, I needed to create .bat scripts to install the database to a MySQL 5.6 instance under Windows 7. That was expected, though, as many of the other databases won't run scripts from Cygwin64 either.
Next up, the sp_create stored procedures weren't populating the audit columns on insert. I was surprised to find this as a problem, as I *thought* I had tested the code under MySQL 5.5 on Linux. Apparently my testing process wasn't very good.
The last problem was with the read-by-index stored procedures, or rather with the JDBC code that was invoking them. There were some missing commas in the SQL statements, which resulted in an argument underflow when invoking the read procedures. Not all read procedures were affected by this bug. In particular, it apparently did not affect any of the CFDbTest 2.0 database tests as I was able to run those successfully (or as close to it as MySQL can get), but it did crop up when trying to browse the data through the CFDbTest 2.0 Swing GUI prototype for MySQL.
The Swing prototype GUI has been modified to rely entirely on the interfaces that are expected to be provided by the JPanels and JInternalFrames created through the Factory interfaces. This allows you to either subclass the components that are produced by the manufacturing process, or replace them completely with custom code, according to your needs. The intent of the prototype is to provide a "leg up" on developing a GUI, while taking care of a lot of the "plumbing" required by Swing interfaces.
The prototype GUI was originally developed for the primary purpose of doing design walk-throughs with users who are likely to be unfamiliar with data models provided by ERD or UML diagrams, and to instead give them a visual representation of the data provided by a Business Application Model. This goal has been achieved, but then I realized that by applying the factory/interface metaphors, the code could be useful in the development of a full user interface, not just a demo/prototype.
Unfortunately when you run a "db2" command under the Cygwin bash shell, it launches a new process which exits after the command is done. As a result, there is no way to retain a connection within a bash shell under Windows.
The command shell provided by DB/2 LUW for Windows is DOS/BAT based, so I had to create a suite of .bat scripts in order to install the CFDbTest 2.0 database to my DB/2 LUW 10.1 instance on Windows 7. On the bright side, the scripts have been tested and verified prior to posting this update.
Unfortunately the install process flagged a remaining self-referential object (Domain) in the CFInternet 2.0 model that is included/imported by *every* 2.0 project, so they *all* need to be completely remanufactured, as the database models for them have changed.
The Microsoft SQL Server client also functions correctly with this release.
The Oracle client also functions correctly with this release.
Testing is going great. The Sybase ASE client functions correctly as well. That leaves MySQL to be tested, but I'll need to get the server installed on my Windows laptop and address any database instance creation issues before I can do that testing. I am now completely confident that I'll have SP1 out the door before the end of September.
I've only created launcher scripts for Windows/Cygwin, because those are the only ones *I* need at this point in time. You can easily create variants for Linux using the Windows/Cygwin scripts as templates; all that really changes is they use colon separators instead of semicolons for the classpath, and should be named .bash instead of having no suffix.
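As a sanity check on that separator difference, Java itself knows the right character for the running platform via File.pathSeparator. A tiny sketch (the helper class and method names are hypothetical, not part of the manufactured code):

```java
import java.io.File;
import java.util.List;

// Sketch of the only substantive difference between the Windows/Cygwin and
// Linux launcher scripts: the classpath separator character.
public class ClasspathJoin {
    /** Joins jar paths with an explicit separator, as a launcher script would. */
    public static String join(List<String> jars, String separator) {
        return String.join(separator, jars);
    }

    /** Lets Java supply the separator for the current platform. */
    public static String joinForThisPlatform(List<String> jars) {
        return join(jars, File.pathSeparator); // ";" on Windows, ":" elsewhere
    }
}
```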
Now for the hard part: installing and loading CFDbTest in all of the databases and testing the login/logout processing of the different databases. However, once that's done, I'll be ready to release SP1.
Found a bug. Squished it.
I had hoped to make the '411 release SP1, but then I realized I still haven't created the Swing prototype GUI CLIs for all of the databases, just PostgreSQL. The odds of that testing going without a hitch in one delta are virtually nil, particularly as I haven't even *installed* MySQL under Windows yet. Nor do I believe I've installed the DB/2 LUW database instances yet, though I do have the database *engine* installed.
However, this release does present the final set of functionality that is expected for SP1, once any outstanding issues with the additional CLIs are resolved. It does produce clean builds of all the 2.0 projects as well.
The Close action for the Finder windows has been coded, and the rules have been refactored to use ICFJRefreshCallback instead of the schema-specific versions that were used during development.
Now that all the windows have functioning close actions, they all are coded with setClosable( false ) to remove the window decoration that was allowing short-circuiting of the window code during initial development.
I'm going to have to do another run-through of the functionality (testing), but I do believe I'm just about ready to release SP1.
The refresh callback interface replaces the temporary schema-specific callback interface that was used during development of the callbacks in the prototype Swing GUI.
If an object has subclasses, it now displays the Object Kind in its set of columns for the Finder and List JPanels. That way you have some useful information about the type of data that is being presented by a given row, because you don't get all the detail attributes of the object in question, only those which are in common with the base class of the current table/object class.
When a View/Edit window was used to update an object, the invoking window's data lists weren't getting refreshed. That has been corrected.
The sizes of the prototype GUI windows have also been adjusted to be suitable for a 720p display (which is all I have on my laptop.) As such, they should be visible on a 1280x1024, 720p, 1600x1200, or 1080p monitor.
I've also discovered that in my zeal to test new functionality with CFDbTest 2.0, I've broken the "replace complex objects" test. There are Relation references to Index objects in that test data which prevent the deletion of the SchemaDef. I'm not going to worry about it, because I've already tested the functionality that test was originally supposed to verify, but be aware that it *will* fail.
This highlights the need for a specialized enhancement to the factory. I need some way to specify a chain of relationships as being part of a Table/Object specification, and chase through those relationship chains to delete the sub-component objects they specify. Thus you might do something like specify a relationship chain from a SchemaDef to a Table to a Relation, and have the stored procedure delete code for the SchemaDef automagically iterate through the Relations, deleting them before other data.
I also have yet to address the issue of Chain specifications, which will break pretty much every delete implementation I have because chains do forward/backward links of objects. I'd need to do something like clear the chain references to null automatically before deleting the objects, which means the delete record in the history will lose the detail data about the chain references. I can't see any way around that. I know I said I wouldn't implement chains until the 2.x series of MSS Code Factory, but I've the pieces in place to define them so I might well add that functionality before I do an SP1 release. Or maybe I'll put it off for an SP2. We shall see. You never know what is going to catch my interest. :)
The PostgreSQL "psql" command will no longer run under Cygwin (now Cygwin64) with Windows 7, so I created a suite of .bat files that do the same job as the bash scripts. Along the way, I discovered some spurious backslashes in the stored procedure creation scripts that were being absorbed by the Linux version of PostgreSQL, but rejected by the Windows version of 9.3. Those spurious backslashes have been removed, so the same .pgsql scripts should now run on both platforms.
Since the shift to git, apparently the .bat files haven't been getting manufactured with CRLF termination because all of the .xml rule files are mapped to use Unix/Linux termination (LF). This release corrects that problem as long as files are named with a lowercase .bat.
Force the termination of lines in .bat files to use CRLF.
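What "forcing CRLF" amounts to is emitting an explicit "\r\n" for .bat content instead of trusting the platform (or git-configured) line separator. A minimal sketch, with hypothetical helper names:

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of emitting .bat files with hard-coded CRLF termination,
// regardless of the platform's default line separator.
public class BatWriter {
    /** Joins lines with explicit CRLF, never the platform separator. */
    public static String withCrLf(List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(line).append("\r\n");
        }
        return sb.toString();
    }

    /** Writes a .bat file whose every line is CRLF-terminated. */
    public static void writeBat(Path file, List<String> lines) throws IOException {
        try (Writer w = Files.newBufferedWriter(file)) {
            w.write(withCrLf(lines));
        }
    }
}
```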
The Swing GUI prototype Picker windows now function correctly, setting the value of the Reference widget in the callbacks from the AttrJPanels. The changes also get persisted to the database correctly.
However, I'm not seeing the refreshed data getting propagated to the list of objects in the parent window's list box, despite the fact that the whole approach of MSS Code Factory is single-object, so the updated information *should* have propagated automatically.
Oh. Wait. I need to refresh the list box data via notification -- it doesn't know that its data has changed. That'll be an easy fix when I have time for it.
Unfortunately, that probably won't be today. I want to get a build of the currently manufactured code out the door so that I can finish getting off my flaky old Linux box. It's over 10 years old, and even though I replaced the CPU cooler last year, it now overheats constantly, causing random program crashes and data corruption as the CPU goes wonky.
So I'll be shifting my primary development and surfing over to my Windows 7 laptop. I've done some test builds and packaging with CFSecurity on that box using the latest version of Cygwin as a glue layer, and it looks like the only thing I really need to do is get PostgreSQL installed on it (and DB/2 LUW and MySQL, but those can wait until I pick up a portable hard drive to free up some space by shifting my media off the box). I've only got 60GB free on it, and these builds take up a *lot* of space.
So once I get the build refreshed, I won't be doing any development until I've got my entire tool chain working on the Windows box.
There was a major debug session of the login/logout processing today, which consumed a good many hours as it chased from one bug to the next, the first being that the server-side logout() wasn't getting invoked. That's been fixed by overloading the logout() method of the SchemaObj that wraps the DirectInvoker in the Swing CLI. The same should be done for the XMsg loaders, but I haven't gotten around to that yet.
There were problems with stale prepared statements, subtle problems with the disconnection processing, and a lot of frustration. But in the end, I win! I *always* beat the computer.
Tonight I need to do runs of Java, Java+XMsg, Swing, and the JDBC layers.
When using a database interface under JEE, you'll want to use a subclass that overloads "disconnect( boolean )" to just do the commit or rollback accordingly, clear the cnx to null, and invoke the releasePreparedStatements(). Otherwise you'll mess up the connections in the server pool.
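A hedged sketch of that pattern follows. The class and member names here (SchemaDb, JeeSchemaDb, Cnx) are stand-ins for the manufactured database interface classes, not the real types; the point is the shape of the override: commit or rollback, release the prepared statements, clear the reference, and never physically close the pooled connection.

```java
// Cnx is a minimal stand-in for java.sql.Connection so the sketch stays
// self-contained.
interface Cnx {
    void commit();
    void rollback();
    void close();
}

// Stand-in for the manufactured standalone database interface.
class SchemaDb {
    protected Cnx cnx;
    protected void releasePreparedStatements() { /* close cached statements */ }
    public void disconnect(boolean doCommit) {
        if (cnx == null) return;
        if (doCommit) cnx.commit(); else cnx.rollback();
        releasePreparedStatements();
        cnx.close();   // standalone code owns the connection, so it closes it
        cnx = null;
    }
}

// Under JEE, the pool owns the connection: commit/rollback, release the
// prepared statements, clear the reference -- but never close().
class JeeSchemaDb extends SchemaDb {
    @Override
    public void disconnect(boolean doCommit) {
        if (cnx == null) return;
        if (doCommit) cnx.commit(); else cnx.rollback();
        releasePreparedStatements();
        cnx = null;   // hand the pooled connection back untouched
    }
}
```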
Tomorrow I'll get into debugging the Swing Picker windows -- they've been coded, and the new buttons have been added, but they don't seem to be working properly. Maybe I'll debug it now -- I need to stay up for another couple of hours anyhow, and Eclipse has been letting me work instead of getting all crash-happy.
The Picker windows now show and populate themselves in read-only mode. I need to add the support for callbacks to set the selected values, and also add the buttons for "Choose None", "Choose Selected", and "Cancel" at the bottom of the Picker window and wire their Actions.
All of the database creation scripts need to create the Cluster, Tenant, and Security tables before they create the application tables, because the definitions of the tables often specify the relationships to the security tables for the audit information. I've corrected that oversight/mistake.
I'm a bit baffled as to how I missed this on the PostgreSQL database creation scripts. I couldn't get them to run tonight. I was *sure* I'd run them repeatedly during my testing, but apparently I made some changes between my last run and the release of the rule base.
I know I was doing runs of CFUniverse creation for just about every database. I should have caught this problem. I feel sooooo embarrassed -- no one has been able to create the databases I've been using for testing.
The Relation.PickerPopDep is used to specify a dereferencing relationship between the PickerPop relation and the data. When a PickerPopDep is specified, this object's hierarchy is searched for satisfaction of the dependency relationship, which is expected to be a singleton reference. Then the PickerPop is satisfied using that singleton reference instead of the current object, so basically you can indirect the satisfaction of a Picker window's population to just about anything you can reach in your object's hierarchy (including its containers and their relationships.)
Sample configurations have been added to the CFDbTest 2.0 model to exercise this planned functionality, but doing so requires a change to the overall data model, so I'm pushing this release for now so I can do a full build of CFDbTest 2.0 on my much-speedier-than-my-linux-box laptop. As it is, it takes half an hour just to do a Swing GUI run for CFDbTest 2.0 on the linux box. The laptop can do it in less than 1/3 of that time.
As a side note, I only started on the Swing prototype GUI on 2014.05.22. That's less than four months ago.
The PickerPop relation lookup has been added to the object hierarchy of the MSS BAM model, and wired in to the SAX parser, new verbs provided by the engine (binding HasPickerPopRelationDef and reference PickerPopRelationDef), and a test specification of the new attribute in CFDbTest 2.0.
Now all the pieces I need to work on the Picker windows are in place.
Built on the ever popular CFLib 1.11.12386 system library.
I'm very happy with today's customization and bug fixes on the Swing GUI prototype. In fact, I'm not going to bother deploying the build of yesterday's changes. Instead I'll be refreshing the Swing GUIs for yesterday's source code refresh, and incorporating all those fixes.
The width of the JAttrPanels now tracks the width of the window properly, displaying only a vertical scrollbar when necessary. Rather than presenting a horizontal scrollbar, a too-narrow panel is a hint to make the window wider.
The way that doLayout() is propagated throughout the code has changed a bit in order to support the JAttrPanel changes, and the new CFHSlaveJScrollPane that was added to CFLib 1.11.12386.
The CFHSlaveJScrollPane is a specialized JScrollPane that forces the width of the scrolled component to track the width of the viewport during doLayout() operations.
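A sketch of how such a width-slaved scroll pane can be built. This illustrates the technique, not the actual CFHSlaveJScrollPane source.

```java
import java.awt.Component;
import java.awt.Dimension;
import javax.swing.JScrollPane;
import javax.swing.ScrollPaneConstants;

// After the normal layout pass, force the view's preferred width to the
// viewport's extent width and lay out again, so only a vertical scrollbar
// can ever appear.
class SlaveWidthScrollPane extends JScrollPane {
    SlaveWidthScrollPane(Component view) {
        super(view,
              ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED,
              ScrollPaneConstants.HORIZONTAL_SCROLLBAR_NEVER);
    }

    @Override
    public void doLayout() {
        super.doLayout();
        Component view = getViewport().getView();
        if (view != null) {
            int extentWidth = getViewport().getExtentSize().width;
            Dimension pref = view.getPreferredSize();
            if (pref.width != extentWidth) {
                view.setPreferredSize(new Dimension(extentWidth, pref.height));
                super.doLayout();  // re-run the layout with the corrected width
            }
        }
    }
}
```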
The CFJTextEditor has also been modified so text editors are now scrollable objects.
The layout of the displayed frames is much better now, but I would like to have the width of the AttrJPanels *managed* by their containing JScrollPane instead of the other way around. i.e. I want the width of the attribute panel to be *set* when you adjust the size of the JScrollPane's viewport by adjusting the width of its containing JInternalFrame. Ideally, the JInternalFrame should respect the minimum size of that component AttrJPanel. Maybe what I need to do is just specify a minimum width for the JScrollPane to accomplish that. Regardless, that's all future tweaking and fiddling.
For now I'm content that I've added enough functionality and fixed enough bugs that it's worthwhile to do a manufacturing run. Note that the specification of the CFCrm 2.0 model has changed, so any projects which import this model must be completely remanufactured. (I neglected to make the top level CRM objects components of the Tenant.)
CFDbTest 2.0 has already been refreshed and tested with this rule set, but I did tweak a couple of things so I'll have to remanufacture its swing layer anyhow. They're not big changes, just bit-twiddling.
In order to implement the Picker population properly, I'm going to have to add a new attribute to the Relation specifications of a BAM. I'll add an optional PickerPopRelation attribute to them, which references a Relation (one of Components, Children, or Details.)
When you specify a PickerPopRelation, the code will search through the object runtime hierarchy for an instance of PickerPopRelation.FromTable. When it finds such an instance, it will cast it accordingly, and invoke the member attribute accessor for the relationship.
If no PickerPopRelation is specified and the target of the relationship has no container defined, then the global set of target objects will be used for the population. Otherwise, if the container of the target objects is a Tenant or Cluster, the appropriate security object will be used to filter the population set.
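The hierarchy search described above can be sketched generically. The HierObj interface and getContainer() accessor here are hypothetical stand-ins for the manufactured object interfaces; the real code navigates the runtime object hierarchy of the schema.

```java
// Walk from the current object up through its containers, returning the
// nearest instance of the target table's class, or null if the scope
// doesn't contain one.
interface HierObj {
    HierObj getContainer();
}

class ScopeSearch {
    static <T> T findInScope(HierObj start, Class<T> target) {
        for (HierObj scan = start; scan != null; scan = scan.getContainer()) {
            if (target.isInstance(scan)) {
                return target.cast(scan);
            }
        }
        return null;
    }
}
```

Once the instance of PickerPopRelation.FromTable is found this way, the member attribute accessor for the relationship can be invoked on it to produce the population set.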
Note that for now I'm not planning to support Add methods in the picker windows. There are a lot of complications to doing so, and I wouldn't *always* want you to be able to add an object without going to its container. I may someday implement yet another flag, say PickerAllowsAdd, to enable such functionality on a per-relationship basis, if I ever work out the kinks of doing so.
The CFJTextEditor now inherits from JEditorPane, defaulting to multi-line plain text.
You should be able to just rebuild the existing Swing GUI prototype packages and have them work, but there may be layout problems because I specify a minimum size of 400x40 for the text editor now. I'll be adjusting the layout code accordingly and posting updates once that's done.
I've been giving a lot of thought to the Picker windows. In particular, how to decide on the selection set of objects they present. I think I'll need to add a few verbs to the core engine. One will locate the relationship which defines a container, parent, or master reference to the target table of the Picker, and the other will return a yes/no as to whether such a definition exists. Both verbs will be scoped by a Relationship, because they need access to both the referencing (From) table and the referenced (To) table in order to populate the Picker's selection set.
Returning results from a Picker is pretty trivial. I just need to define a callback interface (template-style) in CFLib, and a new window template interface for the Pickers that specifies an API for setting such a callback of a Picker. Or maybe I'll just pass it in as an argument to the Picker instead. I haven't decided yet. If I'm feeling energetic, I'll provide both interfaces. But I am *not* generally an energetic fellow. 100% lazy-assed desk jockey here. :P
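A hypothetical shape for that callback, with the pass-it-as-an-argument variant. The names here are illustrative, not the eventual CFLib interface.

```java
// Template-style selection callback: one method per way a Picker can close.
interface PickerCallback<T> {
    void choseNone();          // "Choose None" -> explicit null selection
    void chose(T selected);    // "Choose Selected"
    void cancelled();          // "Cancel" -> leave the reference untouched
}

// Passing the callback in as a constructor argument, rather than wiring it
// via a setter on the Picker window afterwards.
class Picker<T> {
    private final PickerCallback<T> callback;
    Picker(PickerCallback<T> callback) { this.callback = callback; }
    void chooseSelected(T row) { callback.chose(row); }
    void chooseNone() { callback.choseNone(); }
    void cancel() { callback.cancelled(); }
}
```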
For consistency, the main distribution has been repackaged with CFLib 1.11.12378, even though you can run code manufactured by 1.11.12376 by simply building it with the appropriate CFLib.
Rather than wrassle with the Java Swing JFormattedTextField implementation, I opted to switch over to the more basic JTextField as the base class of the Date/Time/Timestamp editors, wire in a data value attribute, and simply leverage my XML parsers and formatters for now. Yeah, it's crude. Yeah, it's ugly.
But you're not supposed to DEPLOY this GUI to users in the first place! You're supposed to customize it heavily through the factory interfaces with your OWN code.
With this version of CFLib, the numeric editors are now working properly, as are the basic String/Token/NmToken/NmTokens fields.
But I'm getting bizarre behaviours from the [TZ]Date/Time/Timestamp editors. The [TZ]Date editors reject valid data outright, and the [TZ]Time/Timestamp editors seem to clobber their values when they become enabled, despite having initially saved valid information correctly. This *might* be a problem with higher level code, but I'm pretty sure it's the library misbehaving.
Regardless. It's 22h30. I've just put on another coffee (fresh can!), but I'm ambivalent about coding any further tonight. It's been a long evening of debugging and fixing.
A little more quick testing showed that the values being persisted by the Time/Timestamp edits are NOT correct. They're being munged in some way, either by my various conversion functions or by something else. More digging will be required. It's even possible I've encountered a core code bug, but as far as I can recall I had *tested* that values are properly persisted by the code, so I don't *think* that's the cause of the problem. I'm pretty sure it's isolated to the way I'm constructing the Calendar values from the Dates in the edits. The TZ types I'm not worried about yet -- I know those need work. But I at least need basic Date/Time/Timestamp working.
I'm not sure why TZDate doesn't seem to be persisting edits; it could be a flaw in the higher level code. But I've re-copied some code from the Date field (which works) in hopes of fixing whatever is wrong with TZDate. It's not that complex a widget at this point.
TZTextEditor isn't fleshed out yet, so it just displays the first line of the text it's been set to. I think I'm going to switch that over to a multi-line text editor and make some layout tweaks (i.e. display 3 lines worth by default instead of one line.) But later, later. For now the other widgets...
When a formatted field editor was being set to an empty value (null), it wasn't properly clearing the inherited Value attribute of the widget. It does that now, so hopefully I'll be able to edit into null values and persist them.
With any luck, that'll take care of the first round of bugs for CFDbTest 2.0.
The CFCrm 2.0 testing of the edit functionality uncovered one minor defect (primary key attributes should never be modifiable), but with this build's rule set I'm already manufacturing CFDbTest 2.0 so I can test all the various field editor types at once.
Wouldn't it be something if they all worked? The odds are, of course, not in my favour...
The postFields() method has been added to the AttrJPanel and wired into place. It was surprisingly easy to code, starting with the populateFields() rules as a copy-paste-edit source. I only had to do four runs before I got a clean build. Which I have now, and am ready to begin testing. It may be a while before I have a *working* version, so I'm posting this starting point for now, but not making it the default download for the project because it *is* completely untested.
There was a very stupid and simple mistake in the enable/disable widget state code of the AttrJPanel which has been corrected. Now when you're in Edit mode the widgets are enabled for editing.
I successfully created a default value instance of a Tenant within the system Cluster for CFCrm 2.0. I think I'm done with debugging the flow of the state machine for editing objects now. Next I need to focus on the proper enabling and disabling of the edit widgets in AttrJPanel.
I'm getting the enable/disable state behaviour I want on the ListJPanels within a View/Edit panel, though, which is nice to see. I love it when a plan comes together.
At this point, the ICFJPanelList interface has been incorporated by the Swing GUI prototype code. However, I've yet to make the changes to the enable/disable state of the list panel menus to reflect the presence or lack of a container reference (which is cleared to null if this instance can't parent a sub-element list.)
At least I didn't *break* anything with this batch of code. :P
The ICFJPanelList<P,C> specifies a list such that the sub-objects in the collection of C are contained by the object of type P. Thus when an Add request is made, the instance of a subclass of C is created, and it is wired to the SwingContainer parent of the elements automagically. As there is no way to set the container relationship from the user interface (those relationships are always displayed as read-only), it has to be set by the code explicitly before displaying the new instance for edit.
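A hypothetical reduction of that contract; the interface and type names here are illustrative, not the manufactured ICFJPanelList.

```java
// P is the containing parent type, C the contained element type. Add must
// wire the new C to its P container before the instance is ever shown for
// edit, because container references are read-only in the UI.
interface PanelList<P, C> {
    P getSwingContainer();
    C createMember();
}

// Toy model: Tenants contained by a Cluster.
class Cluster { }

class Tenant {
    final Cluster container;
    Tenant(Cluster container) { this.container = container; }
}

class TenantListPanel implements PanelList<Cluster, Tenant> {
    private final Cluster swingContainer;
    TenantListPanel(Cluster c) { this.swingContainer = c; }
    public Cluster getSwingContainer() { return swingContainer; }
    public Tenant createMember() {
        // the container relationship is set by code, never by the user
        return new Tenant(swingContainer);
    }
}
```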
There were a number of exceptions being thrown while trying to navigate around the Swing GUI. The causes of those exceptions have been corrected or allowed for as appropriate, and there are no longer any exceptions being thrown while navigating around the CFCrm 2.0 data object hierarchy.
A significant refactoring and reworking of the use of the swingFocus attribute vs. the new CFLib 1.11.12362 get/setSwingFocus() accessors was coded and tested as well. There were several "speed bumps" along the way to getting these code changes to build without errors.
So download, play, enjoy. I'm going to refresh the other 2.0 projects and get them building...
All of the panel bases specified by the Swing foundation now specify the implementation of ICFJPanelCommon with basic accessors for the new ICFLibAnyObj2 SwingFocus attributes. There will need to be some substantial work done on the Swing GUI prototype code to make use of the new interface and accessors instead of the specific types of the component panels and lists.
Ideally I want an abstracted interface managed by a generic GUI component framework that can accept plug-ins of custom user interface components easily and effectively, without requiring that you continue to derive from the manufactured implementations of those interfaces.
I've already started modifying the specification of the manufactured Swing GUI code to support the new interfaces. Essentially you can presume that any JPanel returned by the manufactured swing interfaces supports the ICFJPanelCommon interface at a bare minimum. It may support additional functionality when typecast appropriately, but any GUI elements returned by the factories will implement this critical interface. Your custom interface implementation must do the same in order to be installed in the factory as a replacement for the default presentation implementation.
The AttrJPanel was requiring a non-null swingFocus when in View mode. But in order to display sub-element objects, a null swingFocus has to be viewable.
The population of the display widgets has also been modified such that if the AttrJPanel is in an Unknown state, the swingFocus is ignored and no data is presented to the user.
The changes for the disabled text colours worked, so I'm repackaging everything with the latest CFLib 1.11.12358.
There turns out to be a special API for text fields that is used to specify the disabled colour. No wonder I couldn't get it to work the way I was hacking things.
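That API is JTextComponent.setDisabledTextColor(). A minimal sketch of applying it; the helper name is illustrative.

```java
import java.awt.Color;
import javax.swing.JTextField;

class DisabledColorDemo {
    // Force readable black text on a disabled (read-only) field instead of
    // the look-and-feel's low-contrast default.
    static JTextField makeReadOnlyField(String text) {
        JTextField field = new JTextField(text);
        field.setEnabled(false);
        field.setDisabledTextColor(Color.BLACK);
        return field;
    }
}
```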
The ListJPanel Selected menu needs to be enabled whenever there is a row selected in the list, not just when the panel is also in Edit mode. This small change allows you to use the menu navigation Selected/View Selected to display the selected item even if the panel is in View mode.
This is an overly aggressive attempt to force the text colour to black even in read-only/disabled mode for text edits. They're being annoyingly persistent about using that low-contrast pale blue colour.
There are some changes to the rules which correct the propagation of the edit state, in particular for the EltJTabbedPanes. I think there were some other corrections as well, but I can't recall what they might have been offhand. The GUI looks much better with CFLib 1.11.12353.
The blue colour that is used by default for the Swing widgets looks horrible on a grey background when the widget is disabled. Switch over to forcing good old black on grey and black on white.
CFLib 1.11.12350 does not change its programming interface specifications, but merely adds functionality to the core widgets such that they display the background colour of their containing panel as their background when disabled, and white when enabled.
The new code has been tested with CFCrm 2.0.12352.
The editor widgets now display themselves with the same background as the enclosing panel if they are disabled. If they are in edit mode, they display white backgrounds instead. This provides a clear indicator as to whether a given editor is in read-only or modifiable mode.
The code for the attribute panels and related classes has been updated to support the new CFJPanel.PanelMode.Update state. Now in order to save an Add or an Edit, you transition to Update, which then transitions to View after applying the changes and losing the data pin.
When incorporated in an AskDeleteJPanel, the AttrJPanel also implements the CFJPanel.PanelMode.Delete state to apply the deletion, clear the SwingFocus, and leave the widget in a state of CFJPanel.PanelMode.Unknown.
The logic for state changes is largely that of AttrJPanel, which is responsible for doing all actual object instance manipulation. The ListJPanel instances, on the other hand, are always in a state of View or Edit, which disables and enables the data editing actions (also allowing for the state of the row selection in the list.)
I'm pretty happy with the functionality for viewing data at this point. I'd call this "good to go" for doing walk-throughs of data that you've loaded up using the SaxLoader implementations and your own data scripts. You can even Delete data at this point. I just haven't got the code in place for applying the edited values to the EditObj when performing an AttrJPanel.Update state transition.
I decided I needed another state transition in the diagram to make things clear.
Unknown -> View, Edit, Unknown
View -> Edit, Delete, Unknown
Edit -> View, Update, Delete, Unknown
Update -> View, Unknown
Delete -> Unknown
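That transition table can be coded directly as a guard. This is a sketch of the idea, not the manufactured implementation.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

enum PanelMode { Unknown, View, Edit, Update, Delete }

class PanelModeGuard {
    // One EnumSet of permitted target states per current state.
    private static final Map<PanelMode, EnumSet<PanelMode>> ALLOWED =
            new EnumMap<PanelMode, EnumSet<PanelMode>>(PanelMode.class);
    static {
        ALLOWED.put(PanelMode.Unknown,
                EnumSet.of(PanelMode.View, PanelMode.Edit, PanelMode.Unknown));
        ALLOWED.put(PanelMode.View,
                EnumSet.of(PanelMode.Edit, PanelMode.Delete, PanelMode.Unknown));
        ALLOWED.put(PanelMode.Edit,
                EnumSet.of(PanelMode.View, PanelMode.Update, PanelMode.Delete, PanelMode.Unknown));
        ALLOWED.put(PanelMode.Update,
                EnumSet.of(PanelMode.View, PanelMode.Unknown));
        ALLOWED.put(PanelMode.Delete,
                EnumSet.of(PanelMode.Unknown));
    }

    static boolean canTransition(PanelMode from, PanelMode to) {
        return ALLOWED.get(from).contains(to);
    }
}
```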
The ListJPanel instances which are wired to the tab panels of a View/Edit window are now properly wired to the appropriate data for the viewed instance. There is no attempt at doing a proper job of refreshing the view data in response to updates or anything like that yet, but now you can double-click your way around any existing data that was created by the test runs for CFDbTest 2.0, including the elements of the complex object model (a stripped down BAM subset.)
The ListJPanel implementations have been enhanced with the code to create the miscellaneous components required for a list of data.
The final step of wiring the setting of the collection to the re-preparation of the list box data and subsequent invalidation of the list box itself has not been done yet. I just wanted to take a pause while I have a clean build, and *then* populate the data.
The list headers for the list boxes get displayed properly, though. I'm seeing some anomalies in how the list boxes get displayed if there isn't enough column width consumed to take up the entire display area. In that case, an empty area is displayed in the right portion of the list box, and I'd like the rightmost column to take up all that space so that things look "pretty" with fully-populated rows instead of that ugly unused segment of the display.
This is "Life, The Universe, and Everything" sticking its tongue out at you.
The Finder windows are almost complete. I need to add hooks and callbacks for propagating notifications to the Finder window when its spawned ViewEdit window has created or updated the information it presents. When that happens, I need to either invalidate the displayed row for the updated instance, or refresh the data list and invalidate the list box if a new instance was added.
I should pass back the affected instance reference when invoking the callback so that I can afterwards search the list box for the affected instance and automatically select that row of data, scrolling the list box to display that selected row.
Then I'll be ready to propagate the code from the FinderJPanel to the ListJPanel so that I can start worrying about the sub-object lists of the View/Edit panel, and the appropriate enabling and disabling of their state.
Note that I've decided the list box panels need to support the mode Edit if they don't already. If the list box is in Unknown mode, none of the Add/View/Edit/Delete actions are displayed by the ListJPanel, and the list box takes the entire area. If the list box is in View mode, then the Add/View/Edit/Delete actions are visible, but disabled. If the list box is in Edit mode, then Add and optionally View/Edit/Delete (based on list box selection state) are visible and enabled/disabled accordingly.
Thus you have to edit an instance if you want to manipulate its sub-objects.
This batch of changes allows the enable/disable state of the Finder menu items to adjust to match the selection state of the list box shown by its component FinderJPanel. The FinderJPanel coordinates the SwingFocus attribute of any containing CFJInternalFrame which implements ISchemaSwingTableJPanelCommon with the selected row of the list box.
With this release, a missing typecast was corrected. The 2.0 projects all build successfully now.
There was a spurious $ in one of the rules expanded by the production of CFUniverse 2.0. That error has been corrected, and is the only change between 1.11.12335 and this release.
The way that the desktop is resolved for adding new JInternalFrames has been reworked and refreshed. The end result is the same, but this code *properly* navigates through the object hierarchy to locate the desktop instead of just doing an iterative probe of getParent() until JDesktopPane is found.
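Swing actually ships a helper for exactly this kind of ancestor lookup; a sketch of the tidy version (the DesktopLookup wrapper is illustrative):

```java
import java.awt.Component;
import java.awt.Container;
import javax.swing.JDesktopPane;
import javax.swing.SwingUtilities;

class DesktopLookup {
    // Resolve the enclosing desktop without hand-rolling a getParent() loop.
    static JDesktopPane findDesktop(Component c) {
        Container ancestor = SwingUtilities.getAncestorOfClass(JDesktopPane.class, c);
        return (JDesktopPane) ancestor;
    }
}
```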
The setPanelMode() implementations now check for the subset of valid values that are appropriate for the window or panel in question.
I believe all the state changes are properly tracked by the edit flow events now, but I'll have to do some actual testing to be sure. :)
This is all very much untested GUI code at this point. It builds. It might work. It might not. 'tis the Schroedinger's Cat of Code... :P
There were some inconsistencies and holes in the GUI functionality to date. Some of that has been corrected and fleshed out. I think it should be close to functional at this point, give or take a lot of missing postChanges() code for the AttrJPanel.
With the addition of the PanelMode transitioning matrix for the AttrJPanel, I think I'm now ready to begin looking at testing this code, so I'll be manufacturing both CFCrm and CFDbTest 2.0 this time. CFCrm 2.0 builds clean already; there is no reason to expect any issues with CFDbTest 2.0.
The FinderJInternalFrame and FinderJPanel have most of the code in place for launching appropriate ViewEdit in mode Add/View/Edit and AskDelete in mode View. The code for retrieving the row data of the currently selected row has to be added -- the sections are marked with TODO WORKING comments.
The AskDeleteJPanel now properly responds to its Delete/View state changes by updating its user feedback accordingly and propagating the state changes down to the component AttrJPanel.
The ViewEditJInternalFrame has been updated to propagate its state to the component AttrJPanel, and to update the user interface action enable states according to the value passed to setPanelMode().
The AttrJPanel has a skeleton outlined for the possible state transitions to be reacted to, ready to receive the code from yesterday morning's Programmer Notes. It is, I believe, the last piece of this big event and state change puzzle to be coded.
The PickerJPanel and PickerJInternalFrame have been fleshed out a bit more.
This release produces a clean compile for CFCrm 2.0.
View/Edit/Delete Selected menu items have been sketched out for list panels, and are now enabled and disabled appropriately. The implementations of these methods need to retrieve the currently selected row's data and use it for opening the appropriate focus window. In the long run, there will be a catalogue of the different classes of windows so that only one instance of each type of JInternalFrame can exist for any given data instance. If such a window already exists, it will be brought to the forefront, and if necessary, transitioned to a different panel mode.
I think that's enough work for today/tonight. Post an update of CFCrm, and I'll be calling it quits for now :)
The JInternalFrames of the Swing GUI now propagate changes to their PanelMode values to their sub-objects as required.
The ListJPanel/IJPanelList Add menu and action items are now enabled and disabled by the GUI logic whenever the PanelMode changes state via setPanelMode(). The states CFJPanel.PanelMode.Add and CFJPanel.PanelMode.Edit are expected by instances of this viewport; CFJPanel.PanelMode.Delete is unexpected and CFJPanel.PanelMode.Unknown is the initial state of the panel.
The EltJTabbedPane now sets the singleton Attribute JPanels to CFJPanel.PanelMode.View, and propagates the PanelMode to the duplicate List (IJPanelList) implementations. The actions for Selected.showView(), Selected.showEdit(), and Selected.showConfirmDelete() have not been specified yet, nor have appropriate menu items been wired to reference those actions. There is placeholder logic for adjusting their enable/disable states in the comments of the ListJPanel/IJPanelList specifications.
The AskDelete and ViewEdit JPanels propagate their PanelMode changes to their component Attribute JPanels.
Wire the interface get/setPanelMode as part of the expected interface for panels defined by the user interface. The basic interface provided by the CFJPanel just provides an object attribute and accessors as an implementation. The application panels need to override that basic implementation with code that reacts to changes in the attribute value.
The class hierarchy of the manufactured Swing GUI has been updated to incorporate CFJTabbedPane and CFJInternalFrame as provided by CFLib 1.11.12320.
The Java JDK on this box has been updated to the latest Debian 64-bit release, which reports in as:
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.1) (7u65-2.5.1-5~deb7u1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
CFJTabbedPane requires the non-default constructor CFJTabbedPane( int tabPlacement, int tabLayoutPolicy ).
CFJInternalFrame provides a default implementation of getters and setters for the PanelMode state attribute.
The default implementations of get/setPanelMode() in CFJPanel are not being found. I believe it's because the implementation does not properly specify the specific reference to CFJPanel.PanelMode instead of just the locally defined PanelMode, even though they refer to the same class specification in the JRE.
The window implementations deriving from CFJInternalFrame will also provide implementations of the panel interface, overloading the getters and setters for their PanelMode attributes to react by propagating any changes to the PanelMode to their component CFJPanels which define the user interface.
SETTER AttrJPanel.setPanelMode( CFJPanel.PanelMode value )
    IF value == getPanelMode()
    CASE value OF
    IF not editing new SwingFocus
    IF editing SwingFocus
    CASE getPanelMode() OF !! Check previous mode Unknown
    IF viewing existing SwingFocus
    IF not editing SwingFocus
    IF viewing new SwingFocus
    IF not editing SwingFocus
    IF viewing new SwingFocus
    super.setPanelMode( value );
The attribute JPanel ends up being the heart of the object interface, as it provides a state machine around the PanelMode of the SwingFocus object.
Use Schema Factory to instantiate an appropriate instance
Use Swing Factory to instantiate an appropriate ViewEdit window over the newInstance
newWindow.setPanelMode( CFJPanel.PanelMode.Add );
Add newWindow to desktop
Use Swing Factory to instantiate an appropriate ViewEdit window over the selectedInstance
newWindow.setPanelMode( CFJPanel.PanelMode.Edit );
Add newWindow to desktop
Use Swing Factory to instantiate an appropriate ViewEdit window over the selectedInstance
newWindow.setPanelMode( CFJPanel.PanelMode.Edit );
Add newWindow to desktop
Use Swing Factory to instantiate an appropriate AskDelete window over the selectedInstance
newWindow.setPanelMode( CFJPanel.PanelMode.View );
Add newWindow to desktop
IF not editing SwingFocus
setPanelMode( CFJPanel.PanelMode.Delete );
setPanelMode( CFJPanel.PanelMode.View );
IF not editing SwingFocus
CASE getPanelMode() OF
setPanelMode( CFJPanel.PanelMode.View );
setPanelMode( CFJPanel.PanelMode.View );
CASE getPanelMode() OF
IF not have selection
swingFocus = current row selected
Invoke selection listeners with SwingFocus
Close window
CFJPanel now defines the PanelMode enum, which replaces the WindowMode enum that was in the CFJInternalFrame. After all, there is far more code manipulating panel state than there is code manipulating window state.
The hierarchy of the manufactured frames is modified by this release, which is compiled using the new CFLib 1.11.12313.
The CFJInternalFrame just adds a WindowMode enum and attribute which can be used by implementations to control how their widgets are displayed and organized according to the current behaviour mode of the window.
For example, a window in View mode would disable all of its edit widgets. A window in Add mode would enable all of its edits except for Container relationships. A window in Edit mode would disable its Parent and Container links, but enable editing of its other attributes. A window in Delete mode would disable all of its edits.
The enable/disable state of menu items should also be adjusted based on the window mode.
The CFJBoolEditor is now a tri-state check box, with a "?" displayed for null values, "X" for true, and an empty box for false. I really enjoyed writing this chunk of code. Widgets are fun.
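The value model behind such a widget is just a nullable Boolean. A sketch of the glyph mapping, plus one plausible click cycle -- the cycle order here is an assumption for illustration, not necessarily what CFJBoolEditor does.

```java
class TriState {
    // null -> "?", TRUE -> "X", FALSE -> empty box
    static String glyph(Boolean value) {
        if (value == null) return "?";
        return value.booleanValue() ? "X" : " ";
    }

    // One plausible click cycle: null -> true -> false -> null
    static Boolean next(Boolean value) {
        if (value == null) return Boolean.TRUE;
        return value.booleanValue() ? Boolean.FALSE : null;
    }
}
```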
This build updates the manufactured code to rely on the refactored Swing classes in CFLib 1.11.12307. CFLib itself corrects a defect with the CFJTimeEditor instantiation, so it now formats the data properly.
Along the way I also recoded the way that the widgets deal with the getting and setting of Time and TZTime values such that the date attributes are cleared automatically during getting and setting operations.
I've been scratching my head, missing a relatively obvious superclass constructor error.
An intermediate date instance is now allocated and cleared, then the HOUR_OF_DAY, MINUTE, and SECOND fields are copied from the parameterized value in order to ensure that there can be no confusion with a date value. Errors I was seeing in the CFDbTest 2.0 Swing GUI cell renderers brought this to my attention.
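The field-copying approach might look like this; TimeOnly and timeOf are illustrative names, not the actual widget code.

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class TimeOnly {
    // Allocate a cleared intermediate calendar, then copy only the
    // time-of-day fields so no stray date information leaks in.
    public static Calendar timeOf(Calendar value) {
        Calendar cal = new GregorianCalendar();
        cal.clear();  // all fields undefined; date defaults to the epoch
        cal.set(Calendar.HOUR_OF_DAY, value.get(Calendar.HOUR_OF_DAY));
        cal.set(Calendar.MINUTE, value.get(Calendar.MINUTE));
        cal.set(Calendar.SECOND, value.get(Calendar.SECOND));
        return cal;
    }
}
```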
You'll notice I still rely on Debian's Open JDK 1.7 for most of my development and testing work. I can't imagine why JDK 1.8 wouldn't work with minimal modifications, but until it's available from the Debian repositories I won't be testing it, much less counting on it for my deployments.
I'll be focusing on CFDbTest rather than CFLib for the next little while as I code and test the Swing GUI implementation. I need to make sure my widget bases are covered. It's too bad there is no way to wedge a factory into the instantiation hierarchy of a class. You have to explicitly code the inheritance; as far as I know there is no way to virtualize an inherited class. But I want to port the Swing GUI to Android widgets some day, ideally on a Samsung tablet with the pen/stylus. I like the 10.1 2014 edition a lot. It's got everything I've ever hoped to see in a tablet form factor -- notepad sized, with pen input, and a high enough resolution to capture smaller printing than the big special-alphabet writing in one character cell that you used to have to do with the Palm Pilot IV I used to own. As far as I'm concerned, the Samsung 2014 is Star Trek hardware compared to when I was born. Remember, I've been around since the TRS-80 Model I, Level I 8-bit Z-80. :D
But there are still dinosaurs older than me out there, too. We remember people like Alan Kay.
I now have a nice little collection of CF*CellRenderer and CFJ*Editor objects.
The name is consistent, and consistent shall be the naming.
The leaf classes CFJ*TextField have been refactored to CFJ*Editor. The base CFJTextField and CFJFormattedTextField that most of them derive from have not been refactored, as theirs is a legitimate inheritance of meaning from Swing. However, I want the final GUI code to be more descriptive of what the widgets are *doing* rather than how they *look*.
The boolean cell renderer now draws a 16x16 box instead of a 20x20, and the character positioning within the box has been recalculated based on some notes about font rendering that I found in the documentation.
With the updates made to the rules and to CFLib, CFDbTest 2.0 now renders correctly for all of the supported atomic data types (i.e. everything but Blobs, which are hidden because a general purpose GUI can't guess how to interpret a Blob.)
Up until now, testing had only been done with CFCrm, which had left some errors in the rendering of the various date/time/timestamp table columns.
The Bool columns now render as a crude checkbox with a question mark for null values. This needs some tidying up, then I need to shift that custom rendering over to a custom widget for editing boolean values instead of relying on a true/false/blank text field.
The value accessors for the text edits for TZDate and UInt64 were incorrectly named.
The formatters and value retrievers for the various date/time/timestamp implementations needed to convert back and forth from a java.util.Date instead of a Calendar in order to use the SimpleDateFormat objects.
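That conversion is just getTime()/setTime() around the SimpleDateFormat calls. A sketch, with CalFormat as a made-up helper name and the pattern strings as examples:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.GregorianCalendar;

public class CalFormat {
    // SimpleDateFormat works on java.util.Date, so a Calendar value must
    // be converted via getTime() before formatting, and a parsed Date
    // wrapped back into a Calendar afterwards.
    public static String format(Calendar cal, String pattern) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(cal.getTimeZone());
        return fmt.format(cal.getTime());
    }

    public static Calendar parse(String text, String pattern) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        Calendar cal = new GregorianCalendar();
        cal.setTime(fmt.parse(text));
        return cal;
    }
}
```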
As there is no way to *usefully* present a generic Blob without knowing what the data *is*, support for Blobs has been removed from the CFLib Swing GUI package.
The boolean, text, and UUID fields now have getters and setters so that none of the rules for the Swing GUI prototype have to be left as "WORKING" tags.
This is the final update of MSS Code Factory itself for this month's code refresh. All of the manufactured projects have been successfully built, and I've updated the "CountEm*" scripts to skip the files in the build and bin directories. CFCore 2.0 has had its build pruned to skip the Swing support, as it turns out you can't actually use that code base with Swing because it doesn't support the ICFAnyObj2 interfaces.
The CFAcc model had to be corrected before it would build its Swing GUI layer. That model affects CFGCash and CFUniverse as well. So it's going to be quite some time before CFUniverse 2.0 is ready for check in.
Not only did I forget to dereference the ToTableDef for the Finder windows, I did it repeatedly with copy-paste-edit code. :P
There were also errors in the model for the CFAcc project -- the AccountConfig table did not have an index with a suffix of TenantIdx, so the Swing GUI code wouldn't compile.
Relationship accessors incorporate the RelationType in their name.
The Swing GUI rules mistakenly referred to "ReferenceType" instead of "RelationType" in one case. I must admit I'm surprised I didn't encounter this bug earlier.
The Swing GUI was missing an "obj." dereference, which has been corrected. MSS Code Factory has also been rebuilt with CFLib 1.11.12289, adding a cell renderer for Text columns (which is, of course, not actually used by the factory itself.)
CFDbTest builds were failing because they employ text columns and I'd neglected to create a renderer for them.
The JDK on my Linux box has been updated to 1.7.0_65, and CFLib's new cell renderers have been debugged, so MSS Code Factory has been rebuilt and repackaged with the relevant jars.
The cell renderers have been debugged and now display their data correctly. That's not to say there aren't any undiscovered bugs, but they work well enough for me to push them out in a release.
The Swing GUI Finder JTables are looking better. They now adjust to the proper row height, the header has a proper height, the background and foreground colours are being set for the first column, and I've got the raised/bump look that I want for the cells.
The problem is not all the cells are being rendered properly yet, so I've still got some debugging to do. I sure hope there isn't something squirreled away in the JDK code that prevents you from returning non-Strings as cell data for a JTable. I'm kind of counting on being able to do that.
Each of the custom data types now has a cell renderer for use in JTable implementations. If the cell data is wrapped in an appropriate object for the cell, then the cell is custom formatted and aligned for display. If it's just a string, it will be aligned, but not formatted and validated.
They don't work all that well yet. I'm having some drawing glitches and missing data displays. But it's time to check in what I've got for now because there's a lot of new code that's been debugged.
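Returning non-Strings as JTable cell data does work, provided the renderer formats the objects itself. A minimal sketch of the idea; the class name and the Number handling are illustrative, not the actual CF*CellRenderer code:

```java
import java.awt.Component;
import javax.swing.JTable;
import javax.swing.SwingConstants;
import javax.swing.table.DefaultTableCellRenderer;

// If the cell value is a wrapped/typed object, format and align it;
// if it's just a String, fall back to the default label behaviour.
public class RightAlignedNumberRenderer extends DefaultTableCellRenderer {
    @Override
    public Component getTableCellRendererComponent(JTable table, Object value,
            boolean isSelected, boolean hasFocus, int row, int column) {
        if (value instanceof Number) {
            setHorizontalAlignment(SwingConstants.RIGHT);
            setText(String.valueOf(value));  // custom formatting hook
            return this;
        }
        // Plain strings: aligned, but not formatted or validated
        return super.getTableCellRendererComponent(table, value,
                isSelected, hasFocus, row, column);
    }
}
```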
The Swing GUI Finder windows have been enhanced such that they display all of their data attributes, similar to what is presented by an attribute panel. The headers were a bit of a bear -- it's amazing to me how much I've forgotten about Swing coding since I did it last. So much glue code...
The header for the Qualified Name now displays in the Swing GUI Finder windows, with the data below. To see it for yourself, manufacture and build a project, and install its database to PostgreSQL as the "postgres" user.
Fire up the Swing PostgreSQL GUI, and log in as system/system/system/postgres/yourpassword.
Do File/Find/Cluster or File/Find/Tenant and you'll see the "system" entry for the corresponding data displayed in the data table.
An error in readAllBuff() has been corrected for DB/2 LUW, PostgreSQL, and Sybase ASE.
The Swing GUI now enables and disables the menu items of the main window according to the login state of the client application.
The finder windows retrieve their data from the database, although they do not display it yet (of course not -- I haven't even instantiated the list box, much less bound it to the data.)
I did indeed induce bugs to the XMsg Loaders with my GUI login work, but that's been corrected. The code is actually much simpler now, and the changes are isolated to the mainlines for the loaders.
The debugging of the Swing GUI login/logout process took a long, long 8 hours (or more -- I'm not sure when I started, but it's 04h30 now. If I recall correctly I started before 21h00 yesterday.)
I also fixed a rather critical bug in the XMsg layers -- the request/response message headers weren't using the proper schema names, so the parsers were failing for CFCrm 2.0. Sorry 'bout that. At least it's fixed now.
That's it for the messaging layers and the core Java layer enhancements in order to support client-server and distributed logins in a very insecure and pointless fashion. But it will serve its purpose of letting me log in through the GUI, log out of it, and access the database objects in between so I can start populating the windows with data.
Next up: wiring the new methods to the Swing GUI login/logout code, and dynamically adjusting the menu item permissions accordingly. Maybe I'll even auto-close all the JInternalFrames when you log out. I should.
The X(ml)Msg requests RqstLogIn and RqstLogOut have been defined and their handlers coded and wired to the schema parser.
The server side pieces of the equation for log in and log out are in place as well, so the next piece I have to work on is coding the response handlers and getting them wired to their schema parser.
The GUI will need to be enhanced such that Login is only enabled if no authentication object exists, and Logout is only enabled if an authentication object *does* exist. I'll need to implement a custom dispose() for the desktop as well that logs out if an authentication object exists before it invokes the super.dispose().
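The custom dispose() idea might look like the following. This is a sketch only: ISessionAuth is a stand-in for the real authorization object, and it's shown on a JInternalFrame to keep the example lightweight, though the same pattern applies to the desktop frame.

```java
import javax.swing.JInternalFrame;

public class DesktopShell extends JInternalFrame {
    // Hypothetical stand-in for the session authorization object
    public interface ISessionAuth { void logOut(); }

    private ISessionAuth auth;  // null when no one is logged in

    public void setAuth(ISessionAuth a) { auth = a; }
    public boolean isLoggedIn() { return auth != null; }

    @Override
    public void dispose() {
        if (auth != null) {
            auth.logOut();  // release the server-side session first
            auth = null;
        }
        super.dispose();
    }
}
```

The isLoggedIn() flag is also what the Login/Logout menu items would key their enablement off of.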
An extended connect method has been added to the ISchemaSchemaObj and SchemaSchemaObj interface and implementation which takes the cluster name, tenant name, and secuser name as arguments. If a connection to the database is established, this information is used to establish the authentication for the session automatically, including the creation of a SecSession object.
Now that I've worked out what the method needs to access, I can proceed with defining the request and response messages for a login. I need to do that because even with a client-server implementation, I'm relying on X(ml)Msg internally to enforce transactional interfaces on all data access by the client. This serves to dramatically improve the deployment scalability of a client-server application, and provides testing of the code that would be deployed as separate client and server implementations bridged by an XML message transport.
Obviously in the long run I'll set up a Java Servlet interface to run the server, and use a virtual web client API to issue the XML requests and receive the XML responses over an SHTTP link. (Probably just regular HTTP at first, which means it's a security hole from hell.)
The default behaviour of the JInternalFrame closings was not set to dispose of the window instance, so the GUI prototype was leaking memory like a sieve, littering itself with initialized (but hidden) instances of windows. That has been corrected.
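Making disposal on close explicit is a one-liner per window. A sketch, with the factory method and frame flags as illustrative choices:

```java
import javax.swing.JInternalFrame;
import javax.swing.WindowConstants;

public class NoLeakFrames {
    // Request disposal on close explicitly, so closed child windows are
    // released for garbage collection rather than merely hidden.
    public static JInternalFrame makeChildWindow(String title) {
        JInternalFrame frame = new JInternalFrame(title, true, true, true, true);
        frame.setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
        return frame;
    }
}
```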
A login window has been laid out and played with, and it's ready to be wired to the instance methods for performing a login and initializing the security data. I'll probably have to add some new APIs and schema messages to deal with logins. I won't implement user passwords yet, but soon database passwords will be getting checked.
The main window's File/Close menu item has been wired to dispose of the main JFrame. Apparently disposal is the "normal" way of dealing with closing an application with Swing. I'd have thought there would be a prepare-to-close protocol that negotiates possible refusals to close, but I guessed wrong -- only JInternalFrames have such a protocol.
Over 3,000,000 lines of code have been added to CFCore 2.0 by this build. Not bad for a day's work. Along the way to getting that code built, a SQL Server bug was corrected.
Ok, so the GUI wasn't as ready as I thought it was -- I'm going to have to remanufacture and rebuild.
I realized that while I had written the rules for handling Lookup references in the attribute panels, they weren't being displayed. That has been corrected -- about half the necessary rules had been written, but none of the cases were being properly invoked by higher-level rules.
Seeing as I was making such a major change to the GUI and would need to remanufacture sooner rather than later to show the changes, I decided to make another GUI change. The lookups, parents, owners, masters, and containers are no longer displayed as elements of a given object. Instead, only data which is a child, detail, or component of the object is presented.
I may have to modify the element lists so that the Add functionality is only presented if the critically necessary owner relationship can be satisfied, or barring an owner relationship, a container relationship. Master-Detail and Parent-Child relationships require that there be an owner for the data (which isn't usually an issue as most data is owned by the Tenant object, which dispenses the identifiers within its data subset.)
While cleaning up the old *SubDomai* and *_subdom* files after remanufacturing the code, I realized there was a naming error in the Domain object of the internet model. This has been corrected and I'm remanufacturing again.
The GUI prototype is now ready for use as a tool for discussing business application models with end users as you work through the design of an object hierarchy and its associated container relationships. Please note that the GUI will never be suitable for fielding to end users -- it's meant for database surgery by administrators and maintenance staff, and for discussions of the object models during an iterative design process. Real end-user GUIs require something computers lack: creativity and artistry.
The buttons for the CFJReferences in the attribute panels are wired, and bring up the correct windows. I had thought they were displaying the wrong objects, but I was misunderstanding my own model (it's past 03h00 here, so I'm pretty tired and need to call it a day before I screw up anything too badly because I'm not thinking clearly.)
When I load XSDs, I think I'm using rooted resource paths that start with a / so that the path of the class doesn't get considered. Let's try that for loading the icons.
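The rooted-path idea in code: a resource name starting with '/' is resolved from the classpath root, ignoring the package of the loading class. The /images directory and file names here are hypothetical.

```java
import java.net.URL;

public class IconLoader {
    // A leading '/' makes getResource() search from the classpath root
    // instead of relative to the anchor class's package.
    public static URL findIcon(Class<?> anchor, String name) {
        return anchor.getResource("/images/" + name);  // e.g. /images/view.png
    }
}
```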
It still wasn't working, so now I've put the images directory in every place I could think of -- at the project level, under the src directory, and under the directory containing the class files themselves.
Surely getResource() will find at least *one* of those files!
The images weren't being found, so based on Oracle's documentation, I've moved the images subdirectory to the CFLib/Swing directory in the source tree. Hopefully they'll get found now.
The CFJReference should now set the icons for its buttons by modifying the actions that are passed in to it. You should not try to override the default icons by specifying them in your actions, though of course I can't *stop* you from abusing things that way if you so choose. Don't specify any action attributes, just the event callback to be used for processing.
Now I need to go through all the CFCrm code and wire up the action callbacks for the references to the view/edit windows and to the pickers.
The view/edit windows now have menus that include the delete, save, and cancel/close operations. The delete menu items bring up the appropriate confirmation window, whose Ok and Cancel buttons are wired to just close the window for now (without closing the parent window, which should happen in the case of a delete. Not sure how I'll implement that at the moment -- there are a few historical options from passing window handles around to issuing custom event messages, depending on the GUI toolkit involved.)
I'd like to wire the launching of the picker windows and detail views to the CFJReference widgets before I consider this prototype to be done "enough" for another remanufacturing batch. (The next remanufacturing will be a pain because I removed the SubDomain from the CFInternet model, so pretty much *every* project will require surgery to remove old files. Here's hoping I can do it through the git bash shell provided by TortoiseGit.)
There have been significant enhancements made to the manufactured GUI prototype, ranging from the maximum field sizes now configured into CFLib 1.11.12249 for string/token/nmtoken/nmtokens fields to proper component creation and layout in the delete confirmation windows (which aren't wired yet so you can't see those changes just yet.)
The splitters in the view/edit windows are now resizable, close buttons have been enabled, and the attribute panels throughout are now displayed in JScrollPanes instead of as regular panels.
It's really coming along quite nicely. :)
The string fields now calculate their maximum size based on the MaxLen attribute so that they don't splay across the whole screen in the GUI prototype. Up until now I haven't been specifying a maximum size as I have for the fixed-width formatted fields, which has made the GUI rather ugly to date.
The maximum width of text fields are artificially limited to 60 columns.
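The sizing rule is trivial but worth pinning down: the column count follows the model's MaxLen, capped at 60. The helper name displayColumns is illustrative.

```java
public class FieldWidth {
    // A string field's display width in columns follows its MaxLen,
    // artificially capped so long fields don't splay across the screen.
    static final int MAX_COLUMNS = 60;

    public static int displayColumns(int maxLen) {
        return Math.min(maxLen, MAX_COLUMNS);
    }
}
```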
Sub-objects are usually lists of objects, so I've modified the GUI prototype code to display a JMenuBar for the ListJPanels instead of an actual list box. This lets you add the referenced objects, opening up their View/Edit windows, from which you can navigate further down the tree.
You can fully explore the hierarchy of add/view/edit windows now, though I haven't wired delete functionality anywhere yet. I do think the add/view/edit windows should be where the delete confirmation gets wired, so you have to look at an object *before* you delete it rather than being able to delete it directly from a list box row (i.e. the functionality would not be part of the list panel, but the view/edit internal frames.)
Maybe I'll work on that next. I *think* that would leave me with the means to navigate through all the prototyped widgets save for list boxes, which I haven't even *started* to flesh out yet. They just display the same attributes as a view/edit panel, so they're not all that interesting to me at this time.
The widgets in CFLib 1.11.12242 now implement calculation of maximum and minimum sizing. The maximum sizes are respected by the manufactured code, so the numeric fields (for example) no longer take up the full width of the attribute panels. However, because the attribute panels are embedded in a scrollable panel, they don't seem to get told to doLayout() when the scrollable panel is stretched wider or taller than the viewed panel.
I'll have to think on this -- maybe I can probe the parent widget to see if it's a scrollable panel, query the scrollable panel for its size, and adjust my attribute panel width with a forced doLayout() if the scrollable panel is wider than the minimum size specified for the attribute panel.
Puzzles and postulations. There are always quirks and misbehaviours with every GUI toolkit I've ever used (and there are a fair number of those under my belt.)
The widgets now estimate minimum and maximum sizes for fixed-format fields like numerics, and a calculated minimum in the case of text fields, though no maximum is configured for text fields. The fixed format date-time fields also are constrained.
The layout of the CFJReference widget has been completely redone based on what I learned laying out the attribute panels for the GUI prototype. It should deal with resizing properly, and display the two buttons on the right of the text field that is used to display the name of the referenced object.
The manufactured code for laying out the attribute panels for a table has been completely reworked. No layout manager is used any more; instead I overload the doLayout() method and calculate the repositioning of the attribute widgets manually.
Next I'll need to modify the calculation of the widget widths so that the numeric fields and booleans and other such short fields don't take up a full 800 pixels of width. :)
I also need to apply what I've learned about coding layouts to the CFReference widget, which is currently using a grid bag layout. There is absolutely no need for such complexity with such a simple widget, and it's not working the way I want it to anyhow.
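A sketch of the layout-manager-free approach: null layout plus a doLayout() override that positions each label/editor pair by hand. The row height, label width, and the label-then-editor add-order convention are illustrative values, not the manufactured code.

```java
import javax.swing.JComponent;
import javax.swing.JLabel;
import javax.swing.JPanel;

public class AttributePanel extends JPanel {
    static final int ROW_HEIGHT = 25;
    static final int LABEL_WIDTH = 150;

    public AttributePanel() {
        setLayout(null);  // take over positioning entirely
    }

    public void addRow(String caption, JComponent editor) {
        add(new JLabel(caption));
        add(editor);
    }

    @Override
    public void doLayout() {
        // Components were added in pairs: label, editor, label, editor, ...
        int y = 0;
        for (int i = 0; i + 1 < getComponentCount(); i += 2) {
            getComponent(i).setBounds(0, y, LABEL_WIDTH, ROW_HEIGHT);
            JComponent editor = (JComponent) getComponent(i + 1);
            // Respect the widget's maximum width so short fields stay short
            int w = Math.min(editor.getMaximumSize().width, getWidth() - LABEL_WIDTH);
            editor.setBounds(LABEL_WIDTH, y, Math.max(w, 0), ROW_HEIGHT);
            y += ROW_HEIGHT;
        }
    }
}
```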
The tabs of the subobjects and subobject lists have been relabelled, and the attribute tabs for singleton references are displayed properly.
The manufactured code for the desktop of the Swing application now has the actions wired to launch instances of the find windows. I think I want to restrict the find windows to a single instance, though, and just do a show() on any existing instances instead of creating multiples of these core user interfaces.
The main window's menu bar is now initialized and has some skeleton action classes defined and instantiated in order to initialize the menus. I may need to hang on to references to the actions rather than references to the menu items, though, as it seems the enable state is kept by the action not the menu item itself. One step at a time.
The file menu includes a Find submenu that lists all the unrooted classes in the application model, which always includes the Audit Actions, Cluster, Tenant, and various security objects.
The actions don't actually *do* anything yet.
The default connection that was established by the CLI is no longer attempted. Instead that is going to be handled by a login window, which will also retrieve the security session for the GUI and establish the authentication parameters for the backend database connection. For now I'm going to rely on the CLI arguments for the cluster, tenant, and security user information as I do with the Loaders, but in the long run that needs to change such that the cluster is specified in the configuration file and you are forced to choose a tenant before proceeding after authenticating.
The fun part will be restricting the list of tenants to those where your user id appears in the authentications for the tenant's objects as an indicator that you're allowed to access their data.
But one step at a time. I'm getting ahead of myself.
The main desktop window now has a hook for the schema object interface, which is initialized by the setup of the CLI for the PostgreSQL Swing testing main.
The method connect( username, password ) has been added throughout the interfaces and implementations of the Java layers. This method is meant to be used to establish client-server logins, where the database connection provides the identification of a user and restricts their table access.
Until now, I've been focused on the server database connection pool environment, but that's not viable to rely on for the client-server GUI prototyping I'm working on now (even though security will still be largely implemented in the client, the database admin now has the ability to provide more granular restrictions in the server itself.)
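At the JDBC level, a connect( username, password ) boils down to handing the user's credentials to the driver, so the database connection itself carries the user's identity and the server can apply per-user table grants. A hedged sketch only; the URL format is just a PostgreSQL example and the wrapper name is made up.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ClientConnect {
    // e.g. url = "jdbc:postgresql://localhost:5432/mydb"
    public static Connection connect(String url, String username, String password)
            throws SQLException {
        return DriverManager.getConnection(url, username, password);
    }
}
```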
The Java Swing PostgreSQL layer relies on the XMsg implementation to detach the processing of application requests from the front end client schema, which is ready to be attached to an instance of a Swing desktop window for the application. I haven't written that code yet; I was focusing on migrating the plumbing from the PostgreSQL XMsg test main.
The code does build, though it does nothing except attach to a database and disconnect from it right away, so you might not want to bother downloading this development release if you already have one that's been shifted to the git servers.
The entire build of 1.11 has been remanufactured and refreshed using 1.10 and then rebuilt and repackaged using the refreshed CFLib/CFCore 1.11.12225. The code is now guaranteed to be in sync with the git repositories.
Just to make sure everything is in sync with the new git repository, the code has been refreshed and rebuilt and will be used for all subsequent builds and packaging of 1.11 and 2.0 executables.
Git doesn't provide version numbers the way that Subversion does, so I'm artificially numbering builds as 10800 initial releases plus 1420 current releases, give or take a fudge factor.
The new GitHub.com repositories are all located at https://github.com/msobkow, and are named as follows:
    htdocs.git - The HTML documentation for MSS Code Factory
    net-sourceforge-MSSCodeFactory-CFLib-1-9.git - MSS Code Factory CFLib 1.9
    net-sourceforge-MSSCodeFactory-CFLib-1-11.git - MSS Code Factory CFLib 1.11
    net-sourceforge-MSSCodeFactory-CFCore-1-10.git - MSS Code Factory CFCore 1.10
    net-sourceforge-MSSCodeFactory-CFCore-1-11.git - MSS Code Factory CFCore 1.11
    net-sourceforge-MSSCodeFactory-1-10.git - MSS Code Factory 1.10, used to produce 1.11
    net-sourceforge-MSSCodeFactory-1-11.git - MSS Code Factory 1.11
    net-sourceforge-MSSCodeFactory-CFCore-2-0.git - MSS Code Factory CFCore 2.0
    net-sourceforge-MSSCodeFactory-CFSecurity-2-0.git - Code Factory Security 2.0
    net-sourceforge-MSSCodeFactory-CFInternet-2-0.git - Code Factory Internet 2.0
    net-sourceforge-MSSCodeFactory-CFBam-2-0.git - MSS Code Factory Business Application Model 2.0
    net-sourceforge-MSSCodeFactory-CFCrm-2-0.git - Code Factory CRM 2.0
    net-sourceforge-MSSCodeFactory-CFAcc-2-0.git - Code Factory Accounting 2.0
    net-sourceforge-MSSCodeFactory-CFDbTest-2-0.git - Code Factory Database Testing 2.0
    net-sourceforge-MSSCodeFactory-CFEnSyntax-2-0.git - Code Factory English Syntax Parsing Objects 2.0
    net-sourceforge-MSSCodeFactory-CFFreeSwitch-2-0.git - Code Factory FreeSwitch 2.0
    net-sourceforge-MSSCodeFactory-CFGCash-2-0.git - Code Factory GNU Cash User Interface for CFAcc 2.0
    net-sourceforge-MSSCodeFactory-CFGui-2-0.git - Code Factory GUI Modelling 2.0
    net-sourceforge-MSSCodeFactory-CFUniverse-2-0.git - Code Factory Universe 2.0
SourceForge has corrupted my subversion archives twice in a year. I'm switching to GitHub for the code repositories.
You didn't think I'd miss the opportunity to push out a 420 release, did you? :P
With CFLib 1.11.1418, the Date, Time, Timestamp, TZDate, TZTime, TZTimestamp, String, Token, NmToken, and NmTokens text fields have been fleshed out with value getters and setters, validators, and so on.
The latest version of CFLib modified the constructor interface for CFJNumberTextField to incorporate the digits and precision, which are required to determine the values for the field formatting and min/max defaults based on those arguments. The rules for the Swing support have been updated accordingly by this release, which has also been rebuilt using the latest CFLib 1.11.1408.
The formatted fields for the various numeric data types have been fleshed out and should do appropriate formatting, validation, and range checking (though, of course, none of the code has been tested yet.)
The alignments have been specified for the other data types.
The date-time types will format their displays, but they don't have getters and setters yet, just formatter initializations. Maybe I'll work on those getters and setters tomorrow; it's past midnight here and time to call it a night.
The widget hierarchy for the Swing panels produced for displaying the attributes of an object have been reworked and now incorporate the new TextField specializations provided by CFLib 1.11.1375.
While there is no "meat", this provides the inheritance framework that will be fleshed out to implement behaviours like error highlighting, range checking, field formatting, text alignment within the field, and so on.
Some of the new widgets, such as the Text and Blob specializations, will never be used for editing, only for displaying a preview of the value. You really shouldn't use a JTextField for editing multi-line Text, but this is a work in progress and I haven't decided how I'm going to deal with text fields yet (i.e. do I embed a multi-line editor, or pop a field editor window for the text and only display a preview of the first line in a normal text field with a button for bringing up the editor?)
There are now JTextField subclasses for wiring default behaviours for the type-specific data displays used in the GUI. Each data type supported by MSS Code Factory modelling has a corresponding CFJ[Xxx]TextField type.
That's it for the application-specific windowing objects that need to be organized into an MDI Swing GUI. There are still some supporting GUI objects to be added, including login/logout and console log internal frames, but at this point I haven't decided whether they're going to go in CFLib or be manufactured. We'll see how things go as I start fleshing out this sketch of code.
Still, this is a good point for remanufacturing the 2.0 projects and bringing them up to date.
The stubs for the Finders have been added.
The general flow is that you open the main window, and have access to a login window. Logging in selects your Cluster and Tenant for the session.
From there, you now have menu items for Find-ing the top level items of the Cluster, Tenant, and System data, which opens the appropriate Finder windows.
You can then view one of the objects (or add/delete them) in the Finder window, which opens the ViewEdit JInternalFrame for the object you selected. Similarly, from the sub-element bindings of the object's ViewEdit frame, you can manipulate the sub-objects of that object, and thereby follow the containership hierarchy as your navigation through the system data.
Pickers get brought up to establish relationships between objects and other objects as required.
I should still stub in delete confirmation windows, but I think I'm almost done sketching out the set of Java files that are going to be needed for the Swing GUI.
The main MDI parent window is a JDesktopPane attached as the content of a JFrame. It does not follow the common interface that the table windows do, because it has no focus object like they do. Instead, it's just a container for keeping track of all the MDI child windows that get spawned during a session. The only attribute it shares in common is that it takes a schema object as an argument, taking ownership of that schema object and using it to construct its various element internal frames.
The first outlines of the Picker and ViewEdit JFrames have been added to the Swing implementation.
The tool has been rebuilt to use CFLib 1.11.1275, which adds a Swing package and a placeholder implementation of a CFJReference widget. The rules for Swing support have been updated to incorporate this new package, and to use it instead of a TextEdit for displaying reference objects.
The CFJReference placeholder will at least display the qualified name of the referenced object in its text field, but the wiring for the View and Pick reference buttons hasn't been done yet. They don't even have icons so far. :)
'tis but a placeholder so I can keep swatting away at the Swing GUI prototype code.
The PostgreSQL rules were not properly specifying range constraints for NumberDef entries if they were referenced by a TableCol such as with the CFAcc.ACMoney type.
Added CFAcc.AccountContact which optionally binds a CFAcc.Account to a CFCrm.Contact and CFCrm.ContactList. (ContactList is only there because of the way contacts are resolved by name -- not including this parent object results in name resolution errors.)
Added CFAcc.AccountConfig with the required lookup attribute DefaultCurrency which defaults to CAD, and the optional lookups Cust(omer)ContactList, Emp(loyee)ContactList, and Vend(or)ContactList as required to limit the display and access of the appropriate lists within the manufactured accounting GUI.
As a general concept, I'll want to add ACLs to the contact lists at some point because there should be finer-grained security control of the tenant's contact lists than merely to decide an all-or-nothing table access.
I'll think about it. I'm in no hurry.
The new objects added to the CFAcc and CFGCash 2.0 projects now build properly.
The primary change with this release is the addition of the Account and AccountEntry objects to the CFAcc 2.0 model, and their wiring in as a hierarchy of data owned by the Tenant.
Most of the new object model is from memory of the many bank and corporate accounting systems I've worked on over the years, give or take a couple of ideas I have for currency-agnostic accounting, and a couple of reminders from GNU Cash (I almost forgot to allow for splits in the model.)
That adds about 118,000 or so lines to each of CFAcc, CFGCash, and CFUniverse. Roughly 360,000 new lines of code in under a week, during which I worked a grand total of 4 hours, the rest of it being computer time or just down time for migraines. :P
The CFDbTest 2.0 test suite now incorporates fragments of code for the complex object tests (the addition of a DataCol to the IndexCol specifications) which exercises the named lookup resolutions with qualified names. The named lookups themselves have been debugged while testing this code.
Note the Tenant objects only probe the Tld sub-object when doing qualified name resolution. The whole purpose of the named resolutions was to deal with internet-aware data locations, so only internet data addresses are searched within a Tenant. Note that there is also an underlying assumption that a name will be unique amongst all subobjects of a given scope, but the code does not enforce that restriction so you could easily create data models and structures that don't respond to qualified name searches properly.
The new methods for getting the qualified and full names of objects would have entered infinite loops because they were not navigating the object scope hierarchy properly. If you are planning on using those new APIs, you should download this build and remanufacture your code.
Be aware the CFDbTest 2.0 does not build with the code produced by this release because it's my work-in-progress for adding some new features to the lookup resolutions for the manufactured structured XML SAX parsers (the ones which are used for loading test or initialization data as opposed to client-server communications handled by the X(ml)Msg layers.)
The Index definition in a CFBam model was referencing the SchemaDef. While this is nice to have from a programming standpoint, it's unresolvable because the Index has no idea what its containing Schema's scope object is, and you need to know the scope object in order to be able to resolve the name.
Previously I'd had a hack in place that resolved the Schema name from the Tenant scope, but that was always a hack and should never have been done. What can I say -- at the time I just wanted the build errors to go away and wasn't putting much thought into the root cause of the error.
I'll have to restart the manufacturing for CFBam and CFUniverse again. Perhaps tomorrow I'll have a good build of CFBam -- there were only 8 errors left in the build, all of them in the SAX XML Parser file for the Index objects.
The SchemaDefUNameIdx for both the CFBam and CFDbTest models was not including the container's DomainId in the key, resulting in build time problems for the new name resolution code which presumes that a LookupIndex has the same initial key attributes as the container's primary key, followed by the name of the object.
All other 2.0 projects compiled successfully. Once the CFBam, CFDbTest, and CFUniverse projects have been remanufactured and built, I'll push the whole series to distribution.
I will *not*, however, be compiling CFUniverse. Projects that big are just too flaky under Eclipse on my little old machine. Maybe if I had more memory and CPU...
The accessor verb "DefaultVisibility", the object attributes and accessors, XSD changes, SAX parser changes, and everything else I can think of that gets affected by adding an attribute has been updated. DefaultVisibility defaults to true if not specified, and will be used to configure the initial widget visibility in the Swing layer.
The attribute applies to all column types deriving from Value, as well as Table, Index, Relationship, IndexCol, RelationCol, and Chain. These are all the objects which map directly to potentially visible widgets. Not *quite* all of them, so I didn't add it to the Any definition (I'm trying to get away from using Any as a quick and dirty cheat. There certainly was no shortage of work adding this attribute, especially a lot of copy-paste code for the SAX business application model parser.)
So, like, 11-12-13, eh? :P :P :P
The schema documents referenced by including a schema reference in a project are now loaded by the SAX parsers, and the root document parsers have been modified to recognize the various document names inferred from the referenced schemas.
In other words, you should be able to import a CFSecurity 2.0 compatible structured document containing lookup initialization data by running the SAX Loader for any project which references CFSecurity 2.0, including CFDbTest 2.0 (which I'll use to test this theory of mine about what the code should be capable of now.)
To be actually useful, the X(ml)Msg Rqst/Rspn parsers need to coordinate a bit better, extract the document name from the root element, and use that extracted name to populate the header and document tags prepared by the request parser as responses to the client. In other words, if a parser sees a CFSecurityRqst document, it should process the request and tag the response as a CFSecurityRspn document instead of, say, a CFCrmRspn document.
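The Rqst-to-Rspn coordination described above boils down to deriving the schema prefix from the request document's root element name. A minimal sketch, assuming the naming convention the notes describe (document names ending in "Rqst"/"Rspn"); the class and method names here are invented for illustration:

```java
// Hypothetical helper: given a request document name like "CFSecurityRqst",
// derive the schema prefix and the matching response document name, so a
// CFSecurityRqst is always answered with a CFSecurityRspn rather than,
// say, a CFCrmRspn.
public class XMsgDocNames {
    public static String schemaOf(String rootDocName) {
        if (rootDocName != null && rootDocName.endsWith("Rqst")) {
            return rootDocName.substring(0, rootDocName.length() - "Rqst".length());
        }
        throw new IllegalArgumentException("Not a request document: " + rootDocName);
    }

    public static String responseDocFor(String rootDocName) {
        return schemaOf(rootDocName) + "Rspn";
    }
}
```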
This will allow custom applications to be written serving a vertical model slice/market and have those custom clients run successfully with any project that references the schema definition used by the custom client to implement its base code.
When the various Java* tags for Schema customization attributes are processed, they now walk the referenced schemas and produce any code they specify as well as the code specified by the top level schema being manufactured.
There is no need to similarly enhance the Table customization attributes because there is no way to enhance and extend the table customization attributes in a referencing project schema. Only the schema which originally defines a table can specify the customization attribute values for that table.
This sexy release of MSS Code Factory brings you more joy and pleasure with some typos cleaned up for the manufactured XSDs, which now all pass validation under Eclipse. I hadn't noticed that there was a typo in the name of the Java/XSD customization variable expansions, which was resulting in spurious text where none is allowed in an XSD specification file.
Now each of the imported projects has its XSD file written to the current target project's manufacturing run for the XMsg schema files. Next I need to modify the parser to load *all* of the XSD files inherited by the project, and wire the limbs of the root document parser to be triggered by the different document preambles used by the inherited message layouts.
I'll need to work a little magic so that the schema tag is inferred from the initial document object's name consistent with the referenced schemas, and used by the Rqst processor when formatting the Rspn message bodies. The very nature of the XML attribute header may have to change slightly so that it can be produced/inferred from the document object name in a response body.
The net effect of the gobbledygook is to allow the XMsg request and response parsers to handle messages using downlevel/referenced project protocols instead of presuming all the additional object interfaces specified for this project. In other words, a project written to the Crm 2.0 model can talk to your project's server that incorporates the Crm 2.0 model. It's a means of decoupling the specific document schema level from the messaging protocol, while still maintaining valid document specifications and values.
This allows the overloaded custom code to access the current manufacturing context at runtime, instead of being restricted to only the data that is present in the model itself. This is necessary in order to process the ManufacturingSchema reference properly.
The ManufacturingSchema reference is calculated by the GenContext directly by navigating through each of the prior contexts and probing them for their outermost SchemaDef. As the prior contexts are chased, the SchemaDef's they reference override the outermost calculated SchemaDef. So GenContext.getManufacturingSchema() can be quite expensive to run and should only be used when absolutely necessary, such as when forcing the manufactured XSDs imported from other projects to be produced into THIS project's directory tree instead of that of the defining project/schema.
I'm attempting to force the project definition to always be that of the top-most SchemaDef that isn't nested within another SchemaDef. This should force the naming of the XMsg Rqst/Rspn XSDs to use the name of the project that referenced the sub-schema for building the file path names.
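The context-chasing walk described above can be sketched roughly as follows. The class and method names mirror the notes (GenContext, getManufacturingSchema()), but the internal structure is an assumption; the point is that each prior (outer) context's SchemaDef overrides the calculated result, so the top-most definition wins:

```java
// Hedged sketch of GenContext.getManufacturingSchema(): chase the chain of
// prior contexts, letting each outer context's referenced SchemaDef override
// the result. SchemaDefs are represented as plain names for illustration.
public class GenContext {
    private final GenContext prev;   // the prior context, or null at the root
    private final String schemaDef;  // SchemaDef referenced here, or null

    public GenContext(GenContext prev, String schemaDef) {
        this.prev = prev;
        this.schemaDef = schemaDef;
    }

    public String getManufacturingSchema() {
        String result = schemaDef;
        // Walk outwards; the outermost (top-most) SchemaDef wins, which is
        // why this can be expensive and should be called sparingly.
        for (GenContext c = prev; c != null; c = c.prev) {
            if (c.schemaDef != null) {
                result = c.schemaDef;
            }
        }
        return result;
    }
}
```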
The optional RefSchema was not being set properly after the referenced schema was loaded and resolved, so later code executions were failing because the reference was null.
The attributes have been laid out in 25-unit cell heights with a 5-unit row separator, a 200-unit label space, a 20-unit separator space, and an 800-unit text field space. It is presumed that this will fit nicely on a 1024x768 display window, which will be the default size instantiated as the user interface runs.
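The cell arithmetic works out as follows -- a minimal sketch assuming absolute positioning (the constant names are mine, not the tool's):

```java
// Layout arithmetic for the attribute rows described above:
// 25-unit rows with a 5-unit separator, and a 200 + 20 + 800 unit
// horizontal split, totalling 1020 units -- just inside a 1024-wide window.
public class AttrLayout {
    public static final int ROW_HEIGHT = 25;
    public static final int ROW_GAP = 5;
    public static final int LABEL_WIDTH = 200;
    public static final int GAP_WIDTH = 20;
    public static final int FIELD_WIDTH = 800;

    // Top edge of the given attribute row (0-based).
    public static int rowTop(int rowIndex) {
        return rowIndex * (ROW_HEIGHT + ROW_GAP);
    }

    // Left edge of the text field, after the label and separator.
    public static int fieldLeft() {
        return LABEL_WIDTH + GAP_WIDTH;
    }

    public static int totalWidth() {
        return LABEL_WIDTH + GAP_WIDTH + FIELD_WIDTH;
    }
}
```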
The accessors for the attributes have not been implemented yet, and the wiring of the defined widgets to the display has not been coded yet, either (as it requires the attribute accessors first.)
The ListJPanel now comprises a JTable customized to the data set referenced when constructing or refreshing the JTable over the SwingDataCollection captured by the DefaultTableModel, which provides the JTable with its data model. Applications manipulate the DefaultTableModel returned by getSchemaTableModel(), updating all runtime views of the modelled data simultaneously.
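The shared-model idea above can be sketched briefly: every view holds the same DefaultTableModel, so one addRow() notifies every registered listener (and thus every JTable built over it). getSchemaTableModel() is the accessor named in the notes; the rest of this class is an illustrative assumption.

```java
import javax.swing.event.TableModelEvent;
import javax.swing.table.DefaultTableModel;

// Minimal sketch: one DefaultTableModel shared by all runtime views.
public class SchemaTableModelDemo {
    private final DefaultTableModel model =
        new DefaultTableModel(new Object[] { "Name", "Value" }, 0);

    public DefaultTableModel getSchemaTableModel() { return model; }

    // Count INSERT events seen by a listener while rows are added -- any
    // JTable over this model would receive the same notifications.
    public static int countRowInserts(SchemaTableModelDemo demo, int rowsToAdd) {
        final int[] inserts = { 0 };
        demo.getSchemaTableModel().addTableModelListener(e -> {
            if (e.getType() == TableModelEvent.INSERT) {
                inserts[0]++;
            }
        });
        for (int i = 0; i < rowsToAdd; i++) {
            demo.getSchemaTableModel().addRow(new Object[] { "Attr" + i, i });
        }
        return inserts[0];
    }
}
```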
The various JPanels and JTabbedPanes of the user interface now all implement ISchemaTableJPanelCommon for their data spaces.
The ListJPanels and EltJTabbedPane objects now build properly. This is a good build for CFCrm 2.0.
The sub-object JPanels of the EltJTabbedPane are now instantiated and wired as tabs of the JTabbedPane when it is constructed. You should now be able to see the referenced objects and object lists getting constructed and displayed in the ViewEditJPanel.
The Swing panel infrastructure has been fleshed out considerably today, as I worked on the factory interfaces and implementations, replaced direct construction with the use of the factory interfaces, and rationalized the objects to their interfaces as much as possible.
The EltJTabbedPane now has a list of properly specified sub-tab JPanel instances, identifying the type of data to be displayed by each of the tabs. Next I need to wire the instantiator methods for the new attributes, which can then be overloaded by specializations of the generic GUI to deliver customized user interfaces by adding and placing additional widgets in the display, and by repositioning the widgets of an object as one wishes, including "hiding" them by placing them at 0,0 and making them invisible, with no attachments to the formatting/layout grid.
The supporting code for the new ICFLibAnyObj2 interface has been added to the Java code that gets manufactured by the system. CFCrm 2.0 has been tested and clean compiled, which implicitly tests CFSecurity and CFInternet 2.0 as well. The other projects have *not* been tested and may have errors.
You'll need to use CFLib 1.11.806 with the code produced by this build; it's included in the installer, or you can download it and the source zip for it separately.
ICFLibAnyObj2 extends the ICFLibAnyObj interface with the additional methods required for performing named lookup resolutions and named objects. The original ICFLibAnyObj had to be left alone, because MSS Code Factory itself depends on those interfaces and all hell would break loose if I added the new methods to that basic interface instead.
Eventually there will only be one interface, but for now the 2.0 code base will expect to use the ICFLibAnyObj2 interface, while the 1.11 code base will continue to use ICFLibAnyObj.
The method getObjName() has been added to objects when they are manufactured to ExtendCFCore. See earlier release notes for details.
The rules for the Swing support have had the Insets properly instantiated when constructing an EditJPanel, so all the code clean compiles now.
An error in the index copies has also been corrected such that it is no longer necessary to hack the MSSBamIndexColTableObj.
The binding HasQualifyingTable returning yes/no has been added, as well as the reference QualifyingTable. This should be all I need to implement the naming scopes, though I could be wrong. I have a sneaking suspicion there will still be another piece of the pie to be added, but I'm not sure what it might be yet. It's just a gut feeling.
The QualifyingName attribute has been added to the Table specifications accepted by the parser, although the accessors for the newly referenced objects have not been implemented yet. I've also restored the customization tweak to the MSSBamIndexColTableObj.java file that prevents an exception at runtime (at least one of the models has a duplicate name, but I've been unable to find it and this "hack" works around the problem.)
See earlier notes from today for 1.11.786 for the purpose of this new attribute.
The reference ObjNameColumn tries to obtain a column reference that can be used as an object's name. First it tries to find a LookupIndex for the table, and uses the last column referenced by that index if possible (in keeping with the way LookupIndex is processed by the manufactured code.) Failing that, it will try to find a column named "Name" (case insensitive) within the table's inheritance hierarchy. If that fails as well, the last column of the table's primary index is used as the name column. In the unlikely event that no primary index is specified, an exception is thrown (virtually all of the rule base presumes a table has a primary index anyhow.)
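The fallback chain above can be sketched in a few lines. The model types here are simplified stand-ins (plain lists of column names), not the real CFBam objects:

```java
import java.util.List;
import java.util.Optional;

// Hedged sketch of the ObjNameColumn resolution order described above.
public class ObjNameColumnResolver {
    public static String resolveNameColumn(
            List<String> lookupIndexCols,   // columns of the LookupIndex, if any
            List<String> tableCols,         // columns in the inheritance hierarchy
            List<String> primaryIndexCols)  // columns of the primary index
    {
        // 1. Prefer the last column of the LookupIndex.
        if (lookupIndexCols != null && !lookupIndexCols.isEmpty()) {
            return lookupIndexCols.get(lookupIndexCols.size() - 1);
        }
        // 2. Otherwise look for a column named "Name" (case insensitive).
        Optional<String> named = tableCols.stream()
            .filter(c -> c.equalsIgnoreCase("name"))
            .findFirst();
        if (named.isPresent()) {
            return named.get();
        }
        // 3. Otherwise fall back to the last column of the primary index.
        if (primaryIndexCols != null && !primaryIndexCols.isEmpty()) {
            return primaryIndexCols.get(primaryIndexCols.size() - 1);
        }
        // 4. No primary index at all: treated as an error, as in the notes.
        throw new IllegalStateException("Table has no primary index");
    }
}
```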
Next I want to add a QualifierContext attribute to tables, which will specify the name of the object in it's container hierarchy which forms the root of a qualified name. A qualified name is a dot-name which will be searchable within the qualifying context (eventually.) This will allow me to add in features for named resolutions in the manufactured SAX parsers, so that you can do things like specify Table.Index references or Table.Column references in the XML documents. Eventually I want to be able to manufacture a SAX parser that is as powerful as the one I've written by hand for processing the BAMs -- I really don't want to maintain the SAX parser by hand for 2.0.
Once I've added the qualifying context, I'll have the manufactured code add the artificial methods getObjName(), getQualifiedObjName(), and getFullObjName(), where the full obj name refers to a dot-path name starting from the root of the schema's object hierarchy document.
With those pieces in place, I'll be ready to add rules to manufacture the generic object function getNamedObject(), which will presume that the name it receives as an argument is a qualified name from within the scope of the object being probed. Thus a leaf object that has no XML components will always return null from this method, while objects that have XML components will probe their object lists for the next named component, trim the name, and if the name is not empty yet, recursively call getNamedObject() on the sub-object with the trimmed name. This will be required in order to process the qualified names I intend to add to the SAX parser.
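The recursion described above can be sketched as follows -- a minimal illustration, assuming simple dot-separated names and a map of sub-objects keyed by name (the real manufactured code would probe typed object lists instead):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch of the recursive qualified-name resolution: getObjName()
// and getNamedObject() are the methods named in the notes; everything
// else here is an illustrative assumption.
public class NamedObj {
    private final String objName;
    private final Map<String, NamedObj> subObjects = new LinkedHashMap<>();

    public NamedObj(String objName) { this.objName = objName; }

    public String getObjName() { return objName; }

    public NamedObj addSubObject(NamedObj child) {
        subObjects.put(child.getObjName(), child);
        return child;
    }

    // Resolve a dot-separated qualified name relative to this scope.
    // A leaf object (no sub-objects) returns null for any non-empty name.
    public NamedObj getNamedObject(String qualifiedName) {
        if (qualifiedName == null || qualifiedName.isEmpty()) {
            return this;
        }
        int dot = qualifiedName.indexOf('.');
        String head = (dot < 0) ? qualifiedName : qualifiedName.substring(0, dot);
        String tail = (dot < 0) ? "" : qualifiedName.substring(dot + 1);
        NamedObj next = subObjects.get(head);
        if (next == null) {
            return null; // no sub-object matches the next name component
        }
        return tail.isEmpty() ? next : next.getNamedObject(tail);
    }
}
```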
I'm not sure at what point I'll be modifying CFLib's ICFLibAnyObj specification to incorporate these new standardized methods, but it'll have to be done before the SAX parser support can be coded.
The skeleton of the java+swing layer has been sketched out, though all of the conceptual objects of the GUI do not exist yet, much less are the details of the panels fleshed out. But it compiles properly, so I figured I'd stick with the good old "release early, release often" philosophy that I've followed to date. However, I do not intend to make any of the GUI support packagings the default SourceForge download until much further down the road. For now, I'll leave the PROD release (735) as the default download. But if you want to see what I'm working on, you can download the current release instead -- it just *adds* new code; it does not alter the core that was delivered with 735.
Added the ListJPanel, which displays the attribute values of a list of object elements which derive from TableObj. The Container, if present, is listed as the first data column. Then a group of the Parents, a group of the Masters, the object's optional name attribute, and then the raw type data attributes, and finally the Lookups in the rightmost columns. For each of the relationship types that are embedded as columns, an ellipsis button is displayed to the right of the optional name for the referenced object which brings up a view of the referenced object.
The row header cell of each listed object displays an ellipsis button which is used to bring up a view window over the object.
If a ListPanel is a subelement of a window which is in edit mode, then the JPanel containing the list of elements also displays buttons for adding new objects in the home cell of the list (0,0), and adds sheet-of-paper-with-pen edit buttons to the row headers of the listed objects. The ellipsis buttons for the Parents, Masters, and Lookups are joined by binocular buttons which are used to bring up object chooser dialogues.
Double-clicking on a data cell outside of an action button brings up the view window for the object associated with the row.
Ladies and Gentlemen, for the first time in my life, I consider a piece of software to be ready for release to production, rather than simply having reached an artificial due date set by some marketing department or sales representative. This code has been tested and exercised as thoroughly as I know how to, and I hereby unleash it to production.
I found an error in the CFLib code that was resulting in a null pointer exception while trying to report a null exception that had been caught from the Xerces libraries. While the root cause of the exception being thrown by the CFDbTest 2.0 XMsg tests has not been identified, the programs no longer crash due to the secondary null exception, but instead properly report on and ignore the first exception, proceeding thereafter to run normally.
I have no idea why this would have cropped up now -- obviously I've been running the CFDbTest 2.0 suite from the command line all along, and I never had problems with the XMsg tests before now. However, there was a new OpenJDK release installed recently, and I had Java actually crash hard (trying to dump core) earlier today, so perhaps some fundamental flaw has been introduced into the JIT optimizer and I just happen to have encountered some nasty side effects being caused by that optimization. It wouldn't be the first time something like this has happened to me during all these years of Java development since the 1.0 JDK.
Regardless. I will be rebuilding and repackaging all of the CF* 2.0 projects now that I've got the problem sufficiently resolved to move ahead. It's ugly to have to report that null exception thrown by Xerces, but such is life. Life is rarely pretty, and code tends to be much uglier than life.
There is a problem with the way getLog() is used during the initialization of a parser which could result in null pointer exceptions while reporting errors and exceptions, due to race conditions during initialization that seem to differ between running under Eclipse and running from the command line. At least, I sure hope this fixes the problem I'm having.
There are some weird things happening with the code right now. Things that worked don't any more, so rather than chase my tail too much I'm going to start from ground zero and rebuild and repackage all binaries from their raw source. That includes the CF* 2.0 projects as well.
I'm getting a null pointer exception out of CFLib's XML core when trying to run the XMsg tests on the command line. This does not happen when running under Eclipse, so I'm thinking there has to be some problem with the library image. I'll rebuild and repackage the libraries, then rewire and rebuild all the projects to reference the updated jars.
Why the hell does shit like this always have to crop up at the last minute?
CFDbTest 2.0 has passed its confirmation tests for PostgreSQL, DB/2 LUW, and MySQL. I didn't break anything with those databases since they were last tested. I did get a Java crash, which I haven't seen in a long time, but the test ran successfully on a rerun, so it's some glitch in OpenJDK for Debian as of this date (java version "1.7.0_55", OpenJDK Runtime Environment (IcedTea 2.4.7) (7u55-2.4.7-1~deb7u1), OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode).)
I'm installing the CFDbTest 2.0 database instances on the Windows laptop for Oracle, SQL Server, and Sybase ASE, but that's going pretty slowly as the box is bogged down with manufacturing CFBam and CFUniverse. I won't be able to *run* those database confirmations until tomorrow after the manufacturing is done -- the laptop tends to crash if I try to run 4 Java jobs at the same time (my guess is it wants to keep the whole Java runtime in memory, because it's a 4GB box and I tend to run with 1GB Java instances. Three -- runs like a charm. Four -- not so good. Hard crashes and freezeups.)
Reducing the sizes of the varchars for DB/2 LUW did not address the issue of the record size exceeding the page size. However, I got off my lazy posterior and learned how to specify "PAGESIZE 16384" for both the database and the tablespace when you create them.
DB/2 LUW will then load the CFUniverse schema successfully.
DB/2 helped me to identify a duplicate relationship between an EnumTag and an EnumDef in CFBam. With this duplicate reference removed, the CFUniverse database schema should now install cleanly on all database engines that are supported (give or take the limitations of MySQL, about which nothing can be done -- you'll simply have to ensure your models abide by MySQL's restrictions if you choose to use that database.)
Rather than waiting for the runs to finish, I went ahead and did the DB/2 LUW and Oracle testing as soon as their database creation scripts were manufactured by 1.11.644, and also re-tested MySQL to verify that I'd corrected the problems it had with last night's changes.
Oracle found a couple of column names that were too long, a VARCHAR2(8000) that had to be converted to TEXT in the models, and a pair of duplicate relation names for the IndexCol table (it was using relncol prefixes in its relationship names for Next and Prev.)
MySQL reported the same duplicate relationship names, although it improperly reports it as a "Can't create table" error 1005 instead of letting you know it's relationships that have the naming problem.
Testing DB/2 LUW, I found that the row sizes are limited to 4005 bytes, and the Table definition is over 5000. So I've reduced the size of the Name and Description attributes of the ShortDescription and Description to 50 and 100 bytes. That should be enough to bring it down under the required limit. I've also reduced the size of CFBam's "NameType" from 255 bytes to 192. In CFAsterisk, the configuration file full names have been reduced to 512 bytes from 2000, in order to address an index creation error from DB/2. The SchemaDef "PublishURI" column in CFBam has also been reduced to 512 bytes. If those two values aren't short enough to satisfy DB/2 LUW, then DB/2 can kiss my rectal sphincter -- I need to index DATA, not integers.
Rather than letting the manufacturing run complete, I've stopped it and made the corrections for those errors, and am restarting the manufacturing runs. With any luck this will be the magic run that finally installs CFUniverse schemas to all of the databases. I'll know tomorrow -- I'm going to be manufacturing the smaller jobs during the day today, and leaving CFBam and CFUniverse to run overnight some time this evening.
I am *so* close to done now. PostgreSQL and Sybase database installs ran cleanly, but Oracle had issues with the constraint name lengths and MySQL had issues with some reserved words (Precision and Interval.) SQL Server Express won't let you connect while running under a Cygwin bash script anymore, so I switched that database over to using .bat scripts like Sybase ASE does. I saw no point with testing DB/2 LUW at this point as it's probably going to have issues with the same reserved words as MySQL (my guess is MySQL has them for compatibility. But that's just a guess.) MySQL also gripes about indexes having keys that are too long (over 700-odd bytes -- several of my keys use strings in the 1-2 KB range), but there is nothing I'm willing to do about that, so those tables will just have to use table scans instead of indexes for MySQL joins and queries.
Rework the SQL Server .bash scripts as .bat scripts because one of the Microsoft patches/updates since I last used SQL Server Express 2012 has broken the ability to run it under a Cygwin bash script. It just freezes up trying to connect to the database. The scripts have been tested.
Changed the names of the Precision and Interval attributes in CFFreeswitch, CFSecurity, and CFBam because those are reserved words for MySQL.
Broke up the construction of the SAX document element handler for XMsgRqstParser because it was blowing the 64KB method bytecode limit imposed by Java. At some point in the future it might be a good idea to do the same for the XmlParser and the XMsgRspnParser, but seeing as it's working ok for now I'm not going to futz with it. This code has been test-manufactured but not built; there is no reason for it not to build.
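The shape of the fix is simply splitting one oversized method into several small ones, since the JVM's 64KB limit applies per method, not per class. A hedged sketch with invented element names (the real parser wires SAX handlers, not Runnables):

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only: instead of registering every element handler inside
// one giant constructor body, the wiring is broken into smaller methods,
// each safely under the 64KB bytecode-per-method limit.
public class XMsgRqstHandlerMap {
    private final Map<String, Runnable> handlers = new HashMap<>();

    public XMsgRqstHandlerMap() {
        wireSecurityHandlers();
        wireInternetHandlers();
        // ...one wiring method per schema area keeps each method small.
    }

    private void wireSecurityHandlers() {
        handlers.put("Cluster", () -> { /* parse Cluster request */ });
        handlers.put("Tenant", () -> { /* parse Tenant request */ });
    }

    private void wireInternetHandlers() {
        handlers.put("Tld", () -> { /* parse Tld request */ });
        handlers.put("Domain", () -> { /* parse Domain request */ });
    }

    public boolean hasHandler(String elementName) {
        return handlers.containsKey(elementName);
    }
}
```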
I want to be done with this release already. I'm oh-so-eager to be done.
Rather than trying to actually run the Oracle scripts, what I ended up doing as a first pass at checking for long names is taking a directory listing of the Oracle scripts, stripping off the leading "cr" and trailing ".plsql", then searching for any strings where the length was over 30 characters. This won't detect any variable names that are too long in the PL/SQL, but it's a good first cut at meeting Oracle's requirements and should at least save me *one* run of manufacturing.
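That first-pass check is easy to express in code. A small sketch, assuming the "cr" prefix / ".plsql" suffix convention described above and Oracle's traditional 30-character identifier limit:

```java
// Strip the leading "cr" and trailing ".plsql" from a creation script's
// file name, then flag anything longer than Oracle's 30-character
// identifier limit. Mirrors the manual directory-listing check described
// in the notes; the class name is mine.
public class OracleNameCheck {
    public static String identifierOf(String scriptFileName) {
        String name = scriptFileName;
        if (name.startsWith("cr")) {
            name = name.substring(2);
        }
        if (name.endsWith(".plsql")) {
            name = name.substring(0, name.length() - ".plsql".length());
        }
        return name;
    }

    public static boolean isTooLong(String scriptFileName) {
        return identifierOf(scriptFileName).length() > 30;
    }
}
```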
But for now, back to the drawing board: running all the jobs again.
There were a few duplicate names in the database models for CFGui and CFBam which have been corrected. CFSecurity had renamed AuditAction to AudAct at the database level, which of course causes all the history table creations to fail -- I changed that back to what it used to be.
On the bright side, I had only started manufacturing CFBam and CFUniverse a short time ago. On the down side, I have to remanufacture everything again.
Oh well. It's only CPU...
WARNING: That means NONE of the CF* 2.0 installers will install to a database right now, including CFDbTest 2.0.
There are two syntax errors left in CFUniverse caused by a duplicate "Tenant" suffix in the Index, which inherits from Scope -- Scope has a Tenant reference already, so the Index one has been renamed as an IdxTenant suffix narrowing the Scope reference. This will also ensure that the attributes of both tables are kept in sync.
After I finish manufacturing CFBam and CFUniverse with the fixes, I'll refresh their builds and post them.
Then I'll be taking a day or two break to deal with my garden outside before I tackle running the Oracle database creation scripts for CFUniverse.
And the inevitable model edits and remanufacturing. I would be absolutely *shocked* if it installs cleanly to Oracle as-is.
Eventually I'll be ready to try another test build of CFUniverse, the near-14 million line monster. I've got an exact count of the Java source code coming up -- the 14 million includes the database creation scripts, and with six databases, they take up a lot of space.
TortoiseSVN has finally completed the checkin of CFUniverse. It only took 2-3 hours in total to delete the old source files from subversion, add the new ones, set their attributes, and check them in. Mind you it *was* a few thousand objects in total. :)
There are 7,932,771 lines of Java code in CFUniverse 2.0.
Once I hit production, I'm going to step back into R&D for a year or two, working on the GUI for CFBam. It's high time I put a user interface on this thing so it's easy to use. I've spent a lot of years thinking about user interfaces, and I have an idea I want to code. I'll do it with Java Swing for portability.
One of the structural ideas I have for the Swing code is to author Panels with the window contents rather than the window itself. That way you can embed the display in a browser Applet or in a desktop window easily.
You can only bring up one view/edit window per object. No duplicate edits allowed. There will, however, be multiple ViewPanels instantiated -- one or more in the view port on the right of the display, plus at most one in a window.
More importantly, Panels can be embedded as sub-elements of a tall wide virtual canvas, where they dangle off the label tags of graph nodes in response to hide/show user requests.
So on the left you'll have a common tree view of icon-tagged nodes with their names. Click on a node, and the viewport to the right will zip over to focus on that object, whether it is currently displayed as a tag or as an expanded view panel in the detail view port to the right.
This view is also an MDI top window, and can bring up windows within itself. The Window menu of the MDI frame displays a list of all the windows you've brought up. There will also be an Editing sub-menu, which displays a list of the windows you have open in Edit mode.
Double-clicking the tag which anchors the top-left of the optional view panel in the view port brings up a window in view mode for the object.
Lookups and subobject singletons are indicated by glasses icons next to their displayed name in the view panel. The attributes which participate in the id sets of such objects are hidden from view automatically, because you're expected to set them by picking references to objects in the user interface, not by entering id codes.
Lookups also have a binocular/search icon when the panel is in edit mode, used to bring up an instance picker panel in a modal dialog window. Lookups display their FullName.
FullNames are used throughout the GUI to reference singleton objects.
If a Lookup is named, and the name column is the only attribute in the naming index specified for the target/to object, then a combo box is used instead of the FullName text with an instance picker.
Parent references are treated as Lookups in the GUI.
Also in the view panel is an edit icon for the object, indicated by the usual paper and pencil icon. Clicking this brings up a window, rather than changing the in-place edit mode of the view panel unless it is embedded in a window. I don't want to deal with having to play with dynamic menus based on which panel has focus, and a lot of people hate relying on right-mouse-click feature menus as part of the day-to-day navigation of an object.
I won't be checking permissions, just letting you fail when you try to save. Not the best user feedback, but workable from a data-protection standpoint. The view panel also displays a delete-this button with a big red X on it that brings up a confirmation dialog before executing the delete.
At the bottom of the view panel is a set of tabs, each tab containing a list view over sub-objects whose referenced indexes are not unique. Only the attributes and referenced attribute names of the referenced object are displayed in the list, the same as for a view port except mapped to list cells instead of name-value widget pairs. At the left of the list are the names (if available) and action icons for the rows: edit (pencil and paper) and delete (red x). Note that in this case, only the name is referenced by the cells, not the full name. However the lookup cells also display glasses icons for bringing up a view window over the referenced object.
Each row of the list box displays a row header including View and Edit icons that bring up the corresponding window over the object, or focus on the current view/edit window for the instance in the MDI Main frame.
The lists in the tabs also display a sheet-of-paper AddNew action button in the list row header. If the referenced object has sub-classes, clicking the AddNew button brings up a list of the instantiable sub-classes of the referenced object, each menu item in the list triggering a new instance to be created and wired in edit mode within a new window.
The same ViewPanel instance can dynamically serve to display and modify new and existing instances, as well as being in view-only mode. It's going to be fun to code such dynamic widget construction. But I've done it many times before, both with Swing and other GUI toolkits. Nothing new here. What will happen in practice is the panel will have all possible edit widgets instantiated and wired, but the edit widgets will be made invisible if the window is put into view mode. Simple stuff.
At some point clicking on a row header will sort by that header. Double clicking a row cell brings up or focuses on the view panel for that object.
The elements of the view panel can be resized and reshaped using dividers.
I'll deal with customization of the view panels in 2.0, when I gain the ability to model a CFGui as well as a CFBam. To do custom windows, you need some way of storing/sharing and editing layouts and the code associated with them. For now what I envision is adding method signatures, parameter invocation specification, and using that to specify the code for taking over the Client layer code for custom event messages, and wiring them to GUI elements. I'm ambivalent about providing an actual scripting language; I think I can get the behaviours I want by simply being able to wire the custom events to buttons with a specification/linking of the current object's attributes. I still need to think more about those details, and I'm sure a lot of the pieces will fall into place if I try customizing some of the GUI code produced by the earlier steps above.
Instance picker windows are simpler to code than full-scale Find windows. Before I can tackle Find windows, I'd need some way of specifying match-set parameters dynamically, and creating the SQL code on the fly in the server layer. I know how to do this in theory, but I'm iffy about implementing it until much farther down the road.
So what an instance picker will do is display the entire set of lookup objects for the search scope of the reference (identified by specifying a new SearchScope attribute on the Reference specification, which is the name of an object/class that is expected to be in the scope hierarchy of this object).
To implement such scope searches, one simply navigates the scope hierarchy to the desired object by invoking the ICFLibAnyObj interface.
I'm going to modify the ICFLibAnyObj interface and its implementations to specify a Name attribute of the object. Table definitions will have an optional NameAttribute attribute added to them, which is a late-resolved reference to a column of this table. So when you're determining the name of this object in the enhanced ICFLibAnyObj contract, you search this table and its inherited lists for one that specifies NameAttribute, and use that attribute as the name of this object. So you could, in theory, change the Name attribute in a subclass. You can also end up with nameless objects near the top of the object hierarchy. Those are going to have to be made non-instantiable objects so that they can't appear in the GUI.
In the navigation tab, you have objects with their names taking up a row and indicating their containership/parentage with the usual UI tree of hide/show elements. Under an object are the automatic elements which comprise its lists of sub-elements. Clicking hide and show expands the node to show those sub-element names. Thus each row of the tree view references an object instance -- the automatic elements reference the instance which provides the container for the list.
I also need to add a getNamedSubElement() method that takes a dotted name, and searches the named sublists of each object in order. This, of course, presumes that a name will never be duplicated amongst the named objects under this object container.
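A minimal sketch of how such a dotted-name traversal might work, assuming names are unique within each container. The NamedNode class and its child-map layout are illustrative stand-ins, not the actual ICFLibAnyObj contract:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative containment node; getNamedSubElement() splits the dotted
// name and descends one segment at a time, assuming names are unique
// within each container (as the text above presumes).
class NamedNode {
    final String name;
    final Map<String, NamedNode> children = new HashMap<>();

    NamedNode(String name) { this.name = name; }

    NamedNode addChild(String childName) {
        NamedNode child = new NamedNode(childName);
        children.put(childName, child);
        return child;
    }

    NamedNode getNamedSubElement(String dottedName) {
        NamedNode cur = this;
        for (String segment : dottedName.split("\\.")) {
            cur = cur.children.get(segment);
            if (cur == null) {
                return null; // no such named sub-element
            }
        }
        return cur;
    }
}
```

If a name were duplicated across the sub-lists of one container, this flat lookup could not disambiguate, which is exactly why the per-reference accessors are wanted as well.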
I also want a getNamed[ReferenceSuffix]() method that lets you deal with duplicate names amongst the sub-element sets, by expecting names to only be unique to each subelement list.
A dynamic read-only FullName accessor method will be added to objects. I'm going to add the general modelled object attribute NamingScope to the 1.11 BAM. Thus a named object which specifies a NamingScope will dynamically construct a dot-separated full name by chasing its container hierarchy. An object which doesn't specify a NamingScope will instead just return its Name.
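A sketch of how such a dynamic FullName accessor might behave. The className and namingScope fields here stand in for the modelled object class and its NamingScope attribute; none of this is the actual CFLib API:

```java
// Illustrative scoped object: with a NamingScope, chase containers and
// build Dot.Separated.Names until the scope object is reached; without
// one, just return the object's own Name.
class ScopedObj {
    final String name;
    final String className;
    final String namingScope;  // class name that bounds the full name, or null
    final ScopedObj container; // parent in the containership hierarchy

    ScopedObj(String name, String className, String namingScope,
              ScopedObj container) {
        this.name = name;
        this.className = className;
        this.namingScope = namingScope;
        this.container = container;
    }

    String getObjFullName() {
        if (namingScope == null || container == null
                || container.className.equals(namingScope)) {
            return name;
        }
        return container.getObjFullName() + "." + name;
    }
}
```

Under this sketch a Column scoped to Schema yields a Table.Column full name, matching the IndexCol references discussed for the SAX parser.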
The BAM will specify Schema as the naming scope for all schema elements. This will allow me to *finally* get around to implementing late name-based resolution of sub-element references in the SAX Parser by passing around the FullName specifications in those attributes, and adding artificial name attributes to the objects for those relations. So you will have to specify the full Table.Column name references in the IndexCol specifications rather than just Column as in the 1.11 model, but it will *work*.
I've been trying to decide how to deal with that so I can fully automate the production of the CFBam parser for 2.0. The model is just getting too big for me to keep coding that parser by hand as I have for 1.0-1.11.
I think I'll release the rule base for that GUI code as an SP1 release of 1.11, rather than deferring it all until I can get the engine running in the 2.0 code base. I really don't feel like dealing with that again right now -- I've migrated the engine through several releases of MSS Code Factory over the years, and it's a tedious cake-walk of mindless effort after 11 iterations of doing so (1.0 through 1.11.) I'd rather work on a GUI for CFBam for now.
I don't expect to be posting builds of the new code for some time. After four years, it's time to take a bit of a break, I think. Do something else with my time -- like tend to the garden outside for the summer. A wee vacation from coding. I'm sure I'll still code for fun, but there won't be any self-imposed pressure to get it done once production is out the door. I'm not going to hold up production for this mammoth project. But as it only requires minor enhancements to the 1.11 model and parser, I won't be deferring it to 2.0's new model structure, either.
There was a major bug importing the relationships of a sub-model/project. The IsRequired attribute was not being copied over, so although a model would produce valid, buildable code for the sub-project itself, it would produce erroneous code when imported. This was highlighted by trying to build CFUniverse, as all of the Optional relationships referenced by the custom code for CFBam would not compile properly -- with the bug in place, all the relationships it was looking for were flagged as Required, so the method signatures didn't match.
The only safe response is to remanufacture *everything* from scratch again, starting with CFUniverse.
The new tags have been verified by adding their specifications to the CFSecurity 2.0 model, and manufacturing it with the enhanced rules and engine that support the new tags. I'm now ready to do the manufacturing run for CFUniverse so I can do the test install of the Oracle schema.
The complete list of new tags is JavaXMsgSchemaImport, JavaXMsgSchemaFormatters, JavaXMsgClientSchemaBody, JavaXMsgClientSchemaImport, JavaXMsgRqstSchemaBody, JavaXMsgRqstSchemaImport, JavaXMsgRqstSchemaWireParsers, JavaXMsgRqstSchemaXsdElementList, JavaXMsgRqstSchemaXsdSpec, JavaXMsgRspnSchemaBody, JavaXMsgRspnSchemaImport, JavaXMsgRspnSchemaWireParsers, JavaXMsgRspnSchemaXsdElementList, JavaXMsgRspnSchemaXsdSpec, JavaXMsgTableImport, JavaXMsgClientTableBody, JavaXMsgClientTableImport, and the unused JavaXMsgRqstTableBody, JavaXMsgRqstTableImport, JavaXMsgRspnTableBody, and JavaXMsgRspnTableImport (there are no generic table objects for the requests and responses -- they're custom element parsers for each element mapping to a table.)
The JavaXMsg tags as described by the 1.11 programmer's notes have been coded and wired. They are not being used in the rule base yet; that shall be the next step.
The new tags will be JavaXMsgSchemaImport to specify custom imports for the [SchemaName]XMsgSchemaMessage.java formatter, and JavaXMsgSchemaFormatters for the custom formatter implementations. Table definitions correspondingly will have the new tags JavaXMsgTableImport and JavaXMsgTableFormatters. That will take care of customizing the code for the [SchemaName]XMsg package.
For the [SchemaName]XMsg[Rqst/Rspn/Client] packages, there will be the new tags JavaXMsg[Rqst/Rspn/Client]SchemaImport tags. Similarly there will be TableImport tags added to the table definitions.
The new tags JavaXMsg[Rqst/Rspn/Client]SchemaBody and TableBody provide the means of embedding the custom methods in the implementation body. For Client in particular this is used to replace the base implementation of a custom database method's logic with a serialize/deserialize implementation pair. It is recommended that most custom XMsg methods should follow the XML pattern of inheriting from the PKey XML for the object, adding on attributes for the pass-as-constant arguments to the method, and responding with a refresh read of the object that was potentially modified by the invocation of the server method. For example, you'd want to implement a remote invoker for a "transferFundsTo( tenantId, depositAcctId )" method that has to update more than one record within a transaction to complete reliably. Anything that requires a transaction to maintain consistency should be implemented as a remote message invocation.
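For the "transferFundsTo" example above, a request/response pair following that pattern might look roughly like this. Every element and attribute name here is invented for illustration; the actual manufactured message vocabulary will differ:

```xml
<!-- Hypothetical request: the object's PKey attributes plus the
     pass-as-constant arguments to the server-side method. -->
<RqstAcctTransferFundsTo TenantId="7" Id="1001" Revision="3"
    ToTenantId="7" ToDepositAcctId="1002" Amount="250.00" />

<!-- Hypothetical response: a refresh read of the object that was
     potentially modified by the invocation. -->
<RspnAcct TenantId="7" Id="1001" Revision="4" Balance="749.50" />
```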
The new tags JavaXMsg[Rqst/Rspn]SchemaWireParsers and TableWireParsers provide the custom code within the same code block that wires the top level document message parser's element list for the manufactured code. There is no corresponding routine to be initialized for an XMsgClient layer.
A pair of JavaXMsg[Rqst/Rspn]SchemaXsdSpec tags will provide the means of defining custom records/elements/types that will be included just before the top level document object's elements are bound to the top level object. Finally, a pair of JavaXMsg[Rqst/Rspn]SchemaXsdElementList tags provide the hooks into the element lists within the top level document's specification so you can list the relevant objects and elements used to implement your custom messages.
You really don't want the default server-transaction environment implementations of many custom methods to be executing on the client -- you want to customize a request/response message pair using the new custom tags, which adds the parsers to the relevant tables as internal classes, and connects them to the element parser lists for the document definition object.
There are, of course, methods which *should* run on the client, in which case you just do the regular customized code for the Interface, Obj and EditObj java as demonstrated by CFBam and CFSecurity. I just need to add remote execution capabilities, even if it does require custom code for now. (I'd like to model method arguments and automagically generate the client-server message pair and parsers, but that won't happen until some time in the 2.x series.)
CFBam and CFUniverse were the last projects that still had compile errors. I believe this release corrects that problem (final testing is still in progress, but I've tested the *bits*, so it should go ok.)
PostgreSQL index scripts have been re-enabled. I don't know why they've been disabled. I don't remember doing it, but I must have.
I've pre-emptively shortened several table names in the models so you'll pretty much have to wipe and replace your database scripts, or do the handy-dandy sort-by-date and delete all the untouched old script files for each database installation script directory. The script names change with the table names.
I'll let the manufacturing of CFUniverse by 1.11.540 finish so I can do a test compile with it before I start remanufacturing all the projects with 1.11.547 RC5.
In theory, this release should produce valid code given a valid model for any Business Application Model you care to throw at it.
Once I've verified that the CFUniverse database creations scripts produced by this release can be installed to all of the database servers that are supported, I'll do the production release of 1.11. I honestly can't see that taking beyond the end of this week.
It took over three years of effort to bring the 1.11 code base to a release-ready state, but it's there and ready for you to download and use. Please report any bugs you find with the code so that they can be fixed before the production release.
At this point in time, all of the project Java code produced by MSS Code Factory should compile if the model is valid. If it won't compile, look into the details of the model rather than suspecting bugs in what the factory is doing. The same applies to the database creation scripts.
After I have a build of CFBam and CFUniverse (the last two projects which still have compile errors), I'll switch over to debugging the Oracle database creation scripts for CFUniverse. Oracle is the fussiest about name lengths, so it should expose any problems in the various models imported by CFUniverse (all the projects except CFCore and CFDbTest.)
Once I can install CFUniverse into Oracle, I'll test the other database creation scripts having met the naming restrictions of the overall system: DB/2 LUW, MS SQL Server, MySQL, PostgreSQL, and Sybase ASE.
With the code building and all of the databases installed, it will be time for one more task:
The production release of 1.11.
Nearly 4 years of development has gone into 1.11. It's been a long haul to get all the features implemented that I wanted to. But I'm almost done painting this "fractal" of code: one big huge function of string concatenation and logic that predictably produces a pattern of text files based on an input Business Application Model.
During that 4 years, as a side effect of working on the overall project, I'll have also created nearly 14,000,000 lines of code for CFUniverse: an average of over 9,000 lines of code every single day for four years without a day off, a weekend, or a break. Needless to say I didn't actually work that hard. :P
But it gives you an idea of the dollar value of what this system can do in about a day and a half on a Core i7 laptop. On a big SMP box where you could split the manufacturing jobs, it could produce the code much, much faster than I can on my home systems. Translation from model to source is more a factor of CPU power than anything else.
For whatever reason, the manufacturing of the indexes for PostgreSQL had been commented out. This has been corrected. PostgreSQL performance would suck hind teat without indexes.
The model for CFBam provided with this release should enable both CFBam and CFUniverse to produce compiled code. I've tested the pieces of the broken build to verify that those errors are resolved, but there is always the possibility that I still have a hardcoded schema name left in the custom code specifications of CFBam (I'm pretty sure I don't, though.)
If I can build the CFBam and CFUniverse code that is manufacturing right now, then I'll proceed to the last phase of my pre-production testing: installing the CFUniverse schema to each of the databases. I'll start with Oracle -- it's the fussiest about name lengths, and the most likely to have issues.
The CFFreeSwitch 2.0 model needed a fair bit of work before it would produce a valid build. But it compiles now.
All of the 2.0 models save for CFBam and CFUniverse now have compiled builds.
The SQL Server code was incorrectly trying to close non-existent audit statements if a table containing a BLOB or a TEXT field was not audited. This was not uncovered previously because all tables in CFDbTest 2.0 that have BLOB or TEXT attributes are also audited.
The CFAsterisk and CFBam models have been corrected to remove the use of duplicate index suffixes in the table inheritance hierarchies.
However, it will be quite some time before CFBam will compile, as that code has a lot of embedded logic that has not been refreshed in well over a year, so it references objects which no longer exist in the updated CFBam 2.0 model. As a result, CFUniverse will not compile until CFBam does as well. However, I'll move forward with building and correcting the other smaller projects before I tackle the CFBam issues.
C'mon, Dudes and Dudettes. It's 4:20. Time to chillax and spark a bowl. :P
The verb ColumnInContainerOrNamedLookupRelation was not properly considering Lookup relationships in its candidate set of inherited relationships.
This may bring out another bug elsewhere, in which case I'll have to spin off a new verb, ColumnInContainerRelationship and use that in the areas that exhibit the new bug. i.e. I may have removed the Lookups from the candidate set intentionally at some point, not considering the side effects that would cause.
I'm switching the license on CFGCash to be compatible with the other MSS Code Factory project specifications by providing a Dual GPLv3/Commercial license. The only thing I need from the GNU Cash code are the calculations for applying interest and currency exchange charges in accordance with North American banking and accounting standards.
I know calculations are to be performed with 5-fractional-digit accuracy, and then rounded to two decimal places (cents) after each step of the calculation, so that there is a consistent application of rounding rules when doing interest calculations. This is code that lends itself very well to COBOL, but it's a bit of a pain in languages like Java, C, or C++, because you have to manually specify the roundings using arbitrary-precision math libraries, such as Java's "BigDecimal" class.
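A minimal sketch of that rounding discipline in Java. The rate and principal values are just illustrative, and whether HALF_UP or HALF_EVEN applies depends on the applicable accounting rules:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// One calculation step: carry the intermediate product to 5 fractional
// digits, then round the step's result to 2 decimal cents before it
// feeds into any subsequent step.
class InterestCalc {
    static BigDecimal step(BigDecimal principal, BigDecimal rate) {
        BigDecimal interest = principal.multiply(rate)
            .setScale(5, RoundingMode.HALF_UP);        // working precision
        return interest.setScale(2, RoundingMode.HALF_UP); // round to cents
    }
}
```

The point of rounding after every step rather than once at the end is that two implementations following the same rule produce identical cent values, which is what the banking standards are after.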
Let's clarify: CFGCash will be the Java Swing GUI implementation for the CFAcc core accounting model, which in turn is derived from CFCrm, CFInternet, and CFSecurity. I intend to add hand-written interface panels to each of the layers, starting from the top down as much as possible, panels that can be used as the contents of applets or applications by simply instantiating one as the full-sized content of a JFrame or JWindow.
As each layer will want to make use of the interface panels from the inherited layer, I'll need to start building all five projects, passing the compiled jar libraries along from each layer to the next in their bin directories and on their build paths.
Each object class will register generic panel constructors for each of the data objects it defines, keyed by an interface tag/suffix and naming pattern to be followed by each of the layers. Then the top level interface dialog script engine can simply do object-verb menu presentations where verb is the tag/suffix for an interface over the object. For example, UIView, UIEdit, UIConfirmDelete, UIMessageDeleted, UIConfirmCancel, UIMessageCancelAborted, UISaveAndEdit, UISaveAndView, UIMessageSaved, and UISaveAndClose are verbs used to manipulate a window-based interface of UI objects used to present the various object instances specified.
That enables the implementation of a hashmap-lookup based set of interface factory instances for the various objects, with one hash map per verb. So I'll have a hashmap of hashmaps of interface factories in the Java code, keyed by a string at the top level and by an interface class specification at the second layer. That's the best I can do for a generic user interface event firing mechanism for now.
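A sketch of that hashmap-of-hashmaps registry, using a plain Supplier as a stand-in for the panel factory interface. The verb strings follow the text; everything else is illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Verb-keyed registry: one map per verb string, each mapping an object
// interface class to a factory that constructs the corresponding panel.
class UIFactoryRegistry {
    private final Map<String, Map<Class<?>, Supplier<Object>>> factories =
        new HashMap<>();

    void register(String verb, Class<?> objClass, Supplier<Object> factory) {
        factories.computeIfAbsent(verb, v -> new HashMap<>())
                 .put(objClass, factory);
    }

    // Returns a freshly constructed panel-like object, or null if no
    // factory has been registered for this verb/object pairing.
    Object create(String verb, Class<?> objClass) {
        Map<Class<?>, Supplier<Object>> byClass = factories.get(verb);
        return (byClass == null || byClass.get(objClass) == null)
            ? null
            : byClass.get(objClass).get();
    }
}
```

Each object class would register its own factories at load time, and the window manager would resolve object-verb menu selections through `create()` without knowing any concrete panel types.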
Then I can do a generic window manager interface on top of those objects as a generic object browser based on the XSD container relationships, displaying each object as a generic named object, or as a customized panel used for viewing and/or editing the object (basic hide-show behaviour.) I know how I want this top layer to behave, but the components could be used to follow different interface navigation paradigms as well.
This is release candidate 3.
The X(ml)Msg layers have been tested using the PostgreSQL persistence implementation for CFDbTest 2.0. All tests pass when run against a clean database; test 0004 fails on a "dirty" database (as expected.)
Saskatchewan 306 area code in the house...
The mains for the various loaders have been reverted to using localhost for initializing the connection configuration files when they don't already exist. CFDbTest 2.0 has been retested with JDK 7u55, and the "localhost" lookup errors have been corrected by the update. It's even possible that a recompile wasn't required for the correction to take effect.
Open JDK 7u55 has been released for Debian, so MSS Code Factory has been rebuilt and repackaged with it.
I found the cause of the NumberFormat exceptions. For whatever reason, you can no longer rely on DNS lookups for the host name parameter of a database connection configuration file. If you use 127.0.0.1 instead of localhost, for example, the number format exceptions do not occur.
Confirmed: The PostgreSQL NumberFormat exceptions are gone. I no longer have to settle for MySQL debugging.
I also ran the regular PostgreSQL loader, and it had no problems with tests 0002 and 0004, so I didn't do something to screw up the named lookup resolution; I have a bug somewhere in the data readers for XMsg.
Very odd errors this morning. Code that has clean compiled for ages would not compile without correction.
Seeing as PostgreSQL 9.1 is misbehaving under Debian, I'm allowing for the use of MySQL 5.5 as an alternative test bed for the XMsg testing.
The distribution has also been rebuilt and packaged with CFLib/CFCore 1.11.243.
Modified the CFInternet model to consistently reference the TLDId in hopes of resolving the errors in tests 0034 and 0035.
Running the CFDbTestRunXMsgMySqlTests test suite, there are still 4 issues that are not caused by the limited date range support provided by MySQL.
Test 0002 LoadISOCurrency attempts a duplicate insert of the ISOCurrency Id 2, the same as had been occurring for the PostgreSQL version of this loader.
Test 0004 TestNamedLookup does not get to attempt a duplicate insert of the ISOCountryCurrency as the PostgreSQL code used to do, because it hits the same error as Test 0002 before the lookup searches can be invoked. Those join-by-name lookups don't seem to be happening properly for the XMsg interfacing when operating via the command line.
Basically it sounds like there is an issue with the indexed name resolvers in a client implementation. I'll rerun the raw MySQL SAX loader tests as well, to see if the problem exists in that code too. It shouldn't. It used to work.
Test 0034 has developed a new exception. It is now throwing a ClassCastException trying to coerce a CFDbTestTopDomainObj into an ICFDbTestProjectBaseObj.
Test 0035 is still producing the "Unrecognized attribute 'TLDId'" exception from the CFDbTestXMsgRqstTopDomainUpdateHandler.
When encoding XML strings, use ' not &squot;. &squot; is not one of XML's predefined entities; the apostrophe entity, where one is needed, is &apos;.
This is disturbing. I can sometimes replicate the NumberFormatExceptions, but they seem to be getting thrown by the INet Cache layer of the new 9.1 PostgreSQL driver. It also happens with the old driver and the current release of PostgreSQL 9.1 as just updated today by Debian. PostgreSQL is now broken under Debian. That's not good.
25 of 36 tests had passed with the previous release, so 5 tests have been corrected.
Test 0004 TestNamedLookup attempts a duplicate insert of the ISOCountryCurrency, so the lookup searches don't seem to be happening properly for join-by-name for the XMsg interfacing when operating via the command line. I'll have to investigate further (such as testing under Eclipse.)
Test 0005 InsertOptFullRangeNullValues is throwing a NumberFormatException at the command line, but runs fine under Eclipse with a current Debian box.
Go figure. Anyhow, there is nothing I can do to fix it, so I'll have to let that one go for RC3.
Test 0012 InsertOptMinValueNullValues also throws a NumberFormatException at the command line. I have not tested it under Eclipse yet.
Test 0014 InsertOptMaxValueNullValues also throws a NumberFormatException at the command line. I have not tested it under Eclipse yet.
Tests 0034 CreateComplexObjects, and 0035 ReplaceComplexObjects are reporting an unrecognized "TLDId" attribute. I'll have to investigate this. Specifically, I'll start by checking the model to see where there are any references to a TLDId, and go from there.
MSS Code Factory 1.11.210 has been repackaged with CFLib/CFCore 1.11.207, which corrects the formatting of output TZ values by adding the missing ":" between the hour and minute components of the timezone offset.
Parsing the TZ values formatted by CFLibXmlUtil was causing exceptions due to the missing ":" between the hour and minute components of the timezone offset. The error in the formatters has been corrected.
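The shape of the corrected formatting can be sketched as follows. The method and class names are illustrative, not the actual CFLibXmlUtil API:

```java
// Render a timezone offset in minutes as +HH:MM / -HH:MM, including the
// ":" between hours and minutes that the broken formatter omitted.
class TzFormat {
    static String formatOffset(int offsetMinutes) {
        char sign = (offsetMinutes < 0) ? '-' : '+';
        int abs = Math.abs(offsetMinutes);
        return String.format("%c%02d:%02d", sign, abs / 60, abs % 60);
    }
}
```

Without the colon, "+0530" is not a valid XML Schema timezone suffix, which is why the parsers threw on the round trip.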
The subversion repository at SourceForge was corrupted at some point in the past, so rather than continue on with a repository you couldn't do a fresh checkout from, I opted to nuke the existing repository and reload the source code from my hard drive. That way I know for sure the code is safely squirreled away, though it meant losing the subversion comment history that had so much information gathered in it over all these years of work.
Here's to Douglas Adams, one of the funniest men to ever walk the planet.
The source code for CFLib and CFCore have been restored to the subversion repository, which resets the revisions. There has been no change to the code since the last release, but this version is resynced with the current subversion repository.
The first four CFDbTest 2.0 X(ml)Msg tests now run successfully. There are a lot of problems with the remaining tests, some trivial, some not so much. But I got a lot accomplished for today so I'm calling it a night, and will pick up the debugging at some point in the future. Maybe tomorrow, maybe later. It depends on my mood, and the weather is not cooperating (migraines.)
There is enough of the code debugged and implemented with this release for the XMsg PostgreSQL Loader to issue a request, receive an (invalid) response, and to attempt processing the response.
This code still doesn't work, but it's getting a lot closer than it was a few hours earlier today. I think I've put about six hours into debugging tonight.
Note that this release also links to the latest CFLib/CFCore 1.11.10788, which corrects CFLibXmlUtil's formatting of attribute names and values when forming XML messages.
The precedence rules for Java meant that instead of having the separator or an empty string prepended to the attribute name and assigned value, only the separator was being returned for each attribute that was to be emitted. For the lack of parens the code had gone boom when parsing a response as formatted by the request processing of the XMsg layers.
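The pitfall is worth spelling out: in Java, `+` binds more tightly than `?:`, so the concatenated attribute text gets absorbed into the else-branch and a true condition yields only the separator. A distilled reconstruction of the bug and the fix (names illustrative):

```java
// Demonstration of the operator-precedence bug: the expression parses as
//   needSep ? " " : ("" + name + "=\"" + value + "\"")
// so when needSep is true, only the separator comes back.
class AttrFormat {
    static String buggy(boolean needSep, String name, String value) {
        return needSep ? " " : "" + name + "=\"" + value + "\"";
    }

    static String fixed(boolean needSep, String name, String value) {
        // Parens force the separator choice before the concatenation.
        return (needSep ? " " : "") + name + "=\"" + value + "\"";
    }
}
```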
I've come to the realization that my tests for the XMsg PostgreSQL Loader will not be able to exercise the full range of functionality available to the server layer code. Specifically, I have not yet implemented any sort of login/logout functionality for initializing the server side of the Authentication data. So the test client will only function with its default system/system/system authentication, preventing the security tests from being exercised properly.
That's ok for now. I was planning to get around to the login code soon, and this just bumps up the priority. I may as well add it to the RC3 feature set. That way RC3 should have all the foundation pieces in place that one would need to build a custom Android or Java client running via the XMsg protocol over https to a JEE server. You'd just need to work out how you're going to serialize and deserialize the messages, and I'm sure any half way decent web programming house has a dozen ideas of how they could do that, so there is no point in me dealing with it right now myself. Someday, yes, but not now.
The PostgreSQL-persisted implementation of a direct-invocation of the request handler by an XMsg Client implementation has been coded.
A persistent PostgreSQL store is made the backing store of a caching layer. That caching layer is stitched to an XMsgRequestHandler as its back end storage provider.
This back end handler is wired to a direct request invoker, which implements an XMsgClient by directly invoking the XMsgRequestHandler from the sendReceive() method that the client uses to send requests, receive responses, and then process those responses. That invoker is then bound to a client-side caching layer, such that any DbIO requests go through the XML serialization/deserialization process with automatically managed per-request commits and rollbacks.
The code has been rebuilt and repackaged with the recreated source tree. The Linux dev box and the SourceForge servers are now in sync for MSS Code Factory 1.11.
Seeing as the net-sourceforge-MSSCodeFactory-1-11 source tree was corrupted in Subversion, I renamed it to -old, then created an empty directory, checked it out, and have just finished adding all the source tree and setting the file properties for the source code files.
This should resolve the problem with Subversion, although it means all of the code history is now only available in the corrupt -old tree.
The first cut of the integration test layer java+xmsg+pgsql has been wired to the rule sets and scripts and will now get manufactured by the system. That's not to say that it's valid code yet... just that it's been hooked up for manufacturing.
I think that's a complete implementation of the XMsgClient relying on the Parser overriding the implementation of sendReceive() to provide actual communications.
I need to wire up a test implementation that persists data using PostgreSQL, via an XMsgRqst handler whose parseStringContents() are invoked directly by the Client. The sendReceive() will simply pass the sent string to the parseStringContents() of the request handler bound to the specialization, which will in turn leverage the PostgreSQL persistence implementation to respond to the requests and formulate appropriate response messages for processing by the XMsgRspn parser instantiated by the XMsgClient implementation.
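Stripped of the persistence details, the direct-invocation wiring reduces to something like this. The interface and method names loosely mirror the description above; the actual generated classes differ:

```java
// The "wire" is a direct in-process call: sendReceive() hands the request
// string straight to the request handler's string parser and returns
// whatever response text the handler formatted.
interface RequestHandler {
    String parseStringContents(String requestXml);
}

class DirectInvokerClient {
    private final RequestHandler handler;

    DirectInvokerClient(RequestHandler handler) {
        this.handler = handler;
    }

    String sendReceive(String requestXml) {
        return handler.parseStringContents(requestXml);
    }
}
```

The appeal of this arrangement is that the full serialize/parse/respond cycle gets exercised without any sockets or servlet container, so the message layers can be debugged in isolation.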
Yes, that was a mouthful.
The request and response handlers are getting closer and closer to done. But I still need to tease out the last object to be manipulated by the response parser, and use it to clobber the parameter buffer where such was passed into a database method.
The code produced for CFDbTest 2.0 clean compiles with this release. There are still some enhancements needed to apply the response buffers to the pass-back arguments and instances in the object hierarchy. (The XMsgClient layer, specifically.)
The state of the rules is unknown at this point in time; they probably don't manufacture clean code right now.
The CFLib exceptions can now be factoried with a single string argument. This signature is used to map a SchemaXMsgRspnException response message to an appropriate exception implementation on the receiving client's end.
The unwrapping of the results of the parse into the return values to be passed as responses of the various database I/O methods has yet to be implemented. But the responses are properly parsed into their conglomerate attributes by the structure of the parse.
A number of enhancements have been made to the functionality of the various XML and SAX parser layers in order to support the newly added parseStringContent() method of a basic SAX parser implementation as expressed by CFLib/CFCore 1.11.10626.
Wired in constructors that take the SchemaObj to be used as the persistent storage implementor by the request and response parser processing.
The XMsgClientSchema constructors have also been enhanced with SchemaObj argument variations.
This form of XML Parser construction will become the preferred version, as opposed to explicitly invoking setResponseHandlerSchemaObj() after constructing an instance. I think it makes for a clearer understanding of what arguments are needed to establish a usable parser.
Added the sendReceive() method to the XMsgClientSchema, and made use of it and the XMsgTableFormatter implementations to prepare the request message and issue it to the server.
Now I need to take a step back and figure out how to pass a string into the SAX XML parser -- I've always been parsing files until now. I'm sure there must be a method for doing it short of creating a virtual File implementation.
The CFLibXmlCoreSaxParser.parseStringContent() method has been added, and the copyright notices of all the CFLib and CFCore source code has been refreshed and made consistent with standard conventions.
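The standard SAX answer to parsing a string instead of a file is to wrap it in an InputSource over a StringReader -- no virtual File needed. The real parseStringContent() may differ in detail, but a self-contained sketch:

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Minimal demonstration of parsing XML from an in-memory string.
class StringSaxDemo {
    static int countElements(String xml) throws Exception {
        final int[] count = {0};
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        // InputSource accepts any Reader, so a StringReader suffices.
        parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler() {
            @Override public void startElement(String uri, String local,
                    String qName, org.xml.sax.Attributes atts) {
                count[0]++;
            }
        });
        return count[0];
    }
}
```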
The commented source now clean compiles.
The core Schema code base implementation interfaces and the Programmer's SchemaObj interfaces have been documented. Most of the remaining undocumented interfaces are intended for internal use only, save for the ISchemaSchema accessor methods. But those are so obvious I decided not to document them.
The manufactured code for the XMsgClient layer now uses the CFLibNotSupportedException to identify methods which are not supported for direct client-side invocations.
CFLibNotSupportedException is thrown by client-side implementations if the operation is available only to server layers, not to client layers. For example, id generators cannot be invoked by the client layer, because clients are supposed to instantiate new object instances, edit them, and invoke the object's "create" method to generate the id in the server layer and pass it back to the client.
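The pattern can be sketched with stand-in names (the real exception is CFLibNotSupportedException in CFLib; this placeholder just illustrates the shape):

```java
// Stand-in for CFLibNotSupportedException; the real class lives in CFLib.
class NotSupportedException extends RuntimeException {
    NotSupportedException(String msg) { super(msg); }
}

// Hypothetical client-side table object rejecting a server-only operation.
class ClientTableObj {
    // Id generation lives in the server layer; the client must create an
    // instance and invoke create() so the server allocates the id.
    long nextId() {
        throw new NotSupportedException("nextId() is server-only; use create()");
    }
}
```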
The XMsg Client will prepare XMsg request messages and issue them through an easily-overloaded call to the client schema. It will then use the client schema's XMsgRspnHandler parser to analyze the response string received from the invocation, and apply the resulting data to the client cache.
All of the database layers had a redaction notice, I realized. It's been removed. Until redaction is implemented, just don't use MSS Code Factory for implementing access to protected/private data without a custom client access layer that implements redaction and obfuscation.
The response message formatters have been fleshed out and wired to the request parser. You should now be able to parse a valid request, have it processed, and see the request parser prepare a response XML buffer, which is stuffed into the getResponse() accessor of the request parser.
Next I need to code the XMsgClient, which will implement the database I/Os as XMsg request/response processing from the client side. At this point, the server side code just needs to be wired in to a valid communications protocol.
I'm ambivalent about making the XMsgClient specific to a protocol; I'm leaning towards it being generic, with another subclass layer added to specifically implement the web client. That way you should be able to easily reuse the XMsgClient code with something like an MQSeries server as well, by simply redefining the send/receive interfaces instead of having to muck with redefining the protocol and its parsers.
The parsers and formatters produced for the three XMsg layers now properly implement the formatting and parsing of the Revision and Audit attributes.
This version produces a clean compiling implementation of the CFDbTest 2.0 code. It is ready for use in remanufacturing the entire project set to complete the refactoring from IMssCFAnyObj to ICFLibAnyObj.
The CFCore 2.0 model can not extend CFCore because it *is* CFCore. :)
This version of the latest CFLib and CFCore support actually runs.
MSS Code Factory 1.11 has been rebuilt using CFLib and CFCore 1.11.10420, with a corresponding refactoring of all references to IMssCFAnyObj into references to ICFLibAnyObj. This also necessitated updating the import statements for most of the hand-written code.
CFLib and CFCore have been updated to refactor all IMssCFAnyObj references as ICFLibAnyObj references.
The message formatting static methods for requests and responses have been added to the XMsg package.
The CFAsterisk, CFCrm, CFEnSyntax, CFFreeSwitch, CFGui, CFGCash, CFInternet, CFSecurity, and CFUniverse projects are no longer defined to depend on CFCore. That means you are no longer required to link those projects to any GPLv3 libraries to use their code. However, you do need a commercial license if your project is not GPLv3 compatible. But you are not required to be GPLv3 compatible with a commercial license, hence the removal of the jar/library dependency.
CFLib added support for the missing Uuid XML formatters.
The factory has been rebuilt to use the latest CFLib and CFCore.
The Uuid formatting methods have been added to the CFLibXmlUtil implementation.
The code factory has been rebuilt and repackaged with the latest version of CFLib and CFCore.
In order to simplify the manufactured code, the logic for deciding whether to apply separators and attribute emissions has been centralized in CFLibXmlUtil.format[Optional/Required][Type]() routines.
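A sketch of the idea, loosely modeled on the format[Optional/Required][Type]() naming; the class and signatures here are illustrative, not the actual CFLibXmlUtil API:

```java
// Centralize separator and attribute-emission decisions in one place so the
// manufactured formatters don't each reimplement them.
final class XmlAttrUtil {
    // Required attributes are always emitted, with a leading separator space.
    static String formatRequiredInt32(String name, int value) {
        return " " + name + "=\"" + value + "\"";
    }
    // Optional attributes emit nothing when the value is absent, so callers
    // no longer have to decide about separators themselves.
    static String formatOptionalInt32(String name, Integer value) {
        return (value == null) ? "" : formatRequiredInt32(name, value);
    }
}
```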
Both packages have been recompiled with the changes in place to ensure that all code remains in sync.
MSS Code Factory has been refreshed, recompiled, and repackaged.
CFLibXmlUtil.formatBlob() has been added to the XML formatting/encoding repertoire.
MSS Code Factory has been rebuilt and repackaged using the latest versions of CFLib and CFCore.
The formatting functions for outputting XML-encoded attributes have been added to CFLibXmlUtil.
Both CFLib and CFCore have been rebuilt and repackaged.
Slightly different checking is now done for required string-type attributes versus general value attributes. Values have a minimum required length of 1 character, while string types can be of length 0. In practice, the values are further validated according to the data type, and many such data types have a fixed length and format specification to be followed.
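The distinction can be sketched as follows; these check routines are stand-ins for whatever the generated parsers actually call:

```java
// A required "value" attribute must have at least one character, while a
// required string-type attribute may legitimately be empty.
final class AttrChecks {
    static void checkRequiredValue(String name, String v) {
        if (v == null || v.length() < 1) {
            throw new IllegalArgumentException(
                "Required value attribute " + name + " must be non-empty");
        }
    }
    static void checkRequiredString(String name, String v) {
        if (v == null) {   // length 0 is allowed for string types
            throw new IllegalArgumentException(
                "Required string attribute " + name + " must be present");
        }
    }
}
```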
The XMsgRqst and XMsgRspn packages have been updated to correctly parse the grammars defined by their XSDs, found in the xsd directory of the Java source tree for the package, and shipped as a resource of the parser. There is no need to publish the schema to the internet; it is carried by the code itself. The XSDs have been validated by Eclipse.
The XSD specification files have been updated and fleshed out, and now properly express the intended grammar as validated by the Eclipse XSD validation tool.
I just realized this morning that I forgot to incorporate the Revision attribute in the X(ml)Msg layer. That's going to be kind of critical for "teleporting" a request via XML messaging.
Note that I'm trying to author the X(ml)Msg layers so that they can be used with any transport mechanism, not just JEE submission requests. You should be able to use the same parsers and message processing with message queueing, file transfers, or any other mechanism that you choose.
I also neglected to mention in the log notes that I exposed the delete-by-index methods through the TableObj layer. I didn't realize I'd forgotten to implement that piece of code until I tried to wire it up for the X(ml)Msg request parsing and processing.
The X(ml)Msg layer has been split into separate request and response packages because, although I like to do peer-to-peer coding, client-server is also very popular and there is no point bogging down a client with server/request parsing code.
The NoDataFound and Exception responses are now on a per-schema basis instead of per-table. Also, sketches of their element definitions have been added to the response XSD. The request and response XSDs have had their attribute lists corrected as well; they were still carrying some hidden attribute cruft from the original structured XSD code.
The exception factory has been enhanced to allow the X(ml)Msg response processing to recreate an exception on the client side based on an exception element parse.
CFCore and MSSCodeFactory itself have been recompiled to use the enhanced library, and they have all been pushed to the SourceForge site for distribution.
The attributes and accessors for LastObjectProcessed and SortedMapOfObjects have been added to the response parser, and are now used by the element processing of the response parser to pass back results to whoever invoked the parse.
Next I need to flesh out the Exception responses and parse them. On the way to doing so, I'm going to consolidate the Exception and NoDataFound messages to single definitions instead of one-per-table.
The X(ml)Msg production had only been enabled for CFDbTest20Work projects up until now as I developed the initial code. There is enough of it in place and clean compiling for me to turn it on for the other projects as well, so I've done so.
The X(ml)Msg response parsers have been largely coded, though they're not done yet. I need to work out how I'm going to let the invoker of the parse retrieve the data that was created/passed back by the parsing.
The X(ml)Msg request parser has been enhanced to support the delete-by-index requests. All of the requests are now coded to perform their duties, though they do not package up a response message as they'll need to in the future.
As long as you begin a transaction before applying the parser to a file, and handle exceptions to roll back or commit the transaction accordingly, you should be able to apply a request message to the database.
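That transaction discipline can be sketched like this; the Txn and Parse interfaces are stand-ins for whatever the schema object actually exposes:

```java
// Begin before applying the parser, commit on success, roll back on any
// exception -- the pattern described above for applying request messages.
final class RqstTxnRunner {
    interface Txn { void begin(); void commit(); void rollback(); }
    interface Parse { void run() throws Exception; }

    static boolean applyInTransaction(Txn txn, Parse parse) {
        txn.begin();
        try {
            parse.run();
            txn.commit();
            return true;
        } catch (Exception e) {
            txn.rollback();
            return false;
        }
    }
}
```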
The request parsing and application code for ReadByIndex and ReadAll have been coded and clean compile. All that remains to be fleshed out to this degree is the DeleteByIndex processing.
Note that the responses are not prepared yet by the request parser. So while you could apply a set of requests conforming to the request document schema to this parser and have those requests applied against the database, you wouldn't get any responses back to let you know what the results were.
The XMsg request parsers for Create and Update now parse out their attributes, apply them, and perform the requested operation.
Results aren't prepared/formatted yet, of course.
CFDbTest 2.0 clean compiles its XMsg library. Just a little more work done. There's a long way to go. Right now I'm putting the calls and processing in place for the request processing, littering the place with comments about formatting response messages.
The code doesn't even come close to being complete or working, but it clean compiles so that's a good release point.
The DB/2 LUW implementation has been modified to store Blob data as Base64-encoded CLOBs. This means that DB/2 LUW can't store full-sized objects as Blobs, because it loses capacity to the expansion ratio between Base64-encoded character streams and raw binary octets.
In compensation, DB/2 LUW now implements all of its operations as stored procedures with no client-side logic. This improves overall performance, especially in conjunction with the prepared statements used to access stored procedures.
Subject to the limited size constraints at runtime, DB/2 LUW now passes all CFDbTest 2.0 exercises correctly, and joins the ranks of PostgreSQL and Oracle as a complete implementation.
I'm working on switching over the DB/2 LUW implementation of Blob persistence from using BLOB columns to using CLOBs of Base64-encoded data. This allows the implementation of the stored procedures for create and update, which improves the performance of a deployed system at the expense of having to encode and decode byte arrays as Base64 strings in the JDBC layer.
Along the way, I discovered an error in the DB/2 LUW scripts caused by an old coding bug/error that treated MaxLen as an optional attribute of a BlobDef. MaxLen is required for BlobDef.
Do not download if you're using the manufactured DB/2 LUW code -- it's broken right now.
I think I've figured out how to work around the limitation of unreliable BLOB support with DB/2 LUW. Instead of storing the data in a BLOB, I'll store it in a CLOB twice the defined size, and use base 64 encoding of the BLOB data to persist it as a CLOB.
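The encode/decode side of that workaround in the JDBC layer is straightforward; this is a sketch, not the generated code (Base64 actually expands data by 4/3, so reserving twice the defined size leaves headroom):

```java
import java.util.Base64;

// Persist BLOB bytes as a Base64 string in a CLOB column, and restore them on
// the way back out.
final class BlobAsClob {
    static String encodeForClob(byte[] blob) {
        return Base64.getEncoder().encodeToString(blob);
    }
    static byte[] decodeFromClob(String clob) {
        return Base64.getDecoder().decode(clob);
    }
}
```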
Clumsy, but it should be effective at bringing DB/2 LUW up to full functionality, so I'm going to stop working on RC3 for a bit and focus on fixing the DB/2 LUW support instead for an RC2.
I still program of course. Perhaps not as obsessively as I once did, but quite regularly. So many ideas, so little time. It's tough to pick which idea to make reality next for my pet project.
Today I've decided to document what I'm planning to do for RC3.
I'm starting with the common code at the heart of it all: $SchemaName$XMsg, where XMsg stands for X(ml)Msg.
Over the years, I've done dozens of messaging projects, and 90% of them start out with a core set of services to create, read, update, and delete objects over the network using messaging protocols over a wide variety of technologies ranging from raw IPC to IBM MQ Series, and everything in between.
The heart of the messaging doesn't change much, so implementing it as an XML-body protocol is trivial. Verbose, but easy. That's what I'm working on right now: the parsers and XSD specifications for CRUD message requests.
I've always designed systems on a peer-to-peer basis as much as possible, though client-server-server was also very common. And that's my goal over the next few months leading up to RC2 -- the addition of a JEE server using the XmlMsgRqstParser to execute requests against the database of your choice.
Once the request parser is coded and wired into a JEE page to accept XML messages as HTTP "POST" operations, I'll shift focus to adding a $SchemaName$XMsgClient that uses the $SchemaName$XMsgRspnParser to evaluate the XML responses sent by the JEE server.
So you'll use an HTTP client module to send requests to a JEE server, which in turn hands off the processing of those requests to the stored procedures in any one of PostgreSQL, MySQL, DB/2 LUW, Oracle, Sybase ASE, or Microsoft SQL Server (subject to some bugs and restrictions, particularly for MySQL, IBM DB/2 LUW, Sybase ASE, and Microsoft SQL Server; only PostgreSQL and Oracle provide full functionality support of all the required concepts).
For RC4, after the fundamental technology is in place, I will turn my eye towards securing the communications beyond what you can do by running the JEE server over HTTPS service mountings. So in the future you won't need to rely on how well or how secure the HTTPS/SSL implementation is, because I'll be using Java's encryption APIs to send all responses to a given user with the encryption key they specify as part of their account options. It's under their control. They can be as NSA paranoid as they like, provided the key is supported by the standard Java JEE client and server deployments.
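As a rough illustration of the standard Java crypto APIs involved, here is a minimal symmetric round trip. Real deployments need IVs, authenticated modes (e.g. AES/GCM), and key exchange, none of which are shown; this is only a sketch of the building blocks, not the planned RC4 design:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Encrypt and decrypt a message with a freshly generated per-session AES key.
final class SessionCrypto {
    static byte[] roundTrip(String msg) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = enc.doFinal(msg.getBytes("UTF-8"));
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, key);
        return dec.doFinal(cipherText);
    }
}
```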
I'll have to look into the Java Wallet APIs as well, so I can access the private keys on the client side needed to decrypt those responses.
Similarly, the server will generate its own server key, which you'll receive when you log in. So the server will use a different key for each session for its request messaging as well.
I'm tired of being snooped on.
Android runs Java... different byte code, same language. Source code compatible.
An Android, Linux, Windows, or Apple client running a Java stack can then access those servers by including the Java or Android byte code jars for $SchemaName$, $SchemaName$XMsg, and whatever else is required to issue the requests to the server as specified.
You'd want to heavily restrict the Create, Update, and Delete messages beyond what I've coded by limiting participation in the corresponding table manipulation group memberships.
Instead, you'd extend the specification of the XSD's for requests and responses with your custom transaction APIs, ideally specifying their arguments in terms of the existing objects.
Overload the parser in an extension JAR for the client and the server to implement your parser extensions by extending and overloading $SchemaName$XMsgRqstParser and $SchemaName$XMsgRspnParser.
The reason you want to do this is that the operations the core server provides are all atomic requests -- there is no transaction control when manipulating the data.
This is particularly bad when data integrity requires the coordinated CRUD "statements" of a transaction to be treated as one larger atomic unit.
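The extension pattern described above can be sketched as a parser subclass in your own JAR; the class and method names below are placeholders for the generated $SchemaName$XMsg classes, whose actual dispatch mechanism may differ:

```java
// Stand-in for a generated request parser that dispatches by element name;
// unknown elements fall through to handleElement().
class XMsgRqstParser {
    boolean handleElement(String qName) { return false; }
}

// Application extension: add a custom transaction API on top of the atomic
// CRUD requests, so coordinated statements run as one unit server-side.
class MyAppXMsgRqstParser extends XMsgRqstParser {
    @Override boolean handleElement(String qName) {
        if ("TransferFunds".equals(qName)) {
            // run the whole transfer as one transaction on the server
            return true;
        }
        return super.handleElement(qName);
    }
}
```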
There's still a lot of work before RC3 provides me with the pieces I need to implement an actual application. RC1 embodies my original vision of providing the Java object coding pieces for the CRUD statements, as you need those to implement the server Rqst processing.
When I'm using RC3 to implement an application's custom services, a request will always respond with an Exception, a NoDataFound, a single object, or a list of objects derived from a common base class, ordered by the primary key in scope at that common base class.
It takes time to make a mountain of code.
The request parsers are sketched out and clean compile. They won't actually do anything yet as the code is incomplete, but the general structure is in place for them. The responses aren't complete yet -- there are many missing classes.
It's the last of the four digit releases. I've been working away at the X(ml)Msg layer, but it's not fully coded and doesn't clean compile. But it's a lot closer than it was this morning and yesterday.
The X(ml)Msg layer provides for request and response parsers so that a node can participate as a peer in the network.
The request and response messaging XSDs have been drafted, the parsers have been sketched out, and a couple of the element parsers thrown together crudely. There is a lot of work to do yet.
If you download this release, I do not recommend including the $schemaname$xmsg$release$ Java package in your build just yet -- it has many errors due to undefined types, so I haven't even tried to test build it yet. The XSDs should be ok, though.
There are three known bugs with the code produced by the release candidate:
DB/2 LUW will periodically throw SQLCODE 204 exceptions for no reason that I can fathom. Sometimes the code works, sometimes it doesn't. The manuals have been notoriously unhelpful, as they indicate that it means an object isn't defined that is being referenced, yet all the tables and columns exist that are being accessed, as are the stored procedures that are being called. The one common feature when the bug occurs is that the table being manipulated has BLOB columns.
Sybase ASE will allow you to delete data when you don't have an entry in the security tables allowing you to do so. Other permissions are validated properly, so I'm baffled as to why this particular issue remains.
SQL Server will also allow you to delete data when you don't have permission to do so.
I've spent hours trying to debug the Sybase ASE implementation to see why it's permitting deletes when permission should be denied. The core function sp_is_tenant_user() correctly finds permissions as evinced by the rejection of updates by users who don't have permission to do so, the sp_delete_optminval() stored procedure looks good and properly checks for permission DeleteOptMinValue, yet for some perverse reason the code executes anyhow without throwing an exception.
I had thought maybe it was failing silently and there needed to be more code at the JDBC layer, but that turned out not to be the case.
Until some point in the future when I fluke across the real problem, I'm just going to have to call Sybase ASE "done" for now.
Next up: Similar problems with SQL Server, and likely the same results, seeing as SQL Server code is based on Sybase ASE code.
The database schema names are now entirely dynamic so that you can point an application at any business application database which referenced this application's schema model. This will allow the construction of common "utility" programs for layers such as the security system to be used with all applications produced by the system instead of having to create custom versions for each referencing application.
There was an occurrence of "oracle" where "Oracle" was required, and thus things were going boom...
I had to rename CFParseEN to CFEnSyntax, due to some weird bug in Java/MSS Code Factory which flat out did not like the CFParseEN name for no reason that I can fathom. Regardless, renaming the project fixed the problem, so good enough.
CFUniverse now includes all the projects it should.
My world is complete -- for now.
There are still bugs with Sybase ASE and SQL Server to address.
There was one last remaining issue with the schema merges being exposed by referencing CFCrm. It turned out to be a fluke of the data that exposed a significant defect in the way I'd written the code. I was just lucky it wasn't a more common problem.
CFCrm has been added to CFUniverse now.
CFGui manufactures properly now, so it's been wired to CFUniverse as well. CFUniverse now contains everything except CFCore, CFParseEN, and CFCrm. There is a problem importing the CRM model for some reason. I'll need to fix that.
I noticed a problem while watching CFDbTest 2.0 produce its files. There were multiple "Tenant" relationship objects being produced for projects. The duplicate entries have been purged.
Everything needs to be remanufactured to clean up from the mistake.
Such is life.
There is now an msscf project created similar in scope to the ram project, to which the MssCF Java packages have been moved. This means that the core package produced for a project will no longer include the MssCF support by default; you'll have to specify the jar/library explicitly.
Hey, look, at least I finally caught up to Intel's release schedule with a '786. :P
I forgot to upgrade CFDbTest 2.0's model to reference the containing DomainBase object instead of a ProjectBase object.
That's not to say that the CFBam 2.0 code has been test compiled or packaged, just that the manufacturing run will now proceed without any errors from the CFEngine.
I need to remanufacture and repackage the projects CFInternet, CFCrm, CFBam, CFFreeSwitch, and CFAsterisk, as they all depend on the changes made to CFInternet.
There is still a problem with the SchemaDef manufacturing for the CFBam 2.0 SAX XML Parser code layer, but everything else has been tweaked and updated to fix the issues encountered to date.
I had to make some model changes to CFBam 2.0, though, so I need to start a full manufacturing run and call it a night.
There was a reference being followed, but it's optional, not mandatory, so there were problems with the CFBam 2.0 manufacturing. The new verb HasNarrowedRelationDef, in conjunction with its use in the rule base, should resolve the problem.
You can now manufacture CFUniverse.
The only project remaining to be fixed is CFParseEN. Once that's fixed and added to CFUniverse, I'll have brought all the projects up to date with the new syntax, as well as having added several.
The CFBam and CFUniverse models have had some corrections applied, resulting in more obsolete code to be pruned from CFBam.
Now that I've merged the Token types and columns, the CFUniverse model should build/manufacture.
The license headers of the Dual GPLv3/Commercial licensed projects have been updated to reflect the references to CFSecurity, CFInternet, and CFCrm by application code instead of the old conglomerate CFSme project.
The CFIso objects could not be compiled without the CFSecurity objects, so they've been merged into one model and CFIso has been purged from the system.
The schema name is now dynamically set by PostgreSQL based on the database name specified in the configuration file. Thus you can point any applications that were built using CFDbTest 2.0's code base to any schema that was manufactured with a reference to CFDbTest 2.0, and the custom CFDbTest 2.0 client code would run because all of the stored procedures and signatures should match.
After all, there is no way to add extra columns to a table after the table has been defined, which prevents anyone from changing the signature of the stored procedures associated with a referenced/imported table.
PostgreSQL regression tests have passed.
Apologies. Beta 21 should never have been issued. This release fixes all the problems Beta 21 had.
The PostgreSQL JDBC interface is now entirely dynamic with respect to the use of SchemaDbName in all prepared SQL statements and dynamic SQL invocations.
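What "entirely dynamic" means in practice is that the schema qualifier in every SQL string is expanded from a runtime setting rather than baked into a final constant. A sketch, with illustrative names rather than the generated code:

```java
// The schema qualifier is resolved at call time, so a later
// setSchemaDbName() call takes effect on all subsequently built SQL.
final class DynamicSchemaSql {
    private String schemaDbName = "cfdbtest20";   // illustrative default

    void setSchemaDbName(String name) { this.schemaDbName = name; }

    // Built on demand instead of being a static final string.
    String readByIdSql() {
        return "SELECT * FROM " + schemaDbName + ".tenant WHERE id = ?";
    }
}
```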
The code is now ready for regression testing.
The "Parent" lookup hack is now part of production. Contrary to what I'd hoped, restoring the Table.Lookup references to the lookup indexes did *not* correct the problem with the XML SAX parser code. The old syntax/undefined function invocation error resurfaced.
So the hack is back, and here to stay.
This version is finally good enough for a release. There is still more work to do on the PostgreSQL layer, but as long as you're not changing the value of setSchemaDbName() (which you wouldn't be because it's a new API), the PostgreSQL code should *work*; it's just not fully dynamic yet.
And for that reason I'm going to postpone the next beta until I have that new functionality fully implemented and regression-tested for PostgreSQL.
The Lookup indexes weren't being properly resolved by the merge process, so I've done that and backed out the check for "Parent" relationships because the ones in question should have been identified as named lookups. Now that the lookups are properly wired, that code should function as *originally* intended instead of requiring the recent workaround/patch.
And after a couple days of wrasslin', we're back to producing clean compiling (but not regression tested) code.
Use at your own risk.
This may resolve my bad XML problem.
There are still problems with the manufactured SAX XML Parsers producing invalid code, but I'll get that figured out in due time. I know what I want to do to resolve the issue, and I think the pieces of the engine I'm thinking of using already exist.
The iterator SchemaDef.SchemaRef has been added to the repertoire of the engine's integration with the business application model (CFBam).
This is a development release. It does not produce functional code.
The implementations must now be updated so that any strings which depend on the expansion of the runtime dbSchemaName are no longer implemented as final values; the expansion has to be interpreted at runtime so that the SQL interface adapts to the global environment variable configuration.
The CFSme 2.0 project is now distributed under an Apache 2.0 license, and referenced by the other CF* 2.0 projects, which give credit to the underlying Apache 2.0 code in compliance with a reasonable interpretation of the Apache license, as it does not specify that anything needs to be retained from the original license header when extending the code. Import CFSecurity, CFCrm, or CFInternet (recommended) into your projects.
The CFBam model is published under a Dual GPLv3/Commercial license, not the Apache license. The same goes for the CFGui model.
The new <SchemaRef Name="CFSecurity" IncludeRoot="net.sourceforge.MSSCodeFactory.CFSecurity.2.0.CFSecurity" /> element, specified under a <SchemaDef>...</SchemaDef> pair, allows you to import and name referenced models rather than manually copy-pasting their common model code from the CFSme 2.0 model specification file, as you had to do until now.
The projects included by CFSme 2.0 are all published under the Apache 2.0 license, including CFIso, CFSecurity, CFCrm, and CFInternet.
Under the Dual GPLv3/Commercial license are CFUniverse, CFBam, CFGui, CFParseEN, and CFCore.
If you want to see code bloat made flesh, import the CFUniverse project, which includes everything that manufactures code cleanly to date. It's the kitchen sink project, under GPLv3/Commercial licensing.
All of the databases now have their full primary feature sets implemented. The most important bugs are that SQL Server 2012 doesn't seem to perform cascading object deletion properly, resulting in duplicate key errors during the "Replace Complex Objects" test. All of the other databases no longer have a problem where they permit users with Update permissions to Delete data that they shouldn't be allowed to. Only Sybase ASE 15.7 and SQL Server 2012 still have those particular bugs left to be fixed before the production release.
The EnumTag children of an imported schema are now properly integrated with the destination schema.
The CFSme and CFParseEN projects don't want to manufacture.
I also realized I forgot to clone/copy the enum tags when merging an EnumDef. That's not critical because none of the rule sets *evaluate* the enum tags, but it needs to be taken care of before I forget about doing it.
Once I've got that extra code written and tested, and CFSme and CFParseEN are manufacturing properly, I should be good for a refresh of the beta. I don't consider the current beta to be as functional as I'd like. I'm also getting compile errors in the CFDbTest20 SAX parser code, which makes no sense to me as the rules for how to produce a SAX parser have not been changed. There may be some subtle thing I forgot to merge/migrate with the column definitions; I'll have to look into it.
You can now reference and import schemas to your model by using the new SchemaRef elements of a SchemaDef. Examples of how to use this new construct are present in all the models provided by the distribution packaging.
You need to import CFIso and CFSecurity at a minimum; CFInternet is highly recommended as well. CFSme makes the CFCrm model available under the Apache license, but you do not need to include it in your application models. As a result, most applications that are modified to reference CFIso and CFSecurity should see a substantial reduction in code size compared to copy-pasting the old complete CFSme model.
This version of the code has successfully manufactured CFDbTest 2.0 using the new SchemaRef syntax in the model files. The parser changes have been applied across the board, and the code is ready to rock'n'roll.
I had forgotten to specify the RelationType when merging the relations, so of course there were problems at runtime because all the relationships were defaulting to unknown, causing problems for the tenant and cluster id resolutions.
I'm comfortable with the level of schema merging that is going on with the parser. I think I've covered pretty much every critical piece of information that needs to be merged into the target schema. I'm now ready to shift this over to the Windows laptop workhorse and let it chew on a test run of CFDbTest 2.0. In theory, despite the changes to the way the MSS BAM models are structured, there should be little to no change in the resulting code being produced for CFDbTest if my merge process was successful.
All that remains is to make a pass through the object to resolve the optional lookups from tables to their superclass relations, and from resolving the narrowed relationship references.
The CFBam specification file parses successfully, but only because the relations aren't being processed and merged yet. (Nothing ever looks up a relation by name, so by fluke there is nothing setting off the exceptions that should be possible right now.)
The index columns are now properly merged while merging in the indexes from a source schema, which resolved the name not found exceptions that were being reported by the existing relationship processing.
This code may run cleanly, but it's still not ready for testing, much less deployment yet.
The table and index definitions are now properly merged from the referenced schema.
Next I need to flesh out the remaining fragments of the schema type merging process. It's kind of verbose, but what can you do?
After that I need to flesh out an equally big piece of code to merge the table columns. There is a stub routine in place, but it doesn't really do anything yet.
That will clear the way for merging the index columns.
Which will finally leave the relationships, which are just empty stub routines for now.
The code for merging the schemas has been outlined in terms of the steps to be performed. A large amount of the class hierarchy evaluation code has been fleshed out and is not hitting any unresolved types (though it doesn't actually clone the atoms yet), and the top level objects for the Table and Index definitions are merged into the target schema after performing a load of the referenced model.
There is much still to be done before I'd consider downloading a development release.
I think I'm done with the parser changes themselves. The code now successfully imports referenced schemas and can resolve them in the current dictionary model after they've been loaded.
Next I need to apply the loaded schema elements to the current referencing schema. This merge process is going to be one big whack of code, but it's not really all that complex in reality.
The SAX Parser changes for the MSSBam loading are pretty much done.
It properly invokes sub-object loading, but does not merge the definitions yet. I'm not entirely sure the name is resolvable after the load; there are some things I'm still looking into.
But it does execute a parse of the new model definitions. It just doesn't *apply* them properly yet.
I still don't recommend downloading this release. Consider it a work-in-progress debug instance.
The SAX Parser has been enhanced with partial support of the SchemaRef elements of a SchemaDef. The SchemaRef is instantiated, initialized, and persisted, and the referenced schemas are loaded by sub-parsers now, but I still need to merge the resolved SchemaDef into the current schema being defined.
Then I'll need to rework a number of the parsers so they don't update elements that already exist in the object hierarchy, such as the TLDs and domain objects leading down to the SchemaDef we're interested in resolving.
Don't download this release -- it's a work in progress snapshot that still throws exceptions; they're just more meaningful now and consistent with the interface I want to eventually deploy.
The CFSme 2.0 project is now distributed under an Apache 2.0 license and is referenced by the other CF* 2.0 projects, which give credit to the underlying Apache 2.0 code. This complies with a reasonable interpretation of the Apache license, as it does not specify that anything needs to be retained from the original license header when extending the code.
CFSme is now licensed under a dual LGPLv3/Commercial license, so you are allowed to extend its implementations without providing a BSD-style code credit.
It also allows you to release your manufactured code under GPLv3, extending the LGPLv3 code base and restricting it to GPLv3.
The new SchemaRef.RefSchema reference was incorrectly registered as SchemaDef.RefSchema.
The binding of SchemaRef.HasRefSchema had been incorrectly registered as SchemaDef.HasRefSchema.
The binding AnyObj.HasDefSchema and the reference AnyObj.DefSchema have been added to the engine.
Remove remaining vestiges of Singularity One licensing.
Updated the XSD specifications to incorporate the new SchemaRef element of a SchemaDef, and to add the RefText attribute to the License element.
The license headers have all been refreshed with a view to it now being 2014, including extending the copyrights of older files from the year they were created to a year range up to and including 2014.
The new attributes of the BAM objects add the bindings License.RefText, License.HasRefText, SchemaRef.HasRefSchema, and the reference SchemaRef.RefSchema (dereferencing SchemaDef.)
Next I need to update the XSD which defines the XML file syntax, and implement and wire the XML SAX parser classes for the SchemaRef object.
Then I need to update the License parser to support its new RefText attribute.
Then I need to check my todo list and see what's next for XML parser changes -- there is a lot of detail to flesh out there to deal with reloading of non-SchemaDef-contained data. That's why I create those "Programmer's Notes" sections -- so I can follow through on my thoughts.
Downloading this release is harmless, but it doesn't really do anything new as far as code manufacturing goes. The manufactured code should be unchanged from the previous release.
I just wanted to post a note apologizing for the lack of progress on the new features I've been planning to work on. I haven't been feeling all that well of late, but I will be able to pick up some medication in the near future and hopefully I'll get some work done then.
On the bright side, seeing as no one pays me to work on this beast of mine, I don't feel guilty about the delays, just regretful.
The RefText will have to be populated as part of the License object parsing.
I'm working on being able to import schema references as part of the 1.11 model syntax. The first step is to wire in the new attributes that are required by the object, which was done by simply updating the 1.10 model for the MSSBam 1.11 specification.
Add in a SchemaRef object that is a component of a SchemaDef, which is used to bind/import schema definitions to the current schema.
Also wire a DefSchema element and relationship to the AnyObj, so that the parser can be modified to process the schema include relationships.
The idea is simple. When an import reference is parsed, the parser tries to locate that schema and load it. The resolved schema object is then scanned, and its contents are bound to the current schema. For any elements of the referenced schema with null DefSchema specifications, the imported element is marked as being defined by the imported schema. For elements with non-null references, the reference is propagated to the cloned object in the current schema.
So as far as the main code manufacturing process goes, it'll still see a fully defined and qualified schema specification. It's just that some of the elements will have been defined by an external schema instead of the current one. A null DefSchema means that it was defined by the containing schema.
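The DefSchema propagation rule above can be sketched as follows. This is a hypothetical illustration, not the actual MSSBam API; the class and method names are made up for the example:

```java
// Hypothetical sketch of the DefSchema propagation rule described above.
// When cloning an element from a referenced schema into the current schema:
//   - a null DefSchema on the source element means it was defined by the
//     schema being imported, so the clone records that imported schema;
//   - a non-null DefSchema is propagated unchanged to the clone.
public class DefSchemaMerge {

    // Returns the DefSchema name the cloned element should carry
    // in the current (referencing) schema.
    public static String resolveDefSchema(String sourceDefSchema,
                                          String importedSchemaName) {
        if (sourceDefSchema == null) {
            // Element was defined by the containing (imported) schema
            return importedSchemaName;
        }
        // Element was itself imported; keep the original reference
        return sourceDefSchema;
    }
}
```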
The definitions above a SchemaDef will have to be modified to allow for the fact that their definitions may already exist, and if so, simply reuse the existing definition rather than re-defining it.
Licenses are going to be fun to modify, too. I'll want to add a RefLicense body block to the license text parser, so that when you are manufacturing a Table that specifies a DefSchema, you use the RefLicense from that schema definition as well as the License body from this SchemaDef. That will allow for incorporating BSD code into GPL projects. The presumption, of course, is that your project's license is compatible with the requirements of the RefLicense.
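The license composition described above might look something like this sketch. The class name and the exact concatenation are assumptions for illustration; the real parser would presumably work on structured license bodies rather than raw strings:

```java
// Hypothetical sketch of composing a manufactured file's license header.
// A Table whose DefSchema points at an imported schema carries that
// schema's RefLicense body ahead of the current SchemaDef's own License
// body, e.g. a BSD credit block followed by the project's GPL header.
public class LicenseHeader {
    public static String compose(String refLicense, String license) {
        if (refLicense == null) {
            return license; // defined locally: no upstream credit needed
        }
        return refLicense + "\n\n" + license;
    }
}
```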
DB/2 LUW 10.5 requires the use of the NUMERIC type when setting null values, rather than the DECIMAL type.
There is still a bug in one of the tests where an SQLCODE -103 is being thrown when running the CFDbTest 2.0 test suite from the command line, but when running the failing test (Replace OptFullRange) under the Eclipse debugger, the error does *not* occur. This had been a problem with the previous release of DB/2 LUW as well (10.2).
If I can't replicate a problem, I can't very well fix it. Aside from that, both the insert and the update code work for earlier tests on the same data object, yet fail for this particular test, even though the same values are being inserted!
Update DB/2 LUW supported release to 10.5
Oh, now this *is* embarrassing. The delete permission checks have been just fine all along. The problem was that I specified replacing the OptFullRange objects, not the OptMinValue objects, in the command line invocations of the test driver programs in the various CFDbTestRun*Tests* scripts.
The delete code itself is *just fine*. :D :D :D
Well now, isn't this interesting...
I checked my permission table population in PostgreSQL, and it is ok -- only one entry for the DeleteOptFullRange table (the CRUD user)
I tested the sp_is_tenant_user() stored proc for the CRU and CRUD users, and they properly return false and true respectively for the DeleteOptFullRange permission check.
Next I tried testing sp_delete_optfullrng() for those two users, and much to my surprise, the CRU user was denied permission by the stored proc.
The reason this is surprising is my mainline test program is letting that user go ahead and delete data.
There must be an error in the way I'm initializing or passing the user id to the stored proc when invoking it from the client. Or perhaps a problem with the query for the user id UUID from the table during the initial authorization of the client program.
But this is probably a problem *specifically* with the invocation of deletes, because the read and update tests pass just fine, which suggests that the user id *is* being properly initialized in the client code.
Regardless, the problem is NOT where I thought it was...
Microsoft SQL Server 2012 Advanced Express Edition was used for testing this release. The "Replace Complex Objects" test fails, but I've no idea why -- I'm not getting any exceptions thrown by the code at the server nor at the JDBC layer. Maybe there is some other way the SQL Server 2012 can report errors to the client, but I've been getting proper exception throw reporting in every other case I've tested, so I *should* be getting them if the sp_delete_dbtablename() and their cascading sub-object deletes are failing for any reason.
All of the databases also have a problem with allowing unauthorized deletion of data. I suspect this has to do with a problem in the XML SAX parser/loader code itself, rather than being a problem with the underlying database stored procedures or the JDBC layers themselves. The fact that it's a consistent problem across all the databases is what supports that idea.
However, this is only a beta -- it doesn't have to be perfect. I just need to make sure I fix the outstanding issues for all the databases during the production cleanup of the implementations (I expect some of the later databases may have had additional functionality implemented that I need to backport to the earlier databases. They're largely equivalent in functionality, but I think I might have added some error checking later in the migration efforts.)
It took a bit of finagling, but I made sure the SQL Server 2012 beta was a "420" release in honour of the world's most effective anti-migraine medication.
The JDBC layer for SQL Server 2012 has been updated and refreshed to support the new stored procedure interfaces, audit stamping, and history tables. The code clean compiles and is ready to begin testing.
The Microsoft SQL Server 2012 database creation scripts have been updated with the new functionality from the Sybase ASE 15.7 implementation, and tested with the Express edition of SQL Server. The JDBC layer has not been refreshed and updated yet, so this is not a functional build for SQL Server support, but if you're curious about the work in progress, download it and give it a run.
The other databases are all fully functional, of course. SQL Server is the last one to be tackled.
The support for Sybase ASE 15.7 now passes all of the CFDbTest 2.0 regression tests and is ready for use. This beta also incorporates important fixes for the Oracle support, and some minor fixes for the other databases.
It is strongly recommended that you upgrade to this release and remanufacture your code in order to apply those bug fixes and patches.
There is still a problem with the delete stored procs which is causing a duplicate key error when Replacing a SchemaDef during the tests. All other tests are passing with flying colours.
Most of the Sybase ASE 15.7 CFDbTest 2.0 tests pass properly now. There are still a couple of nagging issues to be chased down and resolved before the next beta will be released, however. But it shouldn't be much more than another day, if that, to the next release.
Some of the tests are running now. Reads and inserts seem to be ok. I think deletes are ok as well. But I'm getting errors due to duplicate names and I need to look into whether that's an issue with the way I've enhanced the read security or not.
The reads clearly don't work or else the sp_bootstrap() is failing, but for whatever reason I can't resolve the cluster "system" and its tenant "system". So *all* of the tests for Sybase ASE fail with glorious spews of error messages! :)
Client-side inserts now populate the required forcesynclock column.
The command line arguments for the Sybase ASE SAX Loader are now compatible with those of the PostgreSQL implementation. The scripts for invoking the Sybase SAX Loader can now be upgraded to match those used for PostgreSQL testing.
The JDBC enhancements for the Sybase ASE 15.7 binding layer have been coded and are ready to be integrated and tested. There are some changes to the SaxSybaseLoader implementation that need to be brought over from the PostgreSQL implementation of the same layer in order to drive the enhancements that will be made to the test invocation scripts for Sybase. The extra arguments are needed to allow for security constraint tests at runtime.
The remaining stored procedure bindings for the sp_delete_dbtablename_by_suffix implementations have been implemented as prepared statements with dynamic arguments. In other words, delete-by-index is now correctly implemented for Sybase ASE 15.7.
All that remains are the client-side changes for auditing create and update records for BLOB client bindings, and supporting the audit columns themselves as required for the client-side code. Client-side code will not function correctly right now, so there is no point testing what I know is still broken.
All of the existing stored procedure invocations have been refreshed to pass the standard security arguments.
The client-side insert and update implementations for dealing with BLOB data need to be refreshed to support audit history and audit columns/attributes.
The JDBC bindings for invoking the various sp_read_* and sp_lock_dbtablename stored procedures have been refreshed to match the latest invocation signatures.
The unwrapping of the responses needs to be updated next to allow for the audit columns when those are required, before the new read and lock routines will actually be ready for testing.
Not much to really note here. I've fleshed out the prepared statement buffer hooks and their cleanup code, because resource management is important. When you're done with a connection, you have to release your prepared statements, because their bindings are unique to an allocated connection, and have to be refreshed for each database session request.
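The prepared statement buffering described above can be sketched roughly like this. This is not the actual JDBC layer code; the `Stmt` stand-in replaces `java.sql.PreparedStatement` so the sketch runs without a live connection, and the class and method names are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-connection prepared statement buffering.
// Statements are cached by their SQL text and must all be released when
// the connection is done, because each binding is unique to the
// connection that prepared it.
public class StmtBuffer {
    // Minimal stand-in for a prepared statement, so the sketch runs
    // without a database connection.
    public static class Stmt {
        public final String sql;
        public boolean closed = false;
        Stmt(String sql) { this.sql = sql; }
        void close() { closed = true; }
    }

    private final Map<String, Stmt> buffered = new HashMap<>();

    // Return the cached statement for this SQL text, preparing it on first use.
    public Stmt get(String sql) {
        return buffered.computeIfAbsent(sql, Stmt::new);
    }

    // Release every cached statement; the bindings cannot outlive the
    // connection, so they have to be refreshed for each database session.
    public void releaseAll() {
        for (Stmt s : buffered.values()) s.close();
        buffered.clear();
    }

    public int size() { return buffered.size(); }
}
```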
The Sybase ASE 15.7 database creation scripts for MSS Code Factory CFDbTest 2.0 install cleanly.
Some of the other databases were not properly invoking the specialized delete methods for subclass objects/tables from sp_delete_dbtablename_by_index(), which could have been a problem for more complex data structures. This has been corrected, and the updated Oracle creation scripts are shipped with this installer. I believe DB/2 LUW, PostgreSQL and MySQL were ok, but you should download the 1.11.9215 installer and make sure you're up to date, because it was a rather serious bug.
Next up: Refreshing and resyncing the JDBC layer.
sp_is_system_user(), sp_is_cluster_user(), sp_is_tenant_user(), and sp_bootstrap() clean install to a Sybase ASE 15.7 instance. The id generators also clean install, but those hadn't been modified since the last Sybase refresh was done.
Many errors have been corrected throughout the stored procs, but not enough for a clean install yet.
The database structure definition scripts including table, index, and relationship creations have been tested.
The stored procedures are still a mess, though.
The changes to the Sybase database creation and stored procedure scripts are complete and are ready to begin test installations to a database server.
The implementation of the sp_create_$dbtablename$() implementations has had a number of syntax errors and formatting tweaks applied. It should be closer to a clean install when the time comes.
sp_delete_$dbtablename$() has been updated and now implements audit columns and history insertions.
The implementation of the crsp_create_$dbtablename$.isql code has been updated and refreshed to implement auditing and history.
The implementations of the crsp_delete_by_$index$.isql code have been updated and refreshed to properly implement iterative invocations of sp_delete_$dbtablename$() according to the class codes of the derived object. The implementation of sp_delete_$dbtablename$() itself has not been updated yet.
The code has not been test-installed yet -- I won't do that until I'm done refreshing all of the stored procedure implementations.
The sp_lock(), sp_update(), sp_read(), and sp_read_by_index() implementations all return result sets with the audit columns now, but they have not been updated with the security enhancements yet.
sp_lock() will also see the addition of a new implicit base column, forcerecsync, which will be a bigint not null, initial value 0. The implementation of the locking code in all of the updating stored procs will attempt to increment this column as part of their data locking procedures. If you already have an update lock on the record at runtime, this will proceed properly. If you do not have the lock and someone else has updated the lock flag, you are thread-blocked at the database until the locking transaction completes. You will probably get a data collision detected exception in this situation when you resume execution.
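The forcerecsync mechanism lives in the stored procedures, but the collision check a client would experience after the database-level block resolves can be modelled with a simple compare-and-bump, as in this hypothetical sketch (the class and field names are invented for illustration):

```java
// Hypothetical model of the forcerecsync locking described above.
// sp_lock() increments the counter; a caller whose view of the counter
// is stale loses the race and sees a data collision.
public class RecSync {
    public static class Rec {
        public long forcerecsync = 0L; // implicit base column, initial value 0
    }

    // Attempt to take the update lock: succeed only if the caller's view
    // of the sync counter matches the stored one, then bump it.
    public static boolean lock(Rec rec, long expectedSync) {
        if (rec.forcerecsync != expectedSync) {
            return false; // someone else updated the lock flag: collision
        }
        rec.forcerecsync++;
        return true;
    }
}
```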
I'm not dead yet, though with the way the migraines have been going, I sometimes wish I were. I've been (very) slowly working on the Sybase ASE 15.7 support, but it won't be ready for a while yet. I should have that beta out by year end, though.
The upgrade to Ubuntu 13.10 on my box failed horribly, so I've spent most of the day doing a Debian install and recovering data. I lost nothing important (I'm paranoid and have multiple copies of anything important), but it's taken a fair bit of time.
I don't expect to be back in a position to do any coding for a while yet. Maybe tomorrow. On the other hand, I was planning on taking a break before diving into Sybase ASE support, so the system failure came at about as good a time as it could.
Oracle 11gR2 passes the CFDbTest 2.0 integration tests.
Note that there are some weirdities with the permissions for Oracle and DB/2 LUW. Both of those databases will grant permission to delete data if update permission has been granted to a user. This shouldn't happen, but it's a low priority bug at this time, so I won't worry about it until after I've finished upgrading Sybase ASE and Microsoft SQL Server support.
The Oracle JDBC and XML parser mainline enhancements have been coded and clean compiled. I'm ready to begin testing the integration of the new stored procs and the new JDBC layer.
I'm a pessimist; I expect it'll be a couple of days at least before it works. But if you're impatient, you can download this release and give it a go. Personally I'd check back in a day or few, or wait for the Beta 18 installer, when the code will have been tested and debugged.
The Oracle 11gR2 database install scripts run clean again. This is a non-functional release because the JDBC hasn't been updated to match the stored procedure signatures, but if you're using the stored procedures from other languages you might find this development release useful.
PostgreSQL, MySQL, and DB/2 LUW support is unchanged and still works.
The Oracle DDL for creating the tables, indexes, relationships, and history tables have all been tested. The creation of the history table indexes has not been tested yet. The stored procedures are rife with install bugs at this point in time, but it's been a very long day and I'm calling it a night.
The Oracle database scripts and stored procedures have all been refreshed and updated with the functionality available under PostgreSQL, MySQL, and DB/2 LUW.
The scripts and procedures have not been installed to a database instance yet. I expect a long cycle of lonely installs, bug fixes, typos, long object name issues, and other such problems before I have a clean install of CFDbTest 2.0 for Oracle.
The Oracle DDL should be in sync with PostgreSQL, MySQL, and DB/2 LUW now. The stored procedures have not even been migrated yet, though, much less had their functionality incorporated by the existing Oracle stored procedures. I'm not sure if I implemented the "hack" for dealing with optional index columns in queries, either. That might not get refreshed until later.
Regardless, this is definitely untested and non-functional code for Oracle, though the other databases still work, of course.
I've invested the time in the CFBam 2.0 model to get it to manufacture properly again. I don't know how good a job I did of merging the CFDbTest and CFBam models for BAM's, but it should be close.
Along the way I encountered and fixed a bug that affects all of the database manufacturing layers. If a given table has too many data columns, prior releases of MSS Code Factory will produce an exception about a switch limb being too long, because each data column was appending a "yes" to the evaluation value. Now the rules append empty strings or a "no" if there are no data columns, so the exception should no longer happen.
I'm down to two errors for the DB/2 LUW implementation of CFDbTest 2.0. Both are cases of messages like the following when running the test suite from the command line. However, when you run those loads under the Eclipse debugger, they load just fine, as they should. So I have no outstanding bugs that I can replicate under the debugger and tackle.
Therefore I'm going to have to give DB/2 LUW a "qualified pass" for Beta 17.
This beta includes support for PostgreSQL 9.1, MySQL 5.5, and DB/2 LUW 10.2 as tested under Ubuntu 13.04 64-bit.
While looking for something significant about "8680", I came across the Massey-Ferguson 8680 tractor. Not exactly relevant to programming.
PostgreSQL 9.1 and MySQL 5.5 work as well, MySQL with date limitation constraints, PostgreSQL with a full unqualified pass of all tests.
Just to be clear: the affected DB/2 LUW code only comes into play for objects that contain BLOBs. As long as you aren't using BLOBs in your application, you should be good to go with DB/2 LUW.
It's getting close to beta 17 time, but I'm not there yet. But progress has been made and I was shooting for a 666 release but I missed it by one. :P
There still is a problem with client-side data manipulation due to issues with the bindings and invocations of the id generators. I know how to fix it (I think), but I'm not going to deal with it right now. It's been a long day, and I've made tremendous progress from a starting point of no runnable code this morning to half of it passing the tests this evening.
Not bad for one day.
This is a work-in-progress release. DB/2 LUW doesn't work yet, but it clean compiles, installs, and tries to run the appropriate sp_read() routines, which I've tested by hand using the db2 command integration.
The good news is the stored procedures seem to work ok.
The bad news is they're not working ok from the code, and I need to figure out why. Most likely I missed some critical detail in copying over the initial data setup/handshake that happens during the creation and registration of the command line arguments in the CLI code.
Yeah. That makes the most sense right now. I'll look into that next.
Creating a DB/2 instance and running the tests is straightforward. I run the following steps over and over, stopping when something errors out, fixing it in the rules, and repeating as necessary.
It takes a long time to create a DB/2 LUW instance compared to PostgreSQL 9.1 or MySQL 5.5 on the same OS and hardware. But the runtime performance is good, so suck it up, cupcake. This is a database that really benefits from prepared statements, with performance improving by as much as 50% when using prepared statements instead of dynamic SQL invocations.
developer> cd ~/msscodefactory/net-sourceforge-MSSCodeFactory-CFDbTest-2-0/dbcreate/cfdbtst20/db2luw
developer> su -l db2inst1
db2inst1> db2 drop database db2inst2;
db2inst1> db2 create database db2inst2;
db2inst1> db2 connect to db2inst2 user db2inst1 using mypasswordisspecial
db2inst1> db2 create tablespace msidata1;
db2inst1> db2 create schema cfdbtst20
db2inst1> . ./crdb_cfdbtst20.bash
db2inst1> db2 connect reset
As a pointless bit of trivia, Radio 620 means only one thing in these parts: one of the folks' favourite radio stations when in Regina, http://www.620ckrm.com/.
Both the PostgreSQL 9.1 and MySQL 5.5 regression testing via CFDbTest 2.0 has been successfully completed.
This release is worth downloading, as it incorporates several new details for enforcement of security and data isolation as well as tweaks to the auditing implementation.
I had made some copy-paste changes from the DB/2 LUW implementation for the indexes on the history tables. There were some changes that had to be made to comply with MySQL naming conventions and case sensitivity.
But it's working again. Now on to testing the JDBC layer.
The PostgreSQL 9.1 implementation and testing with the new features backported from the DB/2 LUW efforts and the MySQL efforts have been integrated and tested with CFDbTest 2.0.
That's one ready for the next beta release.
Next up: MySQL 5.5.
All of the database binaries have a clean build to go with their new stored procedure specifications configured into their respective test database instances, and I am ready to begin testing.
Wouldn't it be a kicker if it all worked?
Yeah, right. I believe that'll happen, too. :P
All three databases have been clean-installed to server instances, and their correspondingly refreshed JDBC implementations have been clean compiled.
Everything now awaits on that XML SAX parser remanufacturing.
Soon, I hope...
The MySQL 5.5 and PostgreSQL 9.2 JDBC integration code clean compiles and is ready for testing.
I'm still waiting on the main code refresh to get around to manufacturing the XML layers.
I can still work on the DB/2 LUW JDBC integration refresh. That's not ready to test yet.
The MySQL 5.5 support as tested on Ubuntu 13.04 installs cleanly for CFDbTest 2.0. Next up I need to refresh the JDBC code for MySQL interfacing to match the changes made to the stored procedure signatures.
The MOS 8502 was a follow up to the infamous 6502 and 6510.
The PostgreSQL JDBC enhancements clean compile, but I won't be able to run regression tests until my laptop finishes remanufacturing the XML layers to support the new objects. There are stale references caused by the object model changes that are resulting in errors compiling the old XML code, so I can't do a quick-and-dirty test right now. Give it a few hours.
Note that the JDBC layers for PostgreSQL and DB/2 LUW are not in sync with the stored procedures for those databases, so this release does not produce functional code.
It is a work-in-progress release in case you are depending on the PostgreSQL stored procedures for integration with another language or interface toolkit.
This version of CFDbTest 2.0 and the SME template have been modified to eliminate self-referencing objects, which imply recursion, which DB/2 LUW does not support.
Instead of infinitely deep project and domain trees, you are now limited to TLD.TopDomain.Domain.SubDomain and Project.SubProject.
I've resolved as many of the installation issues as possible for DB/2 LUW 10.1. There are still two errors being reported by the installation process, caused by circular invocations of a function by itself (i.e. recursion). Apparently DB/2 does not like recursion, and there is no way to work around it being needed for certain delete implementations, such as self-referential object hierarchies.
I'm going to shift over to PostgreSQL for a bit, and get that code in shape with the idea of consistent arguments, and fleshing out the read restrictions to match those implemented for DB/2 LUW.
2013-10-07 06.10.33 create or replace procedure sp_delete_domdef(
2013-10-07 06.10.34 DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned:
2013-10-07 06.10.34 SQL20481N The creation or revalidation of object "DB2INST1.SQL131007061034100" would result in an invalid direct or indirect self-reference. LINE NUMBER=136. SQLSTATE=429C3
2013-10-07 06.11.38 create or replace procedure sp_delete_sprjdef(
2013-10-07 06.11.39 DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned:
2013-10-07 06.11.39 SQL20481N The creation or revalidation of object "DB2INST1.SQL131007061139200" would result in an invalid direct or indirect self-reference. LINE NUMBER=157. SQLSTATE=429C3
It turns out that the sp_delete() routines in my CFDbTest 2.0 model do not eventually resolve each other after multiple passes. Other cases did resolve themselves, such as the version hierarchy, but not the projects and domains. There is a circular dependency in their code that I need to untangle.
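Untangling that kind of circular dependency starts with finding it. A depth-first cycle check over the sp_delete() call graph would flag exactly the procedures DB/2 rejects with SQL20481N; this sketch and its procedure names are hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: detect circular sp_delete() call dependencies with
// a depth-first search, since DB/2 LUW rejects direct or indirect
// self-references (SQL20481N) when creating the procedures.
public class DeleteCycleCheck {
    public static boolean hasCycle(Map<String, List<String>> calls) {
        Set<String> done = new HashSet<>();   // proven acyclic
        Set<String> onPath = new HashSet<>(); // current DFS path
        for (String proc : calls.keySet()) {
            if (dfs(proc, calls, done, onPath)) return true;
        }
        return false;
    }

    private static boolean dfs(String proc, Map<String, List<String>> calls,
                               Set<String> done, Set<String> onPath) {
        if (onPath.contains(proc)) return true;  // back edge: a cycle
        if (done.contains(proc)) return false;
        onPath.add(proc);
        for (String callee : calls.getOrDefault(proc, List.of())) {
            if (dfs(callee, calls, done, onPath)) return true;
        }
        onPath.remove(proc);
        done.add(proc);
        return false;
    }
}
```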
Both PostgreSQL and DB/2 LUW support are broken with this release, so I'm not flagging it as a default download. It's only for those curious about the work in progress, or who might need/want early access to the stored procedure specifications for the database in order to wire in other code.
Have fun with DB/2 LUW. It's a long, long install process. I'm not waiting for final verification of the theory that the deletes will eventually resolve themselves. I'm just going to assume that's going to be the case, and let it run.
Unfortunately for the 365 moniker, I missed a typo in a rule that caused last-minute problems on the way to a clean install of the DB/2 LUW 10.1 scripts.
It's been fixed.
The DB/2 LUW installation scripts for the reworks of the sp_next() procedures, the data modifier procedures sp_create(), sp_update(), sp_delete(), and sp_delete_by_index() have been brought up to fully functional implementations, ready for testing.
sp_lock() needs revisiting (it should require Update privileges on the object at hand), as do the various sp_read() implementations.
This is a work in progress snapshot. The overall code for DB/2 LUW is broken at this time while I continue forward with the migration and enhancement of functionality based on the PostgreSQL template.
Bring the new object structure over from CFDbTest and prepare to flesh it out with the attributes of the old model to create a new 2.0 hybrid structure instead of continuing with the 1.11 model definition. The idea is to start off with a 2.0 "bridge" release that parses 1.11 models for expansion by the 2.0 engine itself.
Then I can work on the 2.0 engine implementation and debugging, based on the 1.11 code. But the goal this time is to capture all of the enhancements as model specifications, which may require extending the 1.11 modelling attributes a little more.
Specifically, I need a way to wire the custom rules and bindings that are built into an engine customization in the model, while keeping the implementations as standalone Java objects that are not touched by the manufacturing process, but which are expected to live in the same source directory as the MssCF customizations.
Alternatively, I may define another layer of customization model specifically for defining the customized rules and bindings of the MssCF layers that the factory itself implements as rules designed specifically to extend the standard MssCF layer. That way the factory could automatically adapt itself to the namespace changes that happen when shifting through release version strings.
The security support and audit history logging for the sp_create(), sp_delete(), sp_update(), and sp_delete_by_index() implementations has been coded, but not checked for a clean compile yet. I want to get the sp_read() and sp_read_by_index() enhancements coded yet before I attempt an install to a database instance.
This is very much broken code, posted just as a work-in-progress snapshot.
Just in time for 8286, I was able to get a clean install of the tables, indexes, and relationships for DB/2 LUW 10.1 under Ubuntu 13.04.
Most of the stored procedures install successfully as well, because I haven't started messing with them yet. You wouldn't be able to perform a successful insertion with the current client-side JDBC code, though, because even though the arguments haven't changed for the stored procs yet, the underlying database structures now tend to include mandatory audit columns that don't have default values.
So this is more of an informational work-in-progress release. The code does *not* work for DB/2 LUW 10.1 yet. Nowhere near.
Both the MySQL and PostgreSQL code bases now tie in the audit columns from the tables to the in-memory object buffers.
Next up: I work on DB/2 LUW upgrades and refreshes to support the new features, additional stored procedures and functions, and so on until DB/2 LUW is in line with PostgreSQL and MySQL.
There were problems with the deployment of the PostgreSQL rules in the last packaging that were preventing the PostgreSQL layer support from producing the stored procedure definitions. I don't know how things like this happen; I certainly don't remember disabling them.
Ah well, it's fixed.
The MySQL 5.5 regression test suite for CFDbTest 2.0 under Ubuntu 13.04 now passes, with support for populating the recently added audit column attributes of a buffer from the database columns that the back end was already populating as of the last beta release.
In other words, you now have access to that information from the object implementation layers for MySQL.
The newly created SchemaHPKey, SchemaTableHPKey, and SchemaTableHBuff objects now clean compile. The refresh of the SchemaTablePKey and SchemaTableBuff objects to support comparisons and equality with the new objects also clean compiles.
The objects are now in place for implementing queries of object histories from the history tables in the JDBC layers. Once coded, their accessors will be brought forward to the standard interfaces.
Eventually I'll add some fancy interfaces like getComponentHistory(), which will get not only the history for the object itself, but its components and their sub-components all the way down to the lowest level of composed objects in the hierarchy spawned by this object.
That particular little function is likely to be a rather expensive operation, as it may probe the database for a rather large number of individual objects before merging those results into the larger result sets formed during the call invocation hierarchy/walk of the objects.
I haven't fully fleshed out the HPKey and HBuff objects yet, but this variant work-in-progress clean compiles for now, and begins to show the gist of what the objects will look like.
Note the implementations of toString(), which conveniently convert history objects to XML instance strings, regardless of the object involved. This will make it very easy to serialize a collection of history objects resolved under a given instance (by chaining the object containership hierarchy and creating a unified set of all the changes for all of the objects in the hierarchy.)
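A minimal sketch of that toString() idea, assuming a hypothetical history buffer class (the element and attribute names below are illustrative, not the actual CFDbTest objects):

```java
// Sketch of a history buffer whose toString() emits an XML instance
// string, so heterogeneous history objects can be serialized uniformly.
// Class and attribute names are hypothetical illustrations.
import java.util.UUID;

public class HistorySketch {
    static class SchemaHBuff {
        UUID auditSessionId;
        int revision;
        String name;

        SchemaHBuff(UUID auditSessionId, int revision, String name) {
            this.auditSessionId = auditSessionId;
            this.revision = revision;
            this.name = name;
        }

        // Converting each record to an XML instance string means a
        // collection of history objects can be serialized by simple
        // concatenation, regardless of the object type involved.
        @Override
        public String toString() {
            return "<SchemaH AuditSessionId=\"" + auditSessionId
                 + "\" Revision=\"" + revision
                 + "\" Name=\"" + name + "\" />";
        }
    }

    public static void main(String[] args) {
        UUID sess = UUID.fromString("00000000-0000-0000-0000-000000000001");
        SchemaHBuff h = new SchemaHBuff(sess, 2, "TestSchema");
        System.out.println(h);
    }
}
```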
Wire in new HPKey objects, switch focus to generic Java for now while I add these in. Also going to add in the auditing attributes for the base table buffers, in preparation for being able to populate that data from the database where specified.
Before I switch focus to DB/2 LUW, I want to get this done and in place for MySQL 5.5 and Postgres 9.2+.
The CFDbTest 2.0 implementation for MySQL 5.5 on Ubuntu 13.04 runs successfully save for a couple of exceptions where MySQL's date range limitations aren't broad enough to capture the full range of valid Java date-time values.
The security exercises have been added to the test suite for MySQL, and also pass evaluation.
Feel free to go crazy with MySQL 5.5 as well as PostgreSQL 9.x.
Most of the CFDbTest 2.0 regression test suite now executes successfully for MySQL. Once I've nailed down the remaining bugs (most notably an incorrectly formatted or parsed time string issue, likely in the audit code), and have the regression suite running as is, I'll add in the new tests that are executed by PostgreSQL to exercise the security layers.
I've already had to debug the security layers, but I want the explicit testing to be in the test suite for MySQL 5.5 under Ubuntu 13.04.
In all seriousness, most of the MySQL tests now run successfully, though update interfacing is still broken.
The MySQL 5.5 support is installing cleanly again. Time to remanufacture the JDBC layer, update the builds, and run some more tests.
First up is to verify that is_system_user(), is_cluster_user(), and is_tenant_user() operate cleanly and efficiently. They're the core of system security, so they have to work before anything else will.
The enhancements to the MySQL JDBC layer now clean compile.
I did a quick test run, but I'm seeing exceptions being thrown by the JDBC driver when no records are returned. This is a change of interface, and not something I've seen with any other database.
Hopefully it's a trivial matter of setting some pragma or option when connecting to the database to disable this behaviour.
The CLI now takes a third optional argument which specifies the single toolset layer to be manufactured. This will save a lot of time for development, as the java and mysql layers won't have to be manufactured in order to get java+mysql manufactured.
The complete set of MySQL stored procedures has been updated and refreshed to support auditing, automatic history logging, and to enforce security restrictions. The stored procedures install successfully for CFDbTest 2.0 under Ubuntu 13.04.
Next up: Refreshing the JDBC implementation to support the new audit and security arguments.
The sp_update_table() implementations have been enhanced and installed for CFDbTest 2.0. They now implement automatic setting of the audit columns and automatic insertion of history table entries after the update has been processed.
The MySQL enhancements for the delete and delete-by-index implementations have been refreshed with support for auditing, history logging, and cascading delete-by-index of owned sub-objects.
Added sqlstate error code usage with detailed exception messaging to sp_create_table() and sp_delete_table() implementations, where sqlstate codes are mapped as follows:
45001 - Permission denied, not identified as system user
45002 - Permission denied, not authorized for current cluster
45003 - Permission denied, not authorized for current tenant
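On the client side, those sqlstate values can be translated back into meaningful security messages. A hedged sketch of such a mapping (the exact message wording raised by the stored procedures may differ):

```java
// Map the custom MySQL sqlstate codes raised by the stored procedures
// to human-readable security messages. The wording mirrors the mapping
// table above; the helper itself is an illustrative sketch.
public class SqlStateMapper {
    public static String describe(String sqlState) {
        switch (sqlState) {
            case "45001": return "Permission denied, not identified as system user";
            case "45002": return "Permission denied, not authorized for current cluster";
            case "45003": return "Permission denied, not authorized for current tenant";
            default:      return "Unrecognized sqlstate " + sqlState;
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("45002"));
    }
}
```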
The specifications for sp_create_table() have been updated to throw the appropriate exception sqlstate values, and now use the messaging exception syntax provided by MySQL to provide relevant security feedback.
The specification of sp_create_table() has been enhanced with the audit arguments, support for audit columns in the tables, and support for history logging of instance creations.
CFDbTest 2.0 installs cleanly with the enhanced stored procedure code.
The MySQL 5.5 database creation scripts now install cleanly for CFDbTest under Ubuntu 13.04.
The audit columns, database bootstrapping, and security functions have all been defined.
The history tables have been created and are ready to be populated by the stored procedure enhancements to be migrated from PostgreSQL, including the addition of the security arguments to all of the stored procedure definitions.
But for now, it's been a long slog to bring things back into a functional line along with migrating some of the PostgreSQL history and audit features to MySQL. I look forward to taking a bit of a break now.
Do not enable history and auditing if you expect the current implementations of the MySQL stored procedures to work; that includes the tables flagged as having audit attributes and history data in the SME templates.
Enable the indexing of the history data for PostgreSQL as well. I didn't realize I had that code still disabled for edit only. It should be good to go. I'll test it some other time.
The MySQL tables now get instantiated properly. I'm doing final integration test against MySQL 5.5 under Ubuntu 13.04 at this point in time. It's almost ready to package for a release.
Implement the wiring of the new security scripts sp_bootstrap(), sp_is_system_user(), sp_is_cluster_user(), and sp_is_tenant_user().
Implement the JDBC bindings for all of the extended read and delete functions which are now implemented in the MySQL stored procedure layer.
All of the stored procedures to date for all of the databases have been modified to support read and delete by optional index as applicable. The JDBC has only been updated to support these changes for PostgreSQL so far, though.
The work focus will now shift to MySQL to bring it up to par with the PostgreSQL implementation.
The read and delete by index stored procedures have been enhanced to support optional index columns, and the client-side JDBC code now uses stored procedures for those functions all the time. The only time client-side dynamic SQL is still used is when inserting or updating BLOB tables.
It's a good thing that I took a break for the past few weeks to think about things instead of continuing to code like a fiend. I've realized that I can get away from the dynamic client-side code by using a weird form of SQL to handle nullable columns in the queries:
( ( ( argVariable IS NULL ) AND ( alias.column IS NULL ) )
OR ( ( argVariable IS NOT NULL ) AND ( alias.column = argVariable ) ) )
Apparently this syntax won't perform very well with MySQL, but I don't really care. It's up to the database vendors to provide proper statement optimizers, and this syntax means I can prune out the dynamically created SQL from the client side for reads and deletes.
Equally important, it means cascading deletes will no longer be restricted to cascades that join on non-nullable columns, giving the whole system a greater degree of flexibility and improving performance of deletes by nullable keys.
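The null-tolerant predicate above can be generated mechanically when a query is being formatted. A sketch of a helper that emits the pattern for one optional column (the argument, alias, and column names are illustrative):

```java
// Build the null-tolerant equality predicate for a single optional
// index column, matching the SQL pattern quoted above. The names used
// in main() are illustrative, not actual CFDbTest identifiers.
public class NullSafePredicate {
    public static String forColumn(String arg, String alias, String column) {
        return "( ( ( " + arg + " IS NULL ) AND ( "
             + alias + "." + column + " IS NULL ) )"
             + " OR ( ( " + arg + " IS NOT NULL ) AND ( "
             + alias + "." + column + " = " + arg + " ) ) )";
    }

    public static void main(String[] args) {
        System.out.println(forColumn("argTenantId", "t", "TenantId"));
    }
}
```

Because the predicate handles both the null and non-null cases, one static statement covers every combination of optional index arguments, which is what lets the dynamic client-side SQL be pruned away.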
The PostgreSQL feature set is now complete.
The last step was to pass the ClusterId and TenantId into all deletes, because with cascading operations there is no way to predict which is going to be required by a subroutine delete. (It would be perfectly legal to have Tenant data that can do a cascading delete of Cluster data, for example. Weird, but legal.)
The code has been tested with CFDbTest.
Implemented the client-side filtration of the queries in similar fashion to what was done for the PostgreSQL stored procedures. This implies that delete-by-optional-index also enforces the filtration; only the stored procedure version of deletes still needs to be updated to protect data you don't have access to.
Updated the sp_lock routines and corresponding client-side code to restrict their operation. You can still perform an sp_lock DOS when dealing with class hierarchy objects, but you cannot use sp_lock to read data you don't have permission to access; you can only be a pain in the butt with intentionally or accidentally badly written code that tries to lock data you shouldn't be accessing. You can't perform a DOS against data that is stored in non-hierarchical tables.
The entire set of MSS Code Factory jars have been rebuilt with the latest jars from Ubuntu 13.04.
The Apache Commons Codec jar has been refreshed.
I spent Saturday upgrading my system to Ubuntu 13.04, seeing as I no longer need to maintain 12.04 LTS compatibility to work with a company I used to do work with from time to time. Some time this week I'll update all the builds to use the latest .jar files for Apache Xerces, Log4J, and Commons Codec, and update the packaging as well to use those releases. (Coded 2013.08.09)
After that, I need to finish implementing the client-side filtration of the queries in similar fashion to what was done for the PostgreSQL stored procedures. (Coded 2013.08.16)
I've decided how to handle filtration of the deletes as well -- I'll be passing the ClusterId and TenantId into all deletes, because with cascading operations there is no way to predict which is going to be required by a subroutine delete. (It would be perfectly legal to have Tenant data that can do a cascading delete of Cluster data, for example. Weird, but legal.)
Seeing as I have that much effort to go to, I'll update the sp_lock routines and corresponding client-side code to restrict their operation as well -- just in case someone tries to perform a DOS by obtaining a non-damaging lock on another Tenant's data for the sake of being miserable. In practice, this is extremely unlikely as I expect in most cases people will be using the manufactured code in a web server environment, not for client-server applications. But I could be wrong, and even though you would need access to the APIs to perform such a DOS, I may as well try to mitigate the risks and have done with it. (Coded 2013.08.16)
Once that is done, all the pieces will be in place except for identification/login support, and I've decided against implementing that in the manufactured code and leaving it up to users of the code to implement their own identification services based on whatever tools they use for that in their environment. I had to implement the authorization restrictions in the database in order to be able to tie them in to the queries efficiently, or I would have made some assumptions about sites using LDAP with Kerberos. However, there are no APIs that I'm aware of for using such information from within stored procedures, so I opted for a simpler (and more familiar) approach as currently coded.
So once I'm done with the tasks I've noted today, I think you'll be able to consider the PostgreSQL implementation feature complete and ready for use. There may be bug updates after that, but unless people report bugs to me, I don't expect to do any more testing than what I've already done with DbTest 2.0 until after I've finished bringing the other database integrations up to the same feature set level as Postgres.
The performance tuning isn't really performance tuning in the end, as this version takes about 3 minutes longer to run than the old code did. However, it fixes a number of issues with the way the code worked (there were unknown bugs discovered during testing), and it's more flexible than interim versions of the code were (no more restrictions on duplicate names between cartridges/toolsets), so this is the version that will be retained.
It uses TreeMaps throughout instead of HashMaps. I must admit I'm disappointed that HashMaps suffer more from the performance penalty of calculating hashCode() values than they gain from "fast" queries.
This version runs the "work" job for CFDbTest 2.0 in 1h17m vs. 1h14m for the old code before I started all the work.
The pure TreeMap version of the code ran the fastest, so that's what I'll keep.
CFCore now uses a combination of HashMap and TreeMap, with TreeMap for string-keyed objects and HashMap for numerically keyed objects. This should provide the best of both worlds for performance.
This version took 1h18m30s to manufacture the code, so I'll revert the code back to the all-TreeMap version.
The class hierarchy map is now stored in a HashMap, as is the collection keyed by the ToolsetPKey. Those two objects use numeric key values, so hash maps should be quite efficient for them.
The rule names are keyed by TreeMap, because that way Java can short-circuit the string compares instead of having to go to the expense of a full hashCode() calculation.
Due to the still relatively large size of the hash maps in conjunction with the expense of calculating a Java hashCode() for a String, this build tries replacing the HashMap instances with TreeMaps instead. There is no other difference between 7739 and 7742.
Using a TreeMap, it takes 1h17m exactly to manufacture the "work" version of CFDbTest 2.0. That's nearly 4 minutes faster than the HashMap version of the code.
It turns out that Java has a pretty expensive way of calculating hash codes for strings. I suspect that when push comes to shove, a TreeMap should perform better because it can short-circuit the comparisons. I'll know in an hour or two.
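The trade-off can be illustrated with a trivial lookup. Timings vary by JVM, so this sketch only shows the interchangeable usage, with the performance reasoning in the comments:

```java
// Contrast the two map choices for string-keyed rule lookups.
// The key string is an illustrative rule name, not an actual one.
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapChoice {
    public static void main(String[] args) {
        // HashMap must compute a full hashCode() over every character
        // of the key string before it can even pick a bucket.
        Map<String, Integer> byHash = new HashMap<>();

        // TreeMap compares keys with compareTo(), which bails out at
        // the first differing character -- cheap when long keys differ
        // early, as rule names scoped by toolset tend to do.
        Map<String, Integer> byTree = new TreeMap<>();

        String key = "java+mysql:expandTableInterface";
        byHash.put(key, 1);
        byTree.put(key, 1);
        System.out.println(byHash.get(key) + " " + byTree.get(key));
    }
}
```

Both maps expose the same interface, which is what made it cheap to benchmark one against the other and keep the faster variant.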
Time to let the tool run a performance test to see if reducing the hash map size paid off.
As it turns out, reducing the size of the hash maps made no statistically significant difference in runtime performance compared to the big hash maps (1h20m55s).
I was short-circuiting the search before I should have, so for tools that didn't inherit other toolsets, the probe of "any" rules wasn't happening.
I forgot to probe the "any" rules during the expansion search.
The rule hash maps are performing quite poorly due to their large size. I've attempted to reduce the size of the maps by eliminating the inheritance of rules and settling for the alternative approach of scanning the inherited toolsets using the same style of iteration over the toolsets that was used in the older code.
Several bugs and oversights with both the rule base and CFCore have been corrected with this release. The core java code and the PostgreSQL JDBC as well as the PostgreSQL database creation scripts all run correctly now, and the JDBC changes have been propagated to the other database layers although they have not been explicitly tested yet (that will happen overnight, and I'll update the status accordingly after it's completed the run.)
The timing tests have been completed. Where the old version of the code took 1h14m to manufacture CFDbTest 2.0's "work" project, the new version took 1h20m. However, the newer version corrected several bugs and errors in the implementation, so it's not entirely a fair comparison. Regardless, the new code stays.
I do have an idea for further performance tuning that will shrink the size of the HashMaps substantially by avoiding the direct inheritance of rule sets. It would seem that the large hash maps must be creating numerous linked lists due to hash collisions, resulting in the poor performance. Shrinking the hash maps should address that problem.
I'll be working on that now before I proceed with the overnight run.
If the same name is found in both the inherited and inherited first toolset collections, and the name is also present in the current toolset, then an infinite loop resulted along with corruption of the inherited first probe chain.
This has been corrected by explicitly checking to make sure that the tail rule is from the current toolset, and has not already had the inherited first chain attached while attaching the inherited chains.
The core of the engine has been altered somewhat because there was a problem with the population of the "All" queries.
The engine has also been integrated with MSS Code Factory CFCore 1.11.7707, implementing new code that should improve the performance of the code manufacturing process substantially.
Along the way I found a bug in the DefClass hierarchy for the UuidGenDef -- it was mistakenly derived from an Int16.
I also had to be more specific about the scoping of id generator rules in the rule sets, otherwise there was no predictability as to whether rules scoped by a table or a schema were going to fire. That really should not have been the case as far as I'm concerned, but it was a minor problem and easily fixed so I'll just chalk it up to gremlins. The changes have only been tested for PostgreSQL, though they were made to the other database's JDBC code as well.
I did a performance run to see how much of a difference the new engine code made. The old version of code took 1h14m (74 minutes) to manufacture the CFDbTest 2.0 "work" project on my Core i7 laptop.
Do not download -- I found more bugs in the rules.
The new code for pre-building the probe lists has been compiled, tested, and debugged.
When inheriting rules other than the any layer rules, I forgot to ensure that the rules of the pruned name are to be probed. This mistake has been corrected. However, be aware that if there is a conflict between the rules specified by that pruned rule cartridge and the names of the remaining rules from the "normally" inherited name, the "normally" inherited name rules will *not* be inherited as one would expect. This isn't a problem in reality, but it's something to be aware of.
Ready to rock and roll.
I've had some ideas churning around in my head for changes to the CFCore LGPL library that forms the heart of MSS Code Factory.
The core currently does an iterative probe of the rules that have been loaded by the engine, which is rather inefficient. What I'm thinking to do is after the rules have all been loaded, I'll build a custom data structure that "layers" the rule base.
First it will create a set of duplicate entries sorted by rule name for each of the ToolSets that have been loaded.
Next it will build an array of the rules for each name in the ToolSet, and sort those based on the defined object inheritance for the elements of the model to be analyzed. The ScopeDef will be considered first, then the GenDef. This will result in little lists of rules that are pre-sorted in the same order that the probes are done by the current code. These arrays won't be kept -- instead the rules will be linked by a ProbeChain reference, such that the narrowest possible interpretations of a group of named rules appears nearer the head of the chain than wider interpretations.
Finally, each ToolSet layer of rules will be stitched together. For a given ToolSet, the tail of its defined ProbeChain rules will be linked to the broader scoped ToolSet's entries that are "inherited" by the ToolSet (e.g. "any" rules are inherited by "java" rules, which are inherited by "java+pgsql" rules.) Once the inheritance links are established, any rule names from the inherited ToolSet will be brought forward to the inheriting ToolSet, so that the inheriting ToolSet has name entries for all the names that can possibly be expanded by it.
Then I'll modify the GenContext implementation so that it just probes the ToolSet name entries for an expansion name, and does a linked-list walk searching for the widest entry whose ScopeDef and GenDef derive from the current item being expanded.
The net effect of this change of approach is to pre-compile the rule base so that minimal runtime effort is spent on trying to find an expansion to process.
I'm also thinking of precompiling the GEL expansion of the rules as I load them, rather than doing compile-on-demand as I do now. That would ensure that any compile errors are caught and reported near the file/line of the defining RuleSet so it can be reported in a more useful fashion than it is now. I might even modify the rule definitions themselves to capture the File/Line information so that when an expansion fails, I can report on where the expansion was defined, which might make it easier to fix certain classes of typos and syntax errors.
Note that I couldn't *quite* go so far as to pre-link the ProbeChain lists to the rule expansions themselves, because inheriting rule sets can override the definitions of the inherited rule set, which means they have to search different sets of rules than the inherited rule set does before performing an evaluation.
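The design above can be pictured as a minimal sketch, with all names hypothetical: rules sharing an expansion name are linked narrowest-first, and expansion walks the chain until a rule's scope applies to the item being expanded (real scope matching would test DefClass derivation; plain equality stands in for it here):

```java
// Simplified sketch of the ProbeChain idea described above.
public class ProbeChainSketch {
    // One named rule in a toolset layer; 'next' points at the next
    // wider interpretation of the same expansion name.
    static class Rule {
        final String toolset;
        final String scope; // stand-in for the rule's ScopeDef
        Rule next;

        Rule(String toolset, String scope) {
            this.toolset = toolset;
            this.scope = scope;
        }
    }

    // Walk from the narrowest entry, returning the first rule whose
    // scope matches the item currently being expanded. The real engine
    // would check ScopeDef/GenDef derivation, not string equality.
    static Rule probe(Rule head, String itemScope) {
        for (Rule r = head; r != null; r = r.next) {
            if (r.scope.equals(itemScope)) {
                return r;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // java+pgsql rules are probed before the inherited java rules,
        // which are probed before the "any" rules.
        Rule any = new Rule("any", "SchemaDef");
        Rule java = new Rule("java", "Table");
        Rule pgsql = new Rule("java+pgsql", "Table");
        pgsql.next = java;
        java.next = any;

        System.out.println(probe(pgsql, "Table").toolset);
        System.out.println(probe(pgsql, "SchemaDef").toolset);
    }
}
```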
I'm not sure when I'm going to start coding this. I'm still working out the kinks in my head, and in the meantime I've been working away on actual rules for cleaning up some issues with the PostgreSQL prototype code.
The PostgreSQL stored procedures sp_read_dbtablename* have been updated to restrict the result sets by the security cluster or tenant ids as required. This prevents users of the APIs from "snooping" on data they don't actually have permission from the tenants and clusters to read.
A similar change still needs to be made to the sp_delete_tablename_by_suffix and sp_lock_dbtablename stored procs, but that will require client-side changes as well.
The client-side code for reads still needs to be updated to implement the same change as in the stored procs for read-by-index over nullable index columns.
As CFCore and CFLib have been rebuilt with JDK 7, the main project build has also been updated and repacked to use the rebuilt jars.
CFCore has been recompiled and repackaged using JDK 7.
CFLib has been recompiled and repackaged using JDK 7.
The PostgreSQL security enforcement has been implemented and tested using MSS Code Factory CFDbTest 2.0.7683.
During the debugging of this release, a serious bug was encountered and corrected. UUID values were not being properly considered by the equals, hashcode, and comparator logic. If your application uses UUIDs, you should upgrade to this release immediately.
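The class of bug was that a UUID attribute had been omitted from the equality, hashing, and ordering logic. A hedged sketch of the corrected pattern, using a hypothetical key class rather than the actual manufactured one:

```java
// Sketch of a UUID-keyed primary key object where the UUID correctly
// participates in equals(), hashCode(), and compareTo(). The class
// name is illustrative, not the actual manufactured key.
import java.util.UUID;

public class UuidKeySketch {
    static class SecUserPKey implements Comparable<SecUserPKey> {
        final UUID secUserId;

        SecUserPKey(UUID secUserId) {
            this.secUserId = secUserId;
        }

        // If the UUID is left out of any of these three methods, keys
        // collide or mis-sort silently in maps and sets.
        @Override
        public boolean equals(Object obj) {
            return (obj instanceof SecUserPKey)
                && secUserId.equals(((SecUserPKey) obj).secUserId);
        }

        @Override
        public int hashCode() {
            return secUserId.hashCode();
        }

        @Override
        public int compareTo(SecUserPKey other) {
            return secUserId.compareTo(other.secUserId);
        }
    }

    public static void main(String[] args) {
        UUID id = UUID.fromString("11111111-2222-3333-4444-555555555555");
        SecUserPKey a = new SecUserPKey(id);
        SecUserPKey b = new SecUserPKey(id);
        System.out.println(a.equals(b)
            && a.hashCode() == b.hashCode()
            && a.compareTo(b) == 0);
    }
}
```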
Note that I've upgraded to the "Kepler" release of Oracle OEPE for doing builds, and now use Java JDK 7 for those builds. Previously I'd been using an Eclipse release that was built for JDK 6.
I've exercised the security code with the latest version of CFDbTest 2.0, but it highlighted a problem with the SAX XML parsers that I'll need to fix before I can do the next beta release.
Specifically, when the group members are being initialized by loading the information as the system user, only the first member element is being properly loaded and the others are being ignored. The odds are this bug is generic and has been affecting my tests for a long time unbeknownst to me.
There was a potential SQL injection attack vector in the PostgreSQL stored procedures for delete-by-index over indexes with optional columns, because it would have been possible for the 8000 character SQL statement formatting buffer to overflow. More likely the overflows would have simply resulted in the failure of the stored procedure to execute successfully than to allow actual damage to the database, but I opted to close the door to such attacks entirely by moving the dynamic SQL variants on the code to the client side implementation instead of the stored procedures.
The CFDbTest 2.0 regression test suite has been exercised and passed for this update.
The PostgreSQL security implementation has passed the regression tests via CFDbTest 2.0.
Next I need to enhance the SAX/XML loader mainlines to accept the specification of a user name, and create tests to validate that permissions are properly enforced.
I also need to remember to rework the delete-by-index code for PostgreSQL so that the stored procedures are only called if all the parameters are not null, remove the dynamic SQL from the stored procedures themselves, and use client-side code to handle deletions where null parameters are provided. Otherwise there is a risk of SQL injection attacks by passing long string values to the dynamic SQL in the stored procedures, because the statement buffers are limited to 8000 characters.
The compile errors from 1.11.7599 have been fixed.
The code for the sp_read_*_cc_* stored procs has been reverted to not requiring an audit user id, seeing as there is no way for those routines to resolve the potentially conflicting permissions of the different subclass tables directly. Instead, errors may be thrown later while reading the buffers for the subclasses.
The code clean compiles and fails when trying to create the SecSess instances, reporting that only the System user can create SecSess records. I'm going to have to implement some special case checks for the stored procedures for SecSess.
Although the stored procedures have been successfully created with this release, they have not been tested at all. The JDBC code probably doesn't even clean compile. Download at your own risk.
The reference bindings Table.ClusterIdColumn and Table.TenantIdColumn search the inherited Owner and Container relationships of a table for references to the Cluster and Tenant tables respectively. When such a reference is found, the FromColumn of the relationship is extracted and returned.
This is used to implement the table security.
The read functions needed to have the SecClusterId and SecTenantId values passed as parameters so the implementation can verify that you have permission to read the requested table. Note that the implementation does not filter nor restrict the result data set to values that are owned by the corresponding Cluster or Tenant; it's only used to determine if you have read permission on the table.
The deletes rely on the fact that the per-record delete implementation reads and locks the values of the table explicitly, so it can actually enforce the restriction that you can only modify data within clusters and tenants that you have access to. The same goes for the create and update stored procedures.
The cursors should work again, but they don't enforce read checks because there is no guarantee whatsoever that a cursor's arguments include the ClusterId or TenantId, and for System secured tables they're always readable. While this is a "hole" in the implementation, it really is restricted to abuse by programmers writing batch jobs, which aren't normally run under any permissions other than the system user in the first place. Part of the reason for this is that the cursors don't know the names of the tables when they're re-opened, so I could only implement the check during the initial read if I could implement it at all. C'est la vie.
The group security stored procedures sp_is_system_user(), sp_is_cluster_user(), and sp_is_tenant_user() have been coded to support group include nesting up to 8 levels deep (the target group is level 1.) The PostgreSQL schema object has been updated with code to invoke those routines on behalf of the client-side code.
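The nesting limit amounts to a bounded recursive membership probe. A sketch of the idea in Java rather than SQL, with all table and group names hypothetical:

```java
// Bounded-depth group membership probe, mirroring the 8-level group
// include nesting described above. Data layout and names are
// illustrative, not the actual Sec*/TSec* schema.
import java.util.*;

public class GroupNesting {
    // groupId -> ids of groups it includes
    static Map<String, List<String>> includes = new HashMap<>();
    // groupId -> direct member user ids
    static Map<String, Set<String>> members = new HashMap<>();

    // Is the user a member of the group, following group includes up
    // to maxDepth levels? The target group itself is level 1.
    static boolean isMember(String group, String user,
                            int depth, int maxDepth) {
        if (depth > maxDepth) {
            return false; // bail out past the nesting limit
        }
        if (members.getOrDefault(group, Collections.emptySet())
                   .contains(user)) {
            return true;
        }
        for (String inc : includes.getOrDefault(group,
                                                Collections.emptyList())) {
            if (isMember(inc, user, depth + 1, maxDepth)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        members.put("Admins", new HashSet<>(Arrays.asList("mark")));
        includes.put("ClusterUsers", Arrays.asList("Admins"));
        System.out.println(isMember("ClusterUsers", "mark", 1, 8));
        System.out.println(isMember("Admins", "nobody", 1, 8));
    }
}
```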
The code clean compiles at this time. There is no new functionality hooked in at this point in time, so there is no need to download this update if you've downloaded MSS Code Factory 1.11.7525.
Adding the foreign keys for the audit columns to reference SecUser exposed an error in the CFDbTest 2.0 execution. I was passing the wrong UUID as the user id for client-side inserts and updates for tables containing BLOBs.
CFDbTest 2.0 has been regression-run for PostgreSQL and Ram storage.
I am now ready to embark on the actual implementation of the security functions and to wire them to the stored procs and client side code. I envision 3 procedures: sp_issystemuser(), sp_isauthorizedbycluster(), and sp_isauthorizedbytenant(). The latter two will accept the SecUser id and a group name to be probed for in the relevant tables (Sec* and TSec*.)
In order to enforce foreign keys on the audit columns, I'd have to work some magic on the order in which tables are created by the PostgreSQL database scripts. Rather than do that, I've just abandoned the idea.
PostgreSQL now has a stored procedure, sp_bootstrap, which is used to bypass the table accessor procedures in order to populate the initial system data. This allows the system to enable audit columns and history tables for everything save the AuditAction and SecSession tables. The only downside to this approach is that the UUIDs for the initial system SecUser and SecSession are constant across installations, making them *very* easy to guess.
The Table.SecurityScope attribute has one of four values. If none is specified, "System" is used.
Reworked the security attributes and initialized them using the new syntax. You now need to set the security cluster explicitly, because it is no longer part of the SecUser key.
A user is keyed by a UUID, so it does not form a concatenated key with the cluster id that defines it. UUIDs are implicitly unique within the cluster because only servers in the cluster are updating the database and they're all presumed to be running sane UUID implementations. (Yes, there is a vanishingly small chance of collisions. So the system may burp from time to time while creating users.) Note that you also need to ensure that there are no duplicate MACs on the cluster network with some implementations of UUIDs.
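The "vanishingly small" collision chance is what makes a standalone UUID key workable without qualifying it by cluster id. A quick sketch using `java.util.UUID` (random, version-4 UUIDs carry 122 random bits, so uniqueness is probabilistic rather than absolute, which is exactly the trade-off described above):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

// Sketch of why a SecUser can be keyed by a UUID alone: random UUIDs
// collide so rarely that de-duplication across a cluster is effectively
// free.  This is a probabilistic guarantee, not an absolute one -- hence
// the occasional "burp" the notes above allow for.
public class UuidKeys {
    public static int distinctCount(int n) {
        Set<UUID> seen = new HashSet<>();
        for (int i = 0; i < n; i++) {
            seen.add(UUID.randomUUID());
        }
        return seen.size();
    }
}
```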
Added the TSecGroup, TSecGroupMember, and TSecGroupInclude objects to the tenant.
I think I've got everything in place to proceed with the group-based security implementation, except I may have to add an engine/BAM attribute and verbs enum like:
You'll notice that getSystemSession() has a "flaw" in that it will reuse a system session if it's started in the same second as another system session. I don't consider this to be a big issue, because normally one is running system utilities by hand.
When someone logs in through the eventual user interface, a login function will check their password and will always create a new session once authorized to do so. Thus you could have more than one session starting in the same second when it's being caused by user interface behaviour.
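The difference between the two paths can be sketched in a few lines of Java. This is an illustrative model, not the actual CFSecurity code: the system-session lookup key only has one-second resolution, so two calls inside the same second collide and reuse the same session, whereas a login always mints a fresh session.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative sketch of the getSystemSession() "flaw" described above:
// the session lookup is keyed by a second-granularity timestamp, so two
// requests in the same second get the same session back.  An interactive
// login skips the lookup and always creates a new session.
public class SessionSketch {
    public static Map<Long, UUID> systemSessions = new HashMap<>();

    // Reuses any system session already started in this second.
    public static UUID getSystemSession(long millis) {
        long second = millis / 1000L; // truncation loses sub-second detail
        return systemSessions.computeIfAbsent(second, s -> UUID.randomUUID());
    }

    // A user login always creates a new session once authorized.
    public static UUID loginSession() {
        return UUID.randomUUID();
    }
}
```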
Beta 11 implements audit columns for PostgreSQL and also corrects a defect in the production of SAX XML parsers by the tool.
All that remains for core features to be implemented in PostgreSQL is the security model. Once security is implemented and enforced, I'll resume propagating the functionality from PostgreSQL to the other databases. Once they're all fully functional, MSS Code Factory 1.11 will go production.
Note that CFBam won't manufacture right now because it has inherited the SME objects for Domains, Projects, etc. which conflict with the older BAM model. I'll need to finish the remapping/migration in CFDbTest and propagate it to CFBam before that project will be workable again. All in due time.
It's also worth noting this is a 420 release, brought to you via copious quantities of medical cannabis to suppress the migraines and allow me to work.
I've corrected a subtle bug in the way the engine was determining whether a component could be contained by a container, which has far reaching effects on the manufactured SAX parser. I've also added the inherited sub-objects to the SAX parser code; previously only directly specified sub-objects were parsed.
Hopefully this one will pass testing so I can issue a beta 11 later today.
I'm still working on the complex objects. The builds from this release might even work, but I want to get the mostly-working code out there in case anyone is waiting.
The last of the pieces of code to support the audit columns has been coded and clean-compiled. The code is ready to begin testing.
The core objects and PostgreSQL JDBC integration code clean compile. Once I've finished the client side code for populating the audit columns on updates, I'll be ready to begin testing.
I spent the day working on formatting niceties and getting the code to come closer to a clean compile. At least the PostgreSQL code clean installs.
Refactor SecUser as a subclass of Tenant. Thus a user always has their own data space by definition, including copies of all the objects defined for any other tenant of the cluster. So while you may be interested in your company's tenant data and would log in there, you always have your own private data as well.
Added the Tld, Domain, TopDomain, DomainBase, Project, Revision, MajorRevision, and MinorRevision objects to the SME template.
Made Project the container for SchemaDef instead of Tenant.
The goal is to leverage the XML parser capabilities of the tool as much as possible because writing XML parsers by hand as I've done in the past is a very labour-intensive process that I'm hoping to get away from for 2.0.
There is still some work to be done, in particular:
But I've added the following in to this work-in-progress update. Note that this build does not produce working PostgreSQL code yet. It probably doesn't even clean compile.
The following have been coded so far, but not test compiled:
That's about 35,000 lines of new CFDbTest 2.0 code today. That's good enough for tonight.
I finally got PostgreSQL 9.2.4 to install on my Windows 7 box.
It turned out McAfee AntiVirus had littered the registry with configuration entries it didn't undo during its normal uninstall process. You have to run a little cleanup program they provide for download in order to get your system back to normal after uninstalling McAfee.
You can download McAfee's removal tool here: http://service.mcafee.com/FAQDocument.aspx?id=TS101331
In order to bring the Java and JDBC code back into sync with the PostgreSQL database creation scripts, I need to:
I'm not going to make this update the default download because it produces unintegrated, broken code. It will not run for PostgreSQL executables.
I've changed my mind about transaction auditing. It's far too expensive in practice, and only required for a very small subset of environments. Therefore I will only support session auditing. I'm remanufacturing all the projects 2013.06.22 to remove the SecTxn objects from the system implementations.
Now I'm working on the audit column support for PostgreSQL. To complete this task, I need to:
Beta 10 adds audit history support for PostgreSQL. By simply specifying HasHistory="true" for a base table in your model, complete audit trailing is implemented for that table and all subclass tables deriving from the base.
With the newer version of MSS Code Factory 1.11, the JavaXxx tags are compiled as GEL expansions, which means the GEL buffer has to be as large as those text attributes are. Formerly they were limited to 2000 characters in compliance with the expansion rule bodies.
CFCore had to be updated as well to implement the expanded GEL buffers.
Most of the CFDbTest 2.0 validation suite runs for PostgreSQL with the auditing support. However, there are still a few tests failing so this isn't quite beta-ready code. It's close though. At least it clean compiles and mostly works.
CFDbTest 2.0 clean compiles for all the databases again, so I can test the auditing for PostgreSQL now.
I've changed my mind about implementing transaction auditing at this time. I'm going to stick with session auditing by default, and make transaction auditing a sometime-in-the-future optional feature due to its performance impact.
The attribute Table.HasAuditColumns has been added and the appropriate verbs wired to the engine for use via GEL.
The new Java tags for TableObj customization have been added and are in use.
The PostgreSQL code should clean compile with this latest round of changes.
Last time I put in the new code, but forgot to remove the erroneous exception throw so it was still failing.
It should be fixed now.
Note that the failure only occurs for the BLOB expansions, so there would seem to be something weird or special about that code. If this bug fix doesn't resolve the problem I'll have to resort to my old nemesis: the debugger.
There has been a boundary case bug fix implemented in the engine itself.
This build may well finally produce the CFDbTest 2.0 with the new activated Java expansions successfully. I should be close to the next beta with this release.
The Java tags associated with tables have been made "active".
That is, their contents are expanded like rules so that you can do things like specify $SchemaName$ to edit the resulting code to use the type-names of the schema being manufactured.
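A minimal sketch of that kind of expansion, in plain Java (the class name and the single-substitution behaviour are illustrative only; the real GEL interpreter supports far more than simple variable replacement):

```java
import java.util.Map;

// Minimal sketch of what an "active" tag does: occurrences of $Name$ in
// the embedded custom code are replaced by values drawn from the model
// being manufactured, the same way rule bodies are expanded.
public class TagExpander {
    public static String expand(String body, Map<String, String> model) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < body.length()) {
            char ch = body.charAt(i);
            if (ch == '$') {
                int end = body.indexOf('$', i + 1);
                if (end > i) {
                    String name = body.substring(i + 1, end);
                    String value = model.get(name);
                    if (value != null) {
                        out.append(value);   // substitute the model value
                        i = end + 1;
                        continue;
                    }
                }
            }
            out.append(ch);                   // pass everything else through
            i++;
        }
        return out.toString();
    }
}
```

For example, with `SchemaName` bound to `CFDbTest`, the fragment `I$SchemaName$Obj` expands to `ICFDbTestObj`.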
This is meta programming at its finest. :)
Switched CFDbTest over to UUID generators for the SecSessionId and SecUserId. I'm also debating adding and implementing another UUID generator object: the SecTxn transaction identification and tracking object. May as well go whole-hog and identify individual transactions in the system via the audits. This is, after all, intended for corporate system requirements for security and auditing, not mom and pop shops.
Add the SecTxn object as a component of a SecSession.
In the near future, the transaction will be auto-logged and initialized by special case code in the beginTransaction() code for the SchemaPg8Schema implementation. Once that is accomplished and transactions are being logged, then I can modify the PostgreSQL stored procedures and tables (again) so that individual audit log entries specify their enclosing transaction instead of their enclosing session.
This untested code does at least clean compile with support for the extra argAuditClusterId and argAuditSessionId parameters that get passed to the sp_[create/update/delete] instance procs. It should work, but who knows? Shot in the dark for now, because I had to do Something for the Super Secret Spy release 007...
The updated and refreshed PostgreSQL stored procedures are ready for testing. However, the JDBC layer has to be tackled before I have code to test the new stored procedure implementations with.
The PostgreSQL history tables get created cleanly for CFDbTest 2.0 now.
This should be a good bundling for testing PostgreSQL with.
There are no longer indexes over the history tables, so they can be lean and mean, indexed only by the primary key of the history table (PKey + revision).
The CFDbTest 2.0 history tables for PostgreSQL are now properly created.
This is the first complete cut of the PostgreSQL history tables.
The HasHistory verb is now working and properly registered in the GEL interpreter.
If the columns are not nullable, then inline SQL can be used for DB/2 LUW. Otherwise, if the columns are nullable but all values are specified, then the same inline SQL can be used. Finally, if dynamic SQL is needed to express the index column set, then an exception is thrown.
Rework MySQL 5.5 delete-by-index code so that at least deletes over optional columns can be run in the RDBMS when all the index columns are specified. This will allow the implementation of reasonably efficient deletion of named hierarchies at some point in the future.
The MySQL code has been regression tested using CFDbTest 2.0.
Rework the PostgreSQL delete-by-index stored procedures to use precompiled SQL whenever possible, and only resort to dynamic SQL if any of the argument columns is null. This should improve performance a bit, and more importantly, it's a better template for dialects that don't support dynamic SQL because instead of evaluating the dynamic SQL limb, you throw an error for those dialects and implement a slow workaround at the JDBC layer.
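The dispatch described above can be sketched as a three-way decision. This Java sketch is illustrative only (the class, enum, and method names are hypothetical, not the manufactured code): use the precompiled statement when every index argument is present, fall back to dynamic SQL only when an optional column is null, and for dialects without dynamic SQL, signal an error so the JDBC layer can run a slower client-side workaround instead.

```java
// Sketch of the delete-by-index dispatch: precompiled SQL whenever
// possible, dynamic SQL only when an argument column is null, and a
// client-side workaround for dialects that can't do dynamic SQL.
public class DeleteDispatch {
    public enum Plan { PRECOMPILED, DYNAMIC, CLIENT_SIDE }

    public static Plan choose(Object[] indexArgs, boolean dialectHasDynamicSql) {
        boolean anyNull = false;
        for (Object arg : indexArgs) {
            if (arg == null) {
                anyNull = true;
                break;
            }
        }
        if (!anyNull) {
            return Plan.PRECOMPILED;  // fast path: fully specified index
        }
        if (dialectHasDynamicSql) {
            return Plan.DYNAMIC;      // e.g. PostgreSQL EXECUTE
        }
        return Plan.CLIENT_SIDE;      // proc throws; JDBC layer works around it
    }
}
```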
CFDbTest 2.0 PostgreSQL regression tests passed for this code.
There is also the beginning of the Oracle delete-by-index stored procedures, but they're not ready for testing yet. Oracle database creation scripts may be broken as a result of this release including work-in-progress code for Oracle.
The DB/2 LUW 10.2 cascading deletes of complex objects have been coded and pass the test suite.
PostgreSQL 9.1 cascading deletes are now implemented and pass testing.
Sorry for the delay -- I was busy moving (again.)
The MySQL stored procedure JDBC bindings have been tested successfully. However, a number of the CFDbTest 2.0 test suite tests fail because MySQL's datetime and timestamp data types do not support the full date range required to properly store Java data. The code works, but the data ranges are limited by MySQL itself.
MySQL is also the first database to successfully perform a cascading delete of complex objects through stored procedures.
The MySQL 5.5 JDBC bindings have been coded and clean compile. They are now ready for testing. Note that the MySQL implementation is somewhat crippled because MySQL doesn't support dynamic SQL cursors, which are required for a complete implementation of the delete-by-index stored procedures when any of the indexed columns are optional.
One side-effect of authoring code that will at least sort-of work is that MySQL requires you to implement a manual cascading delete of objects in your code. The "Replace" option for the MySQL SAX Loader will therefore not work as expected for complex object hierarchies.
I am considering implementing an automatically generated manual cascading delete in the MySQL code layers, but I haven't committed to that yet. I really don't care if the MySQL implementation is crippled; I always expected it would be. Though MySQL does continue to mature as a database, it hasn't yet caught up to other vendors' and projects' offerings.
There are some limitations to the MySQL 5.5 stored procedures. In particular, MySQL does not support dynamic SQL cursors, so the delete-by-index implementations for indexes over optional columns signal errors because they can't be implemented properly with MySQL stored procedures.
In the meantime, there is a large population of application models that can be specified without the use of optional index attributes. It's a serious restriction on the delete functions, though, so be very careful not to have a parent-children, master-details, or container-components relationship that references an index over optional columns. The MySQL 5.5 code will raise errors if you specify such a relationship.
The sp_delete_xxx_by_suffix() routines still have syntax errors because I haven't looked into switching over to MySQL 5.5 cursor syntax from the equivalent PostgreSQL code yet. But the reads, locks, creates, updates, and main instance delete routines all clean compile on install to MySQL 5.5 under Ubuntu 12.04 LTS.
It's worth noting that MySQL 5.5 supports BLOB arguments to stored procedures, so no client-side SQL will be required save for dynamic SQL for nullable where clauses in some reads.
The new Java* tags in the Business Application Models have been implemented and tested. The CFBam 2.0 specification has been enhanced with the custom code to be wired in to the object model, and CFDbTest 2.0 was used to verify that all of the changes to the rules use the correct spellings for the new Java tags.
The Business Logic layers have been permanently excised from the manufactured code in favour of the custom code embedding approach.
Java*Import elements have been added to the 1.11 BAMs, tied in to the engine, and are about to be tested for CFBam 2.0.
The last of the bugs was induced by removing the BL distinguishing name part from some variables in the SAX Loader mainlines/CLIs. With that corrected, this should finally produce clean-compiling code for all of the application models under test (CFDbTest 2.0, CFBam 2.0, CFSme 2.0, and CFCore 2.0.)
Defects in the XML SAX Loader parser rules were corrected, along with a few other things. The database layer code hasn't been manufactured and compiled yet, because I've been stopping the jobs to repackage and update as the smaller projects have been compiling with errors. If I already know it's not going to be a clean build there's no point waiting for the job to finish.
This isn't even the beginning of migrated code, just a copy of the PostgreSQL stored procs in MySQL trappings. All I've converted so far is the .bash script syntax from invoking pgsql to using mysql.
So far as I know this also incorporates a clean-building version of the BL expulsion code.
The business logic layers have been expunged, and a test manufacturing of CFDbTest 2.0 is under way to verify whether this rule set release produces clean code or not. A number of errors have been corrected in the manufactured code.
This build is the first cut of removing the BL layers.
The *TableObj* code customization tags have been renamed to *Table*, because the custom code has to go in the database table implementations, not TableObjs.
There are now SchemaObj code bindings for the different layers of the architecture (Db2LUW, MSSql, MySql, Oracle, PgSql, Ram, and Sybase.)
Now I really am ready to purge the BL layers.
There were some missing bindings for the [Edit|Table]Obj[Interface|Members|Implementation] verbs. This was the last of the problems with the new functionality I've been working on to provide a means of manufacturing custom code for an application model at the various object interface points that had been previously serviced by the BL layer.
I am now ready to excise the BL layer itself in favour of the new functionality.
MaxLen was not specified for some of the new 1.11 attributes, resulting in a default MaxLen of 0.
The rules have been updated to reference the new bindings and some sample interface and implementation methods have been added to the AnyObj definition of CFBam 2.0.
Time for a test run.
The SME template was modified and propagated to all of the models so as to provide test data for the new parser. There were no errors running the parser for CFDbTest 2.0 with all of the new attributes specified.
Now I can finally get on with modifying the rules.
The new attributes are now wired as mixed-content elements of the SchemaDef and Table elements respectively.
Ready to modify the Java rule base.
I'm ready to modify the Java rule base to make use of the new code expansion members and their verb bindings. Once I've got all the rules updated to use the new attributes, I'll make the final change: defining the XSD attributes that match the new text fields, and parsing them in the SchemaDef and Table SAX element handlers.
Then this little foray into new functionality will be complete.
Once that is ready to go and tested, I'll be removing the BL layer definitions because instead you'll just define the code text fragments in your application model to write your custom code. This will make the code much easier to read and follow than the BL approach.
Remove the PgSql delete collision detection code. While collision detection is good for single-record deletes initiated by the client, the approach breaks down with object hierarchies and when updates of the deleted object have to be performed in order to prevent relationship-dependency loops (such as when the PrimaryKey reference of a Table to its sub-object Index objects would prevent the Index from being deleted due to the Table's reference to it, while the Index object's existence would prevent the Table from being deleted.)
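The Table/Index loop above is the classic mutual-dependency case: neither side can be deleted first. A tiny illustrative Java sketch (the class and field names are hypothetical) of the resolution, which is to update the doomed object so the back-reference is cleared before its components are removed:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of breaking a relationship-dependency loop during
// a cascading delete: the Table's PrimaryKey reference points at one of
// its own Index sub-objects, so neither row can go first.  Clearing the
// back-reference (an update of the object being deleted) breaks the loop.
public class CascadeDelete {
    public static class Table {
        public UUID primaryKeyIndexId;                 // reference into indexes
        public Map<UUID, String> indexes = new HashMap<>();
    }

    public static void deleteTable(Table t) {
        t.primaryKeyIndexId = null;  // update first: break the loop
        t.indexes.clear();           // now the Index components can be deleted
        // ...the Table row itself would be deleted last
    }
}
```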
Complex class hierarchy creation and updating has been implemented and tested for all of the databases.
The MySQL implementation had to invoke its class hierarchy explicitly, as that functionality is no longer implemented at the Obj layer. Each database implementation is expected to do its work as the result of one explicit method invocation, with any super/subclass invocations wired into the database layer code itself.
There was a common error in the MySQL creation scripts that saw a revision column being specified for each subclass table rather than just in the base class tables.
MySQL can only produce an update try block for subclasses that have data columns to be updated. MySQL doesn't like it if the only columns being updated are the index columns referenced in the where clause.
There was also a syntax error in the MySQL UPDATE statements -- a space was needed between the "?" and the "WHERE" keyword.
The complex inheritance tree insertion has been exercised and passed testing for CFDbTest 2.0 for the RAM, DB/2 LUW 10.1, Oracle 11gR2, Sybase ASE 15.7, and SQL Server 2012 persistence layers. MySQL and PostgreSQL have not been tested yet, so this isn't a beta release just yet.
The copyright notices have been updated for 2013, and firstname.lastname@example.org is now the properly attributed copyright contact instead of email@example.com.
Both the Oracle and RAM creation of complex objects now works correctly. There had been a common bug which was causing problems for both implementations (the PKey had to be populated after an insert for all objects, not just base class objects), but that has been corrected.
I expect the bugs in the remaining database implementations of the complex object creation to be fairly minor. There really weren't many problems with the Oracle inserts themselves; the problems were all with the shift in architecture from creating individual table records to having the leaf objects creating themselves as a complete whole using stored procedures. The only reason this hadn't been uncovered earlier is that the problems with this mental shift only show up when working with table class hierarchies, and I hadn't been testing those until recently.
Testing of the Sybase implementation of complex object creation highlighted similar errors to Oracle as well as a number of case-sensitivity issues regarding ClassCode columns. The same edits have been applied to the SQL Server rules as they were based on Sybase ASE.
SQL Server testing ran successfully on the first attempt, thanks to pro-actively applying the Sybase ASE edits to the SQL Server rules at the same time.
I am now out of database engines to use for testing on my laptop. The DB/2 LUW, MySQL, and PostgreSQL testing will have to be worked on after I get home tomorrow, as they all run on my Linux box, not my laptop.
In the meantime, I've applied known corrections to the DB/2 LUW and MySQL schema creation rules; PostgreSQL had already been corrected as it was originally going to be the first database I was to test for complex object creation.
Note that only PostgreSQL has the enhancements needed for object hierarchy deletion at this point, and that is untested code.
The object hierarchy support may well take longer than I had planned. While testing Oracle, I discovered I still had some old assumptions in the Obj layer code. Specifically, that code was trying to invoke the create/update/delete hierarchy rather than just directly invoking the backing store object and letting its implementation deal with any hierarchy invocations that might be necessary. This meant that several inserts were being invoked, resulting in duplicate key errors.
I've corrected that flaw and now the Oracle tests run ok for creating complex objects (as do the other Oracle tests), but I've broken the RAM implementation which had been relying on that erroneous/obsolete Obj behaviour. I'll have to fix the RAM behaviour tomorrow.
CFDbTest 2.0 now successfully runs the RAM tests for loading a complex object hierarchy. It was a long haul of debugging over the past week, but it finally works. I might not do any further work while I'm out of town just so that I can issue a release of the current code when I get back home.
I've also added a test to exercise the cascading deletes, 0035-ReplaceComplexObjects.xml. This test also runs successfully for the RAM implementation. It will exercise the new cascading delete stored procedures for PostgreSQL when I get home.
Today I learned that MySQL 5.5 added support for useful stored procedures. As a result, I'll need to plan on reworking the MySQL 5.5 support to use that ever-so-useful new feature. With any luck, MySQL 5.5 will be a fully functional database implementation once the migration to stored procedures is done rather than being a slower second-class citizen that has too many database I/O requests to match the performance of the other databases.
The CFDbTest 2.0 XSD and SAX Parser look good now. I'm ready to do a build and try running the tests again.
This release produces code that works for the older CFDbTest 2.0 tests, but the new object hierarchies aren't getting inserted properly yet. In particular, I need to decide whether to modify my approach to the data model used by CFDbTest 2.0 or change the rule base to accommodate a rare and likely-never-used-again modelling exception.
I'm not sure I'm going to have time to finish fixing the problem today (Sunday 2013-03-24), and I leave for an Easter vacation tomorrow, so this may well be the last release for a week or so.
OnlyOwnerRelations is now a bit of a misnomer, because after it builds a list of the owner relations to iterate, it adds the inherited container relationships as well. That way when you're encountering situations where an object is contained by a dispenser table you still get correct code manufactured.
The alternative was to require that both an Owner and a Container relationship be specified in those situations, which would result in code duplication and probably cause errors at runtime if you didn't use the relationship attributes properly. (One variant would be resolved while the other is still null. On second thought, it would work, but it'd cause duplicate resolution requests for the same key. More importantly it was an ugly option.)
The switch to Container instead of Owner is causing a problem manufacturing the code for MySQL, but DB/2 LUW and SQL Server are ok so far, so I believe the problem is actually an error in the MySQL rules.
There was a 1-character typo in the rules for the XSDs, so they have been corrected and I'm hoping they'll be ok with this release. The SAX parser itself now manufactures correctly; there was an error in all the models highlighted by reviewing the CFDbTest 2.0 SAX parser code (ContactList wasn't properly specified as being Contained by a Tenant instead of Owned by a Tenant as is the case with most other objects in SME.)
The SAX parsers will now generate correctly. There was still a little glitch where document root elements weren't filtered properly -- only instantiable objects without containers are root elements.
I've also corrected some errors in the XSD rules; hopefully they'll be ok with this release, but I won't know for 4 hours or so. If they're ok, I'll just update these notes. If not, I'll make another fix and publish another release.
On the bright side, I'm very, very close to completing the object hierarchy SAX Loader/Parser extensions to the rule base.
The CFDbTest 2.0 model didn't specify that the Table is contained by a SchemaDef, so the Tables weren't being properly contained in the SAX parser. The SAX parsers should be ok with this release.
There was also a typo in the XSD specification generation, which is also corrected. That should work fine as well.
On the bright side, the new iterator worked right out the box. :)
When parsing Superclass relationships, the relationship constructed is not an XSD container, though the default for IsXsdContainer is true. That default really should be false, because it's rarely a true value in a specification compared to the number of relationships where it's false.
The rules have also been extended to make use of the new iterator, so this version will hopefully produce a much saner XSD and parser set for object hierarchies.
In addition to a bug fix for the new ComponentCandidates iterator, I've made use of the new iterator in the XSD generation for the SAX parsers.
If it works ok, I'll make similar changes to the SAX parser code as well.
ComponentCandidates presumes that you're looking at container-component relationship, and builds up a list of valid subclasses of the ToTable which can be contained by the FromTable of the relationship. It actually isn't as complex a filter as I'd thought, though it was a bit of a head-twister to code.
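The filter described above can be sketched compactly. This Java model is illustrative only (the class and field names are simplified stand-ins for the engine's model objects): walk all subclasses of the relationship's ToTable and keep only those that declare they can be contained by the FromTable.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the ComponentCandidates filter: given a
// container-component relationship, collect the subclasses of the
// ToTable that can actually be contained by the FromTable.
public class ComponentCandidates {
    public static class Table {
        public String name;
        public Table superclass;    // single inheritance
        public Table containedBy;   // declared container, if any
        public Table(String name, Table superclass, Table containedBy) {
            this.name = name;
            this.superclass = superclass;
            this.containedBy = containedBy;
        }
        public boolean isSubclassOf(Table base) {
            for (Table t = this; t != null; t = t.superclass) {
                if (t == base) return true;
            }
            return false;
        }
    }

    public static List<Table> candidates(Table fromTable, Table toTable, List<Table> allTables) {
        List<Table> result = new ArrayList<>();
        for (Table t : allTables) {
            if (t.isSubclassOf(toTable) && t.containedBy == fromTable) {
                result.add(t);
            }
        }
        return result;
    }
}
```

Using the CFDbTest-style model as an example, a SchemaDef containing Values would yield only the *Type subclasses, while a Table containing Values would yield only the *Col subclasses.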
There were some issues with the CFDbTest 2.0 object model hierarchy specifications, so I fleshed out the container/component relationships, added in the *Col and *Type objects for the remaining Atoms, and generally gave the manufactured code a good think in light of issues with the CFBam SAX parsers and how that same problem is now staring me in the face for CFDbTest 2.0.
Fortunately, I have a solution: see programmer's notes for 2013-03-21.
Sure enough, problems were uncovered as soon as I tried to run the RAM tests with the newly defined rich objects to be tested by a new test case XML file. Specifically, I didn't use the extension syntax correctly for the XSDs, so every test complains about the errors in the XSD when it tries to load the XML file.
A fix is in progress. But I don't think I'll publish a release until I at least have the RAM tests working, if not PostgreSQL 9.1 as well.
Note that I'm not testing cascading deletes yet, just the creation of complex object hierarchies that rely on class/table hierarchies.
Later I'll add the test cases for the cascading deletes.
I've also been thinking about some issues with the way XSDs specify the object hierarchies, and the way the resulting SAX parsers are coded. I think I need a rather complex new iterator for the models, which takes the tree-walk of a component relationship (specifically, Columns in Tables and Types in the SchemaDef) and filters it for only the subclasses which specify that they can be contained by the current table definition. For certain pieces of code, this new filtration will be used to iterate the subclasses instead of doing an explicit rule-expansion tree-walk. I just can't think of any way to express this relationship in GEL itself without adding a number of very specialized and rarely used constructs. I'm also not sure the relationship is something that it makes sense to generalize in GEL for use by other applications.
Another thought that comes to mind is that I might start specifying component relationships with Narrowed clauses. I don't know exactly how I might make use of that in the rules themselves, but it makes sense to me that you should be able to narrow a component relationship in a subclass, where you know which subset of objects this table can really contain. I think one thing I'd want to do is to rewrite the original relationship's accessors to rely on the more specific accessors of the subclass. That way the general accessors can still be used from the base class, but they'll implement the tighter restrictions of the subclass.
The more I think about it, the more I think this solves my old modelling problem of specifying "one of many" relationships, a rather thorny modelling issue I'd encountered once many years ago. In fact, I think this type of filtration allows modelling of a more general class of problem than the one of many relationships did.
I will need to implement some Java changes as well. Specifically, the JDBC layers only need to consider subclasses which can be contained by the current object. Eliminating the extra "ifs" will speed up the code slightly as well as making the actual intended relationships of the model more apparent. This is in addition to the obvious use of the new iterator for walking the XSD-component relationships when producing the XSD schema and corresponding SAX parser.
I've switched my Linux development box to OpenJDK 7 to match the Oracle JDK 7 on my Windows box, and both are now running Eclipse Juno (3.8.)
All future builds of software will be JDK 7 compliant. JDK 6 will no longer be supported.
Table and SchemaDef now derive from Scope.
Value is initially contained by a generic Scope, and the *Col objects narrow that relationship to a Table reference, while the *Type objects narrow that relationship to a SchemaDef reference.
My goal is to eventually rework the CFBam model in this fashion. I'm just using CFDbTest to sketch out the conceptual model with as little overhead and clutter as possible, and reuse that effort to exercise the full suite of MSS Code Factory capabilities (i.e. single inheritance implementations of ORMs.)
The CFDbTest 2.0 model has been fleshed out to exercise the code variants produced by different databases including client-side support for TEXT and BLOB data in an object hierarchy so as to verify that client-side code takes over for all sub-classes of a table that incorporates TEXT or BLOB columns.
Replace should be specified as a command-line option to the database loaders so that it's only in effect during the testing of the object hierarchies and cascading deletes.
The model enhancements for CFDbTest 2.0 to be used to exercise cascading deletes and object hierarchy code are now ready to manufacture and test for the RAM and PostgreSQL 9.1 implementations.
Give or take a multi-hour manufacturing run and code checkin, that is.
The JDBC bindings for the new delete-by-index methods of the table interface are fully implemented for PostgreSQL 9.1 and ready to begin testing.
With that in mind, I'll shift gears for a bit now to CFDbTest 2.0, where I'll get the new modelling objects properly wired into the Tenant ownership hierarchy, and manufacture the full layer suite for the new objects of CFDbTest 2.0.
Once that's done, I can add some test cases using the new objects to implement on-load Replace behaviours to exercise the cascading delete implementations in the PostgreSQL stored procedures and the RAM implementation.
When I'm satisfied that the PostgreSQL implementation works, I'll release a beta.
The CFCore 2.0 GenKb schema installs cleanly to a PostgreSQL 9.1 database server with the new delete-by-index stored procedures running clean as well.
Next up: JDBC integration of the new stored procedures.
The delete-by-index stored procs needed to do an "execute stmt" in order to loop through the results of a dynamic query. Other than that, this should produce clean-installing PostgreSQL database creation scripts again.
Next I'll need to wire the new delete-by-index stored procs to the Java/JDBC code for PostgreSQL.
The delete-by-suffix stored procs all require the existence of the delete-instance stored procs, so the delete-instance stored procs are now created for all the tables before the delete-by-suffix routines are installed to the database.
Update all the rules to use the current object namings without relying on translation from the old naming conventions (largely an elimination of any use of a "Def" suffix unless it was required by the resulting compiled code.)
Correct typo in the manufactured where clause fragment of SQL caused by the shift to the use of string-returning fragments of code for binding the arguments to the dynamic SQL.
This version of the dynamic SQL support for deleting by indexes with optional columns generates clean for CFDbTest 2.0. That doesn't mean the contents of the files are valid, just that it finally ran without throwing exceptions.
The current RAM delete code shows how the stored procedure implementation in PostgreSQL is going to have to evaluate its class code selections to invoke the appropriate sp_delete_tablename() method within the database.
This is also how future support for audit logging will be implemented for delete actions. The stored procs will just need to be updated to insert the audit records with appropriate audit action codes.
The new RAM delete code has the first cut of implementing cascading deletes of Details, Children, and Components.
However, it does not properly check whether an object has sub-classes before invoking the table delete while iterating through the objects. If the object has sub-classes, the code needs to check the ClassCode of the instance and invoke the appropriate table delete method for that instance, rather than this table's "base" class delete method.
Think of it as hard-coded inheritance.
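A minimal sketch of that hard-coded inheritance for deletes, with illustrative names (the real code works against the manufactured table interfaces): check the instance's ClassCode and dispatch to the most-derived table's delete, falling back to the base table's own delete.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of ClassCode-based delete dispatch; the type names
// are stand-ins, not the manufactured CFDbTest/CFBam interfaces.
interface TableDelete {
    void deleteInstance(long id);
}

class DeleteDispatcher {
    private final Map<String, TableDelete> byClassCode =
        new HashMap<String, TableDelete>();
    private final TableDelete baseDelete;

    DeleteDispatcher(TableDelete baseDelete) { this.baseDelete = baseDelete; }

    void register(String classCode, TableDelete subclassDelete) {
        byClassCode.put(classCode, subclassDelete);
    }

    // Invoke the subclass delete when the ClassCode identifies a sub-table;
    // otherwise fall back to this table's own delete.
    void delete(String classCode, long id) {
        TableDelete target = byClassCode.get(classCode);
        (target != null ? target : baseDelete).deleteInstance(id);
    }
}
```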
CFCore and CFBam both specify object hierarchies. The new methods didn't have the right signatures because I hadn't considered being nested into the hierarchy of classes during the rule expansions.
The changes have been applied and propagated to all of the layers.
The Business Logic layer manufacturing has been re-enabled for the CFCore and CFBam projects. It had already been turned on for CFDbTest and CFSme.
CFParseEN is on hiatus -- its model needs refreshing for use with the current builds of MSS Code Factory 1.11.
The new delete-by-suffix methods have been stubbed into the tables and added to the table interface specifications. There is a first cut of code for the RAM implementation, but it's far from ready for testing; it just clean compiles.
I've been thinking about what, exactly, is "wrong" (or missing) in the implementation of deletes. Updates should be ok, but deletes need some work for object hierarchies.
However, the current CFDbTest 2.0 regression test suite does not currently incorporate an object hierarchy definition, so I've decided to cut bait by limiting the 1.11 tested functionality to "simple" table hierarchy models that can be safely mapped to the common relational database concept of specifying a relationship with an "ON DELETE CASCADE" semantic.
Sybase ASE 15.7 and SQL Server 2012 need enhancements to their sp_delete_[table]() methods to implement a cascading delete. In order to make this code more efficient, I'll need to enhance the engine with a verb to query whether a table has any cascading deletes specified (Children, Details, or Components relationships from the table to any table, including itself, so you can model dot-hierarchies.) Perhaps I'll call it something blazingly obvious like HasCascadingDeletes.
Then if a table HasCascadingDeletes, the deleting tables need to implement the delete of the sub-objects as an iterative invocation of the cascade table's delete function (sp_delete_[table]().) Otherwise you can tweak performance by doing a group delete of the cascade table records directly. For the sake of sanity, I'm going to add the whole index set of sp_delete_[table]_by_[suffix]() methods to the stored procs of all the databases so that the deleting table can just invoke one stored proc to delete n cascade records, and perform the iteration or direct JDBC delete statement in the body of the new stored procs.
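The decision above can be sketched as follows. This is a hedged illustration, not the actual engine verb implementation: HasCascadingDeletes is just "any Children, Details, or Components relations", and the delete-by-suffix body either iterates per-instance (so the cascade recurses) or issues one group delete.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the HasCascadingDeletes decision; names and the
// pseudo-plan strings are stand-ins for the real stored-proc bodies.
class CascadePlanner {
    // True when the table has any Children, Details, or Components
    // relations (including self-references for dot-hierarchies).
    static boolean hasCascadingDeletes(int children, int details, int components) {
        return children + details + components > 0;
    }

    // Rough shape of what a sp_delete_[table]_by_[suffix]() body would do.
    static List<String> planDelete(String table, boolean cascades) {
        if (cascades) {
            // Iterate so each instance delete recurses into its own cascades.
            return Arrays.asList(
                "open cursor over " + table + " rows matching the index",
                "for each row: call sp_delete_" + table + "(row id)");
        }
        // No sub-objects: a single group delete is the fast path.
        return Arrays.asList("delete from " + table + " where index columns match");
    }
}
```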
I'll start with PostgreSQL 9.1, as I do with all new features, and follow that with DB/2 LUW 10.1, Oracle 11gR2, Sybase ASE 15.7, and SQL Server 2012. In short, I'm going to expand the set of delete functions from instance operations to group operations by the domains of the indices. This feature is required in order to implement efficient cascading deletes in all of the databases that support stored procedures. Whenever possible, you want to avoid having any code that relies on instance-level operations unless they're required by the contract as they are for the sp_update_[table]() methods.
The MySQL 5.5 implementation will implement the "stored procedure" as a series of JDBC statements (ideally prepared statements), using Java as its "procedural language."
That should pretty much max out the performance of deletes for the 1.11 release.
You'll notice a "HasHistory" attribute. The code for that won't be implemented until the 2.x releases. Auditing requires identification, and I haven't fleshed out the Authentication code yet.
With regards to Authentication, I'm thinking in terms of identifying sessions with a Kerberos ticket in-cluster, and a UUID string externally. I don't want raw Kerberos tickets flying around between clusters. And I'm ok with requiring remote users to log in to a remote server explicitly in order to reference its resources. If the client implements a decent Wallet function, it's even painless for the user.
I realized today that I have a whole lot of work to do on deletes yet:
In a nutshell, gobs of functionality to implement the correct behaviours have to be shifted from the ON DELETE semantics to stored procedures. Also, deletes need to make sure they're invoking the correct "version" of a delete in the object hierarchy for classes, and call any subordinate delete stored procedures instead of doing the work themselves if they're called as a super/base class delete of a sub-object.
Sybase ASE and SQL Server don't support cascading deletes in the first place, so they would have needed to have stored procs implemented anyhow.
Once that's done, I need to implement client-side cache cleanups after a delete is processed. I've always known that needed to be done, but I've been putting it off.
I also want to add more to the CFDbTest 2.0 test suite. I don't properly exercise client-side id generation for schemas nor tables. I also want to define a simple object hierarchy which also includes forced-client-side code by incorporating BLOBs in at least one subclass (BLOBs can't be passed as stored proc arguments, so you have to use slower client-side code in those cases. Even more fun is the blended code using stored procs for the base classes and client side code for subclasses to finish the job.)
The test suite has to exercise far more functionality than it does right now to "prove" the technology works, including some test cases of cascading deletes once I've got those coded (by specifying "Replace" for an XML loaded object with an initial "rich" instance hierarchy that defines sub-objects, then a repeat of the object with or without the sub-objects so that the "root" of that tree gets deleted and removed from cache auto-magically.)
Without stored procedures available, I'm going to have to simply flag the MySQL implementation as "severely restricted" and leave it at that. I have absolutely no intention of trying to replicate the complex stored procedure code through a series of explicit client-side SQL statements. Enough is enough. MySQL is not a "real" database.
And here I thought I was nearing production...
The defects in the table id generators have been corrected for all the databases. In the case of the RAM database, they hadn't even been implemented yet.
The CFDbTest 2.0 regression tests have been run successfully for all the supported databases and for the RAM implementation.
The rework of the table id generators (and actual implementation thereof for the RAM layer) has been coded for all of the layers and is ready to begin regression testing.
This means recreating each of the databases and re-running the CFDbTest 2.0 suite twice -- once with a "clean" database, and again with a "dirty" one.
This will take some time...
The changes for the table id generation have been propagated to all of the databases, and the unwrapped id generator APIs have been made part of the standard table interface so hidden casting is no longer necessary.
The RAM implementation still doesn't have table id generation coded, so this release isn't quite ready for a follow-up beta.
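The "no more hidden casting" point above amounts to something like this sketch (illustrative names): the id-allocation method is declared on the standard table interface itself, so callers hold the interface type instead of down-casting to a vendor-specific implementation class.

```java
// Illustrative sketch: id allocation as part of the standard table
// interface, so no vendor-specific down-cast is needed by callers.
interface ITestTable {
    long nextTestId();
}

class RamTestTable implements ITestTable {
    private long next = 0L;
    // The RAM layer just counts; JDBC layers would invoke a stored proc.
    public long nextTestId() { return ++next; }
}
```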
This release incorporates an attempted fix for the MySQL compile problems being caused by references to Table id generators at the Schema level. An inverse of the RAM implementation fix was used.
The Tenant has to own the test data for CFDbTest 2.0.
This also means that the RAM implementation is going to stop working because the table id generators aren't implemented yet. They'd only worked because of code defects that produced schema-level id generators for all the table id generators. That was incorrect code. The fix is in progress.
There are no more exceptions being thrown during rule processing, so the SME template is finally in sync with the rule work I've been doing today. CFSme 2.0 should also clean-manufacture and clean-compile as well, as it's a subset of the model in CFCore.
CFBam 2.0 has been refreshed and clean compiles as well.
Too much code is breaking with me messing around with OwnerRelations, so I spun a new verb called OnlyOwnerRelations for the RAM rules I need.
Parent and Master relations are now considered as ownership candidates as well as actual Owner relationships. Hopefully this is enough to satisfy the oracle generation rule problems I'm seeing for CFSme.
CFDbTest and CFSme need to be retested as well, but I don't expect problems.
*sigh* Work in progress....
When OwnerRelations is expanded, it now includes inherited owner relations, so you don't have to keep redefining owner relations at each layer to satisfy the reference to the owner object instance. Only the base class should define them; otherwise you bog down the database re-validating an owner relation that is already validated and probably part of the primary key for the object (which means all the subclass objects are using a validated piece of information as well, because of the foreign key relationship to their superclass.)
The code layers have been stitched together by incorporating an extra layer of indirection between the primary key of a dispenser table and the arguments used to pass the primary key attributes to the relevant method. I had a far more complex and time-consuming solution to the problem in mind until I realized the reason I had things coded for the databases was that there was an extra level of indirection where a stored procedure was defined taking the primary key attribute arguments instead of a typed structure.
However, it would seem I neglected to implement the RAM table id generators themselves, so this code still won't work.
'nuff said. Don't download this one, it's for my internal use.
I need to rework the id generation in the Ram layer to take fixed position arguments which are then wrapped into a PKey object for the dispenser def. That resolves the name translation issue that I'm having in the Address objects, for example.
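The rework described above can be sketched like this, with illustrative names and key columns (the real dispenser keys depend on the model): the generator takes fixed positional arguments and builds the PKey internally, so callers never touch the column-name translation that was breaking the Address objects.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a dispenser primary key; the real key columns
// depend on the modelled dispenser definition.
final class DispenserPKey {
    final long tenantId;
    final long scopeId;
    DispenserPKey(long tenantId, long scopeId) {
        this.tenantId = tenantId;
        this.scopeId = scopeId;
    }
    @Override public boolean equals(Object o) {
        if (!(o instanceof DispenserPKey)) return false;
        DispenserPKey k = (DispenserPKey) o;
        return tenantId == k.tenantId && scopeId == k.scopeId;
    }
    @Override public int hashCode() { return (int) (31 * tenantId + scopeId); }
}

class RamIdGenerator {
    private final Map<DispenserPKey, Long> next =
        new HashMap<DispenserPKey, Long>();

    // Fixed positional arguments; the PKey is wrapped here, not by the
    // caller, so there is no name translation at the call site.
    long nextId(long tenantId, long scopeId) {
        DispenserPKey key = new DispenserPKey(tenantId, scopeId);
        Long cur = next.get(key);
        long val = (cur == null) ? 1L : cur + 1L;
        next.put(key, val);
        return val;
    }
}
```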
The id generators for the schema level were not being properly checked for dispenser definitions. The schema technically owns all the id generators, at least as far as the model is concerned. But if it has a dispenser def, it's really "owned" by it's table and should not be included in the schema's list of id generators.
This became apparent while watching some of the Sybase stored proc code being written for the schema, and then rewritten for the table. Those cases should only have been written by the table.
It was mere fluke that I noticed the problem at all.
The layered configurations now incorporate java+xml, java+xml+db2luw, java+xml+mssql, java+xml+mysql, java+xml+oracle, java+xml+pgsql, and java+xml+sybase.
The references to the old xsd rulesets have been removed; they're superseded by the xml rules that generate an XSD for the loader files.
The CFSme, CFDbTest, and CFParseEN projects now use the layered rule sets instead of the msscfengine rules. Only the CFBam 2.0 project should be relying on msscfengine.
The various *layered* rule sets in the cartridges that are supposed to be used for generating applications now incorporate all of the databases as they should.
The database generation has been removed from the msscfengine rules now that I'm done exercising the rules with CFBam 2.0. CFBam 2.0 won't need a database, at least not until around release 2.3-2.4 or later. For now, the engine will be sticking with XML configuration files. Some day I'll bring back the database code and use it to write a distributed shared editor GUI.
Note that the double and float modelled types are now restricted to the range supported by SQL Server instead of the full range defined by Java for the types. While SQL Server is the only database that can't handle the full Java range, portability is enough of an issue that the overall system must comply with that restriction.
CFDbTest 2.0 now runs successfully.
I fail to understand how it is that SQL Server 2012 is the only database that cannot deal with a standard Java double properly... but I need portability, so the value range of doubles is now compliant with SQL Server 2012 instead of Java and JDBC.
However, this build should resolve the problems with CFDbTest 2.0.
Unfortunately, we don't have a full working version of an ORM for SQL Server 2012 just yet. There seems to be a problem with mapping of doubles and/or floats from SQL Server to x64 Windows 7 Java 7.
About a half dozen or so of the test cases now run successfully, not including the trivial type underflow tests.
The CFDbTest 2.0 database now installs cleanly for Microsoft SQL Server 2012 Express Advanced Edition.
The Java 6/7 JDBC integration layer for the database is also ready for testing, so the next step is to install the JDBC jar file, wire up the run scripts, export the jars, and initialize the connection configuration file.
Once I have a connection established, I can proceed with testing the JDBC integration layer for SQL Server 2012.
SQL Server doesn't allow text parameters in the same fashion that most databases don't allow blobs. So it'll use a few more cases of prepared statements from the client side than the other databases do. In the end, the performance will suffer a little because a separate id allocation invocation will usually have to be done by the creates, slowing down any data loading services written with the code.
There are still problems with the update and delete methods, because SQL Server apparently doesn't support raiserror or does so with a different syntax. I'll have to look into how to fix that and get another update out when I do. Once that issue is addressed, the stored procedures should clean-install. Right now the creates and reads clean install, as do all of the database owner, schema, rules, data types, tables, indexes, and relations.
This release incorporates the first cut of Java 6/7 support for Microsoft SQL Server 2012 Express Advanced Edition, based on Sybase ASE 15.7 as a template implementation.
The MSS Code Factory web site has been reviewed and refreshed as necessary.
This is the first cut of Microsoft SQL Server 2012 Express Advanced Edition support, with a fresh migration of the stored procedures from Sybase ASE 15.7.
There is still a lot of work to do. Microsoft's "convert()" has different arguments than Sybase's, if I recall correctly. That's at a minimum for the changes still required.
In the meantime, this version of the rules runs clean to create an untested set of scripts.
The Sybase ASE 15.7 regression tests for CFDbTest 2.0 run successfully.
There is still a problem for tables with a short/smallint id column, but I suspect the problem is actually with the invocation of the id generation stored proc. I'll know better after more debugging.
In the meantime, all the other tests run successfully, including the insertion of a BLOB to an IMAGE column.
The fact that at least some of the inserts via stored-procedure invocation are working does seem to indicate that the code is on the right track. However, it would seem that many of the stored procedures for doing inserts aren't getting an id value properly, so they're trying to insert null id column values.
The CopyrightPeriod verb/accessor for schemas has been added to the engine and is used in the rules for file headers, so code can now properly specify a copyright year range in the model.
All of the database I/O implementations were leaking JDBC Statement instances where dynamic SQL was used, and they were leaking ResultSet instances everywhere except in some of the Oracle code.
The leaks have been corrected across the board; the resources in question are now properly close()d.
In addition to the ASE 15.7 code, this version of MSS Code Factory fixes a ResultSet leak that was prevalent throughout all the code except for some of the Oracle JDBC code. ResultSets have to be explicitly close()d. This is now done in finally blocks to ensure that memory leaks do not occur.
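The leak fix boils down to the finally-block pattern below. Sketched here with stand-in types so it runs without a database; with the real JDBC types the shape is identical (and on JDK 7, try-with-resources would do the same thing).

```java
// Illustrative stand-ins for java.sql.Statement/ResultSet so the pattern
// can run without a database connection.
class FakeResultSet {
    boolean closed = false;
    void close() { closed = true; }
}

class FakeStatement {
    boolean closed = false;
    FakeResultSet executeQuery(String sql) { return new FakeResultSet(); }
    void close() { closed = true; }
}

class LeakFreeReader {
    static FakeResultSet lastResultSet;

    static void readAll(FakeStatement stmt, String sql) {
        FakeResultSet rs = null;
        try {
            rs = stmt.executeQuery(sql);
            lastResultSet = rs;
            // ... iterate the rows here ...
        } finally {
            // Closed on every path, including exceptions, matching the
            // across-the-board fix described above.
            if (rs != null) rs.close();
            stmt.close();
        }
    }
}
```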
The entire Sybase ASE 15.7 database creation runs clean, including the creation of the stored procedures and functions that will be invoked by the JDBC layer.
This is the first cut of the Sybase stored procs, ready to be run against the database. I'm sure bugs will show up in the stored proc code, though the syntax looks ok.
I've been informed that the download of MSS Code Factory 1.11.5661 is detected as containing an "Artemis" virus by McAfee. All builds are done using Linux, and the only code that isn't produced by Eclipse are the .jar files for PostgreSQL JDBC, Xerces, Log4J, and the Commons Codec package.
If you are still seeing viruses detected after this update, I suggest you contact your antivirus vendor, because if my box is infected, so would any other box using these .jar files from Ubuntu 12.04 LTS.
I have heard reports that the PostgreSQL installer gets reported as infected under Windows. Perhaps it's the JDBC driver for PostgreSQL that is causing the problem, so I've removed that and the old JDBC test suite program from the system.
The Oracle PL/SQL-JDBC integration layer now passes all the regression tests in CFDbTest 2.0. However, the Oracle implementation does not (and can not) use prepared statements for invoking the stored procedures because there is no way to bind the output parameters with a prepared statement (the methods don't even exist.)
There are some issues remaining with CLOB/TEXT support. I'll need to dive into the debugger yet again to see where the problem lies before I can determine approaches for fixing it or search for suggestions and hints on the internet.
I rarely find exact answers, just hints as to what kinds of things can cause a given error code that I'm seeing. After that it's a lot of SQL*Plus elbow grease to figure things out.
Most of the test cases are still failing, but at least a few are running so I know the code templates for passing back result sets work. Some of the failures are due to command-line parsing differences between Linux and Windows. For whatever reason, the colon separators aren't getting passed in by the DOS shell under Windows 7. It's been a long time since I used DOS, but I found no indicators anywhere that you need to escape colons. Maybe I'll change the separator to the splat (@) or bang (!).
The CFBam 2.0 Oracle code now clean compiles, so the rule set is "clean" and ready to begin testing. I decided I wanted to make sure CFBam clean compiled before I started tweaks and fixes during testing.
Oracle doesn't have a native boolean type like PostgreSQL, so I had to use the same Y/N flag mapping layer that I did for DB/2 LUW.
Clean compile. Ready to test.
The Oracle JDBC code to invoke the stored procedures has been coded, clean compiles, and is ready to begin testing.
There are nine more stored procedure invocation templates to be converted to the syntax Oracle requires for returning a result set cursor from a stored procedure.
The resulting Oracle-specific code looks nothing like that of the JDBC for PostgreSQL or DB/2 LUW, which just require changes to the syntax used to invoke procedures or functions that return result sets and share the rest of their structure and style in common.
The Oracle PL/SQL stored procedures use sys_refcursors to return result sets to the JDBC layer as OUT parameters of the procedures. They were originally coded to return the cursors from functions a la DB/2 and PostgreSQL, but it turns out Oracle and JDBC require some special handling for ref cursors, so you have to use OUT parameters instead.
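The call shape that requires can be sketched as below. Since actually executing it needs an Oracle driver and database, the sketch just builds the anonymous-block call string; sp_read_example, the parameter ordering, and the argument count are all illustrative assumptions, not the actual manufactured procedure names.

```java
// Illustrative builder for the Oracle anonymous-block call used to invoke
// a procedure whose first parameter is an OUT sys_refcursor. The proc name
// and parameter layout are assumptions for the sketch.
class OracleCallBuilder {
    // First ? is the OUT ref cursor, remaining ?s are IN arguments.
    static String buildCall(String procName, int inArgs) {
        StringBuilder sb = new StringBuilder("begin " + procName + "( ?");
        for (int i = 0; i < inArgs; i++) {
            sb.append(", ?");
        }
        return sb.append(" ); end;").toString();
    }
}
```

With a live connection, a string like this would roughly be passed to Connection.prepareCall(), the cursor parameter registered with registerOutParameter(1, OracleTypes.CURSOR), and the result set retrieved afterwards via (ResultSet) cs.getObject(1).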
The DB/2 LUW 10.2 stored procedure tuning is complete and passes the CFDbTest 2.0 regression test suite.
A preview of the Oracle stored procedures is also included, as I was working on those while away from my DB/2 server today.
There is still a problem with deletes that I need to look into, but all the other tests pass with flying colours.
The DB/2 LUW tuning and testing are moving along quite rapidly compared to what it took to get PostgreSQL working. This time I don't have any variable-list argument-list mismatches between the stored procs and the JDBC code, because those issues were ironed out during the original PostgreSQL testing.
I believe the biggest issue is the implementation of the JDBC invocation of id generation stored procs. I'll have to do some digging to find out what the DB/2 equivalent to "SELECT storedproc()" with PostgreSQL is. I don't want to have to resort to the Oracle porting mode that DB/2 supports; I want to stick with pure DB/2 syntax if I can.
The JDBC now looks good for CFBam 2.0. Time to start working on migrating the PostgreSQL stored proc JDBC with the DB/2 LUW JDBC.
Both PostgreSQL 9.2 and DB/2 LUW 10.1 install cleanly for CFBam 2.0.
Note that there *are* some relationships that DB/2 refuses to install, and DB/2 is technically correct to reject them. However, to fix the problem I'd need to dynamically switch between "ON DELETE RESTRICT" and "ON DELETE SET NULL" for relationship constraints, and I'm not quite sure how I'd go about doing so yet.
But I will get those last niggling details fixed some day. In the meantime, the code should work. There are just a few cases where you could end up with broken relationship links with DB/2. I didn't say things were *perfect* yet.
CFBam 2.0 is my "acid test" that really does everything I can think of from a modelling perspective. As such, it takes a fair bit more effort to get CFBam 2.0 to integrate cleanly with a vendor tool than it does to get CFDbTest 2.0 to integrate. CFDbTest is much simpler.
Still, that's 383,208 lines of clean-installing code for DB/2 LUW in less than a week, for an estimated 54,744 lines per day, 6,843 lines per hour, or 114 lines per minute. Not bad for a "dinosaur."
There were some pretty significant changes to the code since the first cut, but they finally clean compile.
I'm ready to begin testing the creation of the DB/2 LUW 10.1 stored procedures. But not tonight.
The last of the bugs in the PostgreSQL stored procedures and JDBC code has been squished. PostgreSQL support is now 100% functional again, and ready for beta testing.
The communications for floats and doubles has been made consistent and now works. The problem with the deletes has been corrected.
Soon it'll be time for Beta 1.
I have several of the float/double errors with bad values (bizarre), two UUID string conversion errors, and a problem with the deletes.
Other than that things are finally working. Not perfect yet, but getting close...
With the underlying syntax error corrected and the new version of CFCore in place, it was time for a new release.
If the compile of a sub-object/statement fails, the statement compilers return null. This was causing a null pointer exception in the MSSBamCFGelCompiler.compileMacro() implementation, because it always tried to add the compiled instruction to the parent's execution list, even if the compiled instruction was non-existent.
A subtle nuisance to this problem is that because of Java's buffered IO, the error message that was being logged before the null pointer exception was thrown was never seen, making it impossible to debug the actual error in most cases.
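A minimal reconstruction of the fix, with stand-in types rather than the real MSSBamCFGelCompiler structures: guard against the null before touching the parent's execution list, and report the failure through an unbuffered channel so the diagnostic is not lost when a later failure kills the process.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a compiled GEL instruction.
class Instruction { }

class MacroCompiler {
    final List<Instruction> execList = new ArrayList<Instruction>();
    final List<String> log = new ArrayList<String>();

    // Returns false (after logging) instead of appending null and blowing
    // up later with a NullPointerException.
    boolean addCompiled(Instruction compiled, String sourceRef) {
        if (compiled == null) {
            String msg = "Compile failed for " + sourceRef;
            log.add(msg);
            // stderr is unbuffered, so the message survives a crash --
            // the buffered-IO nuisance noted above.
            System.err.println(msg);
            return false;
        }
        execList.add(compiled);
        return true;
    }
}
```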
And an annoyingly large number of them are those float and double value errors, which makes no sense to me whatsoever.
Roughly half of the PostgreSQL tests for CFDbTest 2.0 now run successfully.
I am most puzzled by the errors indicating that 100.0 is not a valid value for doubles and floats. I could probably knock out another quarter of the failing tests by fixing that problem, but it's a head scratcher it is. :)
I'm getting closer to having the performance-tuned code working. One of the more bizarre problems I'm having seems to be with 16-bit id generators producing zeroes, and with an error message that +100. is an invalid value for a float/double.
The stored procedure signatures now match, but I'm getting some errors from the cast()ing that I'm doing now, as well as some glitches with [TZ][Date/Time/Timestamp] values.
Still, I made excellent progress today.
I've been doing a lot of debugging, attempting to reimplement the stored procedures using the record type approach suggested on Stack Overflow.
However, in their case, they're only returning a single record, not a recordset. I'm not sure you can autocast a query select the way I'm trying to do in this version of the code.
At this point, some of the inserts and updates work, as well as name resolutions. However, I'm having problems with the text columns being passed as varchar types in PostgreSQL, which may mean that like a BLOB, you can't pass a TEXT type into a stored procedure.
The PostgreSQL stored procedures now install cleanly to the database. There had been some problems with 64 bit table id generators, and with UUID keys for tables.
I am now ready to begin testing the stored procedure code.
With the shift to the new SourceForge platform, I thought it best to start with a rebuild of the existing code to make sure the releases are on the same page as Subversion after the migration.
There is only one minor fix since 1.11.5212, a change to getRequiredRevision() from getRevision().
There was one last bug found while building CFDbTest 2.0. Tonight or tomorrow I'll be able to do a test build of CFBam 2.0.
The new JDBC implementation using the stored procedures for PostgreSQL 9 clean compiles for CFSme 2.0, the simplest of the test cases.
There was one more error in the rules that had to be corrected in order for CFSme to clean compile.
This one has *got* to clean compile already!!!
There were a couple of exception handling cases to deal with and a couple of typos, but the code should now clean compile for the basic CFSme 2.0 and CFDbTest 2.0 test cases.
I'll have to wait until tomorrow to determine if CFBam 2.0 also clean compiles with this new code base.
This is the first cut of the last of the PostgreSQL stored procedure bindings that I know of right now.
There may be some _cc reads that need to be implemented yet; there is no sp_read_cc_by_tablename() for indexes whose columns are not all required. Those are *supposed* to use dynamic SQL, because the statement format changes depending on whether the arguments are null or not, so I may just have noticed a few of those in the code while I was reviewing things.
I really must look into that and form a definitive answer.
The PostgreSQL Schema JDBC now uses PreparedStatement buffers retained by the Schema instance for holding on to the compiled SQL for the sp_next_$lowerName$() id generation invocations.
An implementation of releasePreparedStatements() has been added to the PgSql Schema, which not only clears the Schema's prepared statements, but those of the subordinate table JDBC bindings.
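The retained-statement scheme looks roughly like this sketch, using stand-in types so it runs without a database (the real code caches java.sql.PreparedStatement instances compiled from the id-generator SQL): one statement per SQL string, reused on every allocation, and one release call that clears the schema's statements and its tables' statements.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Database-free stand-in for java.sql.PreparedStatement.
class FakePreparedStatement {
    boolean closed = false;
    void close() { closed = true; }
}

class TableBinding {
    final Map<String, FakePreparedStatement> stmts =
        new HashMap<String, FakePreparedStatement>();
    void releasePreparedStatements() {
        for (FakePreparedStatement ps : stmts.values()) ps.close();
        stmts.clear();
    }
}

class SchemaBinding {
    final Map<String, FakePreparedStatement> stmts =
        new HashMap<String, FakePreparedStatement>();
    final List<TableBinding> tables = new ArrayList<TableBinding>();

    // Compile once, reuse on every id allocation for the same SQL.
    FakePreparedStatement prepare(String sql) {
        FakePreparedStatement ps = stmts.get(sql);
        if (ps == null) {
            ps = new FakePreparedStatement();
            stmts.put(sql, ps);
        }
        return ps;
    }

    // Clears the schema's statements and those of its subordinate tables.
    void releasePreparedStatements() {
        for (FakePreparedStatement ps : stmts.values()) ps.close();
        stmts.clear();
        for (TableBinding t : tables) t.releasePreparedStatements();
    }
}
```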
Corrected delete parameters at invocation and removed old revision checking code in the delete implementation as that check is now done by the stored procedure.
Both CFBam and CFDbTest clean compile.
The updates have yet to be addressed, and I still need to look into the ClassCode query usage by the "Derived" APIs in CFBam.
There were a couple of typos remaining for BLOB code that had to be fixed.
It also looks like I need to update the "derived" APIs to use the stored procedures, at least for BLOB objects. I haven't checked other inheritance objects to see if the same problem is there. Of course for CFDbTest, which has no inheritance, there is no problem.
The reworked code compiles cleanly now. It's not done, but I figured I'd best make sure what I had so far would at least compile before proceeding with the remaining changes to the "update" methods.
The SQL statements for the PostgreSQL cursors have been reworked to use the buffer selects. That means that if you open a cursor, it will *not* expand sub-objects properly, but only map the attributes down to the cursored object, not its subclasses. This is non-intuitive, but I can't think of any sane way to merge result sets whose column collections differ. In reality I expect cursors to be used primarily by older-style non-inheritance application models, such as accounting and banking systems that need to run overnight batch jobs.
Delete implementations need to be looked into. There is a disconnect between the stored procedures and the JDBC argument lists at this time.
Update implementations need to be reworked yet.
It turns out that when I fixed the resource loader to load the rules from bundled resources under Linux, I broke the code that used to work under Windows. Rather than figure out how to implement dual-mode code, I'm opting to simply bundle the rules in the distribution, remove them from the resources themselves, and let users configure their .msscfrc file to reference the rules they've extracted.
I still need to refresh the "update" method rules similarly to what I've done with the "create" rules, but the code so far clean compiles.
I also want to modify the implementation of the schema "next" methods to use PreparedStatements instead of dynamic SQL. It's a tweak that only really affects a small subset of the possible manufactured code, but this is my prototype on which all other stored procedure implementations will be built, so I want all the details taken care of.
It's all completely untested, of course. First I have to finish writing the new code before there's anything to test.
The typos that were causing problems with manufacturing the new stored proc support have been corrected. I'm almost ready to begin testing this new version.
This version of the PostgreSQL integration rules should be using the new stored procedures whenever possible for creating objects.
The BLOB branch of code for creates has been reworked to chase the inheritance tree and thereby populate the id from the base class allocation of an id, propagated through the call hierarchy.
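The call-hierarchy propagation can be sketched like this; the class and method names are hypothetical stand-ins for the manufactured code, with a counter standing in for the id-generator round trip:

```java
import java.util.ArrayList;
import java.util.List;

public class HierarchyCreateSketch {
    // Stand-in for the id generator; in the real code this is a DB round trip.
    private static int nextId = 0;

    // The base-class create allocates the id exactly once.
    static int createBase(List<String> sqlLog) {
        int id = ++nextId;
        sqlLog.add("INSERT INTO base_table ... id=" + id);
        return id;
    }

    // A subclass create first chases the inheritance tree down to the base,
    // then inserts its own row reusing the id allocated there.
    static int createSub(List<String> sqlLog) {
        int id = createBase(sqlLog);
        sqlLog.add("INSERT INTO sub_table ... id=" + id);
        return id;
    }

    public static void main(String[] args) {
        List<String> sqlLog = new ArrayList<>();
        int id = createSub(sqlLog);
        for (String s : sqlLog) System.out.println(s);
        System.out.println("allocated id " + id);
    }
}
```

The point is that only the base-level create ever touches the id generator; every subclass insert reuses the propagated value.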
I decided not to bother with re-reading the inserted object at this time, as we know it contains a BLOB which is going to be horrendously inefficient to re-read if the inserted BLOB was of any substantial size.
Next up: Apply the same structural changes to the update code.
I've been reviewing the manufactured code and correcting defects that I saw as I did so. For example, the tables are supposed to implement a read by primary index for the base table, which was only half happening until now. The DbIO side of the code existed, but not the stored procs the code invoked.
The table-dispensed id generators now use their PostgreSQL stored procedure implementations instead of inline SQL, a 3:1 reduction in the number of database I/O requests to be performed. Even the remaining 1 will be eliminated for objects which don't specify BLOB attributes in the base class table, as they will invoke the id generator stored proc directly during the create_dbtablename() processing.
The readBuff and delete methods of the tables have been updated to use the PostgreSQL stored procedures. The insert and update methods will take a little longer, as I need to implement dual-mode code that either uses the stored procedure or dynamic/embedded SQL to support BLOBs.
The "lock" procedures are like the instance "read" procedures, except that they specify the "for update" clause in the SQL.
InheritsBlobDef was looking for SuperClass relations when it should have been looking for Superclass relations. The tags are case sensitive.
There are not supposed to be "by" read procs created for the primary indexes of tables. You're supposed to use the sp_read_dbtablename() proc instead.
The new GEL verb Table.InheritsBlobDef has been added. No database that I know of lets you create a stored procedure that takes a BLOB as an argument, so for BLOB-bearing objects it's necessary to use precompiled SQL in the client instead of a stored procedure to implement the Create and Update functions.
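A sketch of how such a verb can be evaluated: walk the Superclass chain and report whether any table in it carries a BLOB column. The types here are hypothetical stand-ins for the model objects, not the engine's actual API:

```java
public class InheritsBlobSketch {
    // Minimal stand-in for a table definition with an optional superclass.
    static class TableDef {
        final String name;
        final boolean hasBlobColumn;
        final TableDef superClass; // null at the root of the hierarchy
        TableDef(String name, boolean hasBlobColumn, TableDef superClass) {
            this.name = name;
            this.hasBlobColumn = hasBlobColumn;
            this.superClass = superClass;
        }
    }

    // True if any table in the hierarchy has a BLOB, in which case Create
    // and Update must fall back to precompiled client-side SQL.
    static boolean inheritsBlobDef(TableDef t) {
        for (TableDef cur = t; cur != null; cur = cur.superClass) {
            if (cur.hasBlobColumn) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TableDef base = new TableDef("base", false, null);
        TableDef blobby = new TableDef("blobby", true, base);
        TableDef leaf = new TableDef("leaf", false, blobby);
        System.out.println(inheritsBlobDef(leaf)); // blobby is in the chain
        System.out.println(inheritsBlobDef(base));
    }
}
```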
The goal is to replace all of the SQL that is currently implemented as precompiled statements in the PostgreSQL client as stored procedures. If you can precompile it, you can make it a stored proc.
This will allow PostgreSQL to cache query execution plans for the reads, improving response time by eliminating the query execution planning expense that comes with dynamic SQL queries.
In the case of inserts, the cost of an insert goes down to one database IO instead of two (one to allocate the id, one to do the actual insert.) Even greater savings are realized by class hierarchies on inserts, as entire hierarchies of tables are inserted by a single stored proc. (Unless it's got a BLOB. Damn BLOBs.)
Updates for "regular" tables don't see a net gain for update performance, but class hierarchy updates reduce n update database IOs to one (with the exception of BLOB-involving tables.)
I also seem to recall that I neglected to make a precompiled statement buffer for "begin transaction". What's there will work, but it's not peak-performance code, so I'll fix it.
The DB/2 LUW 10.1 support code now works. I'm getting a null pointer exception for one test case that I need to dig into, but other than that everything is working.
I suspect the one test case may be one of the BLOB test cases. I'll look into it later.
The MySql database test suite now runs fine except for a bug that is unrelated to the database I/Os themselves.
UUID VARCHAR/CHAR mappings to the databases now consistently specify a 36 character UUID string buffer. I have no idea how a UUID of 20 characters crept into the Oracle and MySql rules.
Regardless, it's fixed now. For all the databases.
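For reference, the canonical textual form of a UUID is exactly 36 characters -- 32 hex digits in an 8-4-4-4-12 grouping plus four hyphens -- which is easy to confirm:

```java
import java.util.UUID;

public class UuidWidthCheck {
    public static void main(String[] args) {
        // 8-4-4-4-12 hex digits plus four hyphens = 36 characters
        String s = UUID.randomUUID().toString();
        System.out.println(s + " (" + s.length() + " chars)");
    }
}
```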
MySql 5.5 is about as ready to use as it's going to be until I can figure out some bizarre bugs with the test suite execution that have nothing to do with the MySql code itself.
The reason some of the database creates were working and others weren't is mostly due to typos in the copy-paste id generator code, where I forgot to update the table names correctly. That will squash most of the bugs, except for a bizarre range checking exception when running the MySql tests that doesn't show up in the Ram tests. As both sets of code are running the same core code, either both or neither should fail.
Rather than try to puzzle this out, this release is deemed FreeCode worthy.
The MySql 5.5 database logins work now. Some of the tests run successfully.
This update includes a fix to the id generators. It turns out MySql is case-sensitive about table names in its SQL, so I had to make some of the code a little more consistent.
There were erroneous uses of to_char instead of date_format in the queries. The format strings had been updated, but not the function name.
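The dialect difference amounts to which rendering function gets emitted; a minimal sketch in Java, with illustrative format strings rather than the exact ones the rules produce:

```java
import java.util.HashMap;
import java.util.Map;

public class DateExprSketch {
    // Hypothetical per-database rendering of "format a timestamp column as
    // text". The function names are the real dialect functions; the format
    // strings are illustrative only.
    static String timestampToText(String db, String col) {
        Map<String, String> tmpl = new HashMap<>();
        tmpl.put("pgsql",  "to_char( %s, 'YYYY-MM-DD HH24:MI:SS' )");
        tmpl.put("oracle", "to_char( %s, 'YYYY-MM-DD HH24:MI:SS' )");
        tmpl.put("mysql",  "date_format( %s, '%%Y-%%m-%%d %%H:%%i:%%s' )");
        return String.format(tmpl.get(db), col);
    }

    public static void main(String[] args) {
        System.out.println(timestampToText("mysql", "created_at"));
        System.out.println(timestampToText("pgsql", "created_at"));
    }
}
```

The bug described above was exactly this kind of mismatch: the MySql template had picked up the date_format-style format string but kept the to_char function name.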
With any luck this version will pass the database test suite. One can always hope.
The MySql 5.5 code now clean compiles and is ready to begin testing.
I'll be starting by working through the registration of the JDBC driver and establishing a connection to the database in client-server mode.
I thought I had it with the last update. Maybe I've got it this time.
There were a couple of minor oversights in the rule base from copying over some DB/2 LUW 10.1 functionality for the global id generators that replace the sequence generators used by PostgreSQL and Oracle.
This code should be ready to test, except that I haven't yet looked into the actual use of the MySQL Java JDBC connector/driver. For some bizarre reason, each database seems to take a slightly different approach to registering a driver for use.
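The classic registration idiom differs mostly in which driver class gets loaded; a sketch using the commonly used driver class names for these databases (the surrounding scaffolding is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class DriverRegistrySketch {
    // Commonly used JDBC driver class names for the supported databases.
    static final Map<String, String> DRIVER_CLASS = new HashMap<>();
    static {
        DRIVER_CLASS.put("pgsql",  "org.postgresql.Driver");
        DRIVER_CLASS.put("mysql",  "com.mysql.jdbc.Driver");
        DRIVER_CLASS.put("oracle", "oracle.jdbc.OracleDriver");
        DRIVER_CLASS.put("db2luw", "com.ibm.db2.jcc.DB2Driver");
    }

    // The classic idiom: loading the class registers the driver with
    // DriverManager as a side effect of its static initializer.
    static void register(String db) throws ClassNotFoundException {
        Class.forName(DRIVER_CLASS.get(db));
    }

    public static void main(String[] args) {
        System.out.println("MySql driver class: " + DRIVER_CLASS.get("mysql"));
    }
}
```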
The PostgreSQL template code has been modified to use the MySql 5.5 syntax for date/time conversions. Fortunately the formatting style I'd used for PostgreSQL is compatible with MySql, so I didn't need to tweak any of the formatters or parsers in the Java schema code.
If the resulting code clean compiles, I'll be testing MySql for compliance with the database portability tests in CFDbTest20 tonight.
The MySql database creation scripts now work for the database test suite.
The MySql 5.5 code template is now complete, including the SAX Loader mainline for MySql.
Note that MySql had to be forked from PostgreSQL at this time because MySql does not support the rich stored procedure features this code relies on. As a result, this is the most advanced version of the PostgreSQL code that can be ported to MySql easily.
DB/2 LUW 10.1, Oracle 11gR2, and Sybase ASE 15.5 all support rich stored procedures that can be used to implement similar code to that which has been done for PostgreSQL.
There is obviously going to be a fair bit of work migrating from the PostgreSQL template to proper MySql 5.5 support, but here's the initial baseline code, including the database creation scripts.
All completely untested, of course. :D
The initial set of stored procedures for the PostgreSQL performance tuning have been completed with the addition and enabling of the "delete" procedures.
The performance gains from precompiled SQL and cached query execution plans that comes with stored procedures will net some gain for "traditional" non-object-hierarchy application models, but for object hierarchies, the use of the stored procedures will bring the database-request cost down to 1 for each create, update, or delete of objects which do not involve BLOBs.
For objects which involve BLOBs, the create and update procedures cannot be manufactured properly because PostgreSQL does not support BLOB procedure arguments. This bit of cleanup has not been implemented yet, so there *are* stored procedures manufactured by this ruleset which are not valid and which are guaranteed not to compile into the database model.
The full suite of read_table() (by pkey), read_table_all(), and read_table_by_suffix() stored procedures have been created for all inherited indexes which specify only mandatory attributes (i.e. "not null.")
The read procedures restrict by ClassCode for inheritance trees, as they are intended to replace the readbuff fragments of the JDBC code in the current PostgreSQL implementation.
The create and update stored procedures for PostgreSQL look correct to me. I haven't tried loading them into the database yet. I'm in mid coding binge and will test later.
There were some missing join clauses for the final query select, as the original version of the stored procs was a crude single-table create/update whereas the current version does a wide join of the entire table hierarchy of data attributes.
This code should bring the create and update cost down to a single database network IO per operation, regardless of the depth of the class hierarchy, provided that there are no BLOBs in the hierarchy.
Dealing with BLOBs is not going to be fun.
I've decided that I hate BLOBs.
And I wouldn't be at all surprised if I end up hating TEXT for the same reason with some database other than PostgreSQL.
The PostgreSQL "create" stored procedures have been coded and look correct. Next I'll need to actually make sure they load into the database ok.
I will not be implementing a check on the ClassCode after giving it much thought. You can't pass BLOBs to stored procedures, so when BLOBs are in the class hierarchy of an object, it has to invoke the stored procedures of the narrowest class specification it can, then resort to direct SQL for the BLOB-containing table and any subclasses of it. That means not only the object's ClassCode, but those of its subclasses, can be passed to the stored procs.
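Put another way, the create path is split at the first BLOB-bearing table in the base-to-leaf chain; a rough sketch with hypothetical names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BlobSplitSketch {
    // Returns the base-to-leaf prefix of the chain that can be handled by a
    // single stored procedure call; everything from the first BLOB-bearing
    // table onward has to fall back to direct precompiled SQL.
    static List<String> storedProcPrefix(List<String> chain, List<Boolean> hasBlob) {
        List<String> prefix = new ArrayList<>();
        for (int i = 0; i < chain.size(); i++) {
            if (hasBlob.get(i)) {
                break; // stored procedures can't take BLOB arguments
            }
            prefix.add(chain.get(i));
        }
        return prefix;
    }

    public static void main(String[] args) {
        List<String> chain = Arrays.asList("base", "mid", "leaf");
        List<Boolean> hasBlob = Arrays.asList(false, true, false);
        System.out.println(storedProcPrefix(chain, hasBlob)); // only "base"
    }
}
```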
The stored proc id generator manufacturing has been re-enabled.
The relationships between dispenser tables and their id generation tables have been corrected to specify ON DELETE CASCADE.
The stored procedure for creating a table is actually a multi-table insert from the base table on through to the specified subclass table.
The ClassCode must always be provided, even for tables that don't actually persist a class code. This makes it feasible to implement the list of arguments for the stored procedure without having to determine whether this is the first column being specified or not (and thereby whether a comma-newline-double-tab is required).
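The simplification this buys is easiest to see in code: with ClassCode unconditionally first, every remaining argument gets an unconditional separator. The argument naming here is illustrative, not the manufactured convention:

```java
public class ArgListSketch {
    static String procArgList(String[] cols) {
        // ClassCode is always first, so no "is this the first column?" test
        // is needed for the comma-newline-double-tab separators that follow.
        StringBuilder sb = new StringBuilder("argClassCode VARCHAR(8)");
        for (String col : cols) {
            sb.append(",\n\t\targ").append(col);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(procArgList(new String[] { "Id INT8", "Name VARCHAR(64)" }));
    }
}
```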
The net effect of this stored procedure is to reduce the cost of inserting an n-level hierarchy from n inserts to a single stored procedure invocation.
For models which don't use object hierarchies, the net change is zero -- it's one database invocation per object insertion, the same as it used to be.
A very tidy bit of tuning for insert-complex-heavy applications like a database repository of Business Application Models.
Note that it's not done yet, but I uncovered a bug in the BaseModelAtomClass binding of the engine itself (it wasn't properly detecting and reporting UuidGenDef types, so they were showing up as UuidDefs incorrectly.)
I still need to modify the result select to union the inherited tables for class hierarchies.
The DB/2 LUW code now connects successfully to the database and creates the ISO Currency records. However, it turns out that a DB/2 LUW VARCHAR(1) cannot store a cent sign. I need to look into that -- there must be an NLS-enabled VARCHAR type I can use in the schema creation scripts that would correct this problem.
The DB/2 database creation scripts for CFDbTest 2.0 now run cleanly, with only expected informational and warning messages.
The DB/2 LUW (Linux-Unix-Windows) 10.1 Java JDBC support and database creation scripts have been refactored from the old UDB naming from 9.7. A quick and dirty implementation of table-record based id generators has been coded, which will work but won't scale or perform as well as tuned code will in the future.
But I think I'll hold off on tuning until I rework PostgreSQL to use stored procedures, and bring that stored procedure support forward to DB/2 LUW.
A quick and dirty "it should work" implementation of id generators for DB/2. In the long run, this will be entirely replaced by something like:
DECLARE NEXT_VAL INTEGER;
SELECT NVL( MAX( IDCOL ) + 1, 1 ) INTO NEXT_VAL FROM SOMETABLE;
INSERT INTO SOMETABLE( inherited-keys, IDCOL, data-cols )
VALUES( ?, ?, ..., NEXT_VAL, ?, ?, ... );
The DB/2 10.1 database creation scripts for CFDbTest 2.0 now run. However, DB/2 is complaining about some of the relationships, and some of the indexes are too wide for DB/2's limitations. I don't think there is any way to work around those restrictions other than limiting the value sizes, which I am loath to do.
However, I probably will trim the length of the email column so that it can be indexed properly. I'm not sure what to do about the X.509 certificate index, though. Maybe I'll have to index by a hash of the X.509 instead of the X.509 itself.
The Oracle schema creation scripts kind of sort of work now, but as I've lost my Oracle Linux partition to a bad update from Oracle themselves (either the latest kernel or the libc updates they pushed), I'll have to shelve any further work on Oracle for now.
The prepared statement support has been propagated to the Oracle and DB/2 UDB interfaces.
Next up: I tackle the killer rabbit of Oracle (which I have roughly 20 years experience with.)
The CFDbTest 2.0 PostgreSQL test suite runs successfully with the timezone-aware code.
Tidy up serverTimeZone with a getServerTimeZone() API so I can bury the resolution of the PostgreSQL server timezone below the transaction.
Rework the way [TZ]Date/Time/Timestamp are unpacked so that optional columns check if the fetch was null before trying to unmarshall the value.
Improve error reporting when date-time values are in an incorrect format to help diagnose an apparent SQL problem.
I've decided not to make the basic Date/Time/Timezone columns fully timezone aware. When persisting them, their timezone is not adjusted, and the raw attributes are applied. When reading them, the server timezone is applied so that the values are consistent, but that's it. No timezone adjustments are made when reading/writing the basic date-time types.
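What "the server timezone is applied" amounts to on the read side can be sketched as follows; the America/Chicago zone is just an example, not what the code actually resolves:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class ServerTzReadSketch {
    // Interpret a zone-less timestamp string from the database in the
    // server's timezone, then render the resulting instant in GMT.
    static String readAsGmt(String raw, TimeZone serverTz) throws Exception {
        SimpleDateFormat in = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        in.setTimeZone(serverTz);
        Date instant = in.parse(raw);
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        out.setTimeZone(TimeZone.getTimeZone("GMT"));
        return out.format(instant);
    }

    public static void main(String[] args) throws Exception {
        // Chicago is UTC-5 in June (CDT), so noon server time is 17:00 GMT
        TimeZone serverTz = TimeZone.getTimeZone("America/Chicago");
        System.out.println(readAsGmt("2013-06-01 12:00:00", serverTz));
    }
}
```

No adjustment happens on the write side; the raw attribute values are applied as-is, so the server zone only enters the picture when interpreting what comes back.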
Ready for final test of the manufactured code. There were a lot of places where I forgot to adapt the PostgreSQL code, and a bug in CFLib.
The Min/Max/Init constants for [TZ]Date/Time/Timestamp columns are now initialized to GMT values.
For this reason, I suggest that you not apply Min/Max constraints to Time or TZTime attributes. What you really want for Time is a comparison that does a To-From range with a wraparound at 24:00 back to 00:00. I have no idea how to express such a constraint in database terms so I just won't go there.
So min-max constraints on [TZ]Time columns are verboten. The code will *not* work as you expect if you try to specify such constraints.
Next up: I need to modify the way PostgreSQL specifies and maps the date-time columns and rework to allow for the timezone adjustments that are being done in the Java code. Specifically, Date, Time, and Timestamp will now all have to be stored as Timestamp values, defaulting to the Server timezone when being read from the database. I can't use DATE any more, because it doesn't capture the time component of a UTC-aware date, and all the Java code expects the Calendars to be timezone-compliant.
The PostgreSQL date/time/timestamp string conversion functions and the XML parser functions are both timezone-aware now.
The Init/Min/Max values need to be reworked to always use the GMT/+0000 timezone instead of the local runtime timezone as they do now. (What can I say -- it's a work in progress...)
The XML TZ parser functions have been updated to make proper use of the specified timezone when constructing a Calendar.
The PostgreSQL SAX Loader runs all the database I/O tests successfully, including when rerun against a populated database, requiring a merge of information (either Update or Replace.)
Interestingly enough, the Ram "database" does not pass the Replace test, due to a delete bug. I suspect the problem is actually to do with the realize/forget code not populating the primary key index properly if it's not already initialized.
Just a hunch...
In the meantime, you can now create/update/delete/read to your heart's content with PostgreSQL. :D :D :D
Before I go beta, I'll want to finish fixing the Ram database and implement cascading deletes in both the Ram and PostgreSQL implementations so that sub-objects are properly cleaned out of the cache when their container/master/owner/parent is deleted.
There are still issues with stale object reference exceptions being thrown by the delete/replace test. Once that's fixed, PostgreSQL will be passing all the tests that the Ram "database" already passed.
The TZ types are now partially supported so CFDbTest 2.0 can load its data. However, I want to modify the XML parser at some point so that it calculates the offset between local time and the specified TZ and adjusts the data while it's reading it (i.e. at least do the adjustments beforehand.)
I'll also want to switch all the TZ types to datetime values in the database, and always persist and read them using full date-time format values. This is required because to apply meaningful TZ data to a date, you have to store the time as well. And time needs the date, because once TZ adjustments are made, a given time can shift across a day boundary, requiring a date field.
This approach won't be quite to the point of treating everything as UTC, which is where I want to go in the long run.
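The need for a date field alongside TZ-adjusted times can be demonstrated concretely (the dates here are arbitrary): a TZ-qualified time near midnight lands on a different calendar day once normalized, so a bare time column can't carry the adjustment.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TzTimeSketch {
    // Normalize a TZ-qualified date-time string to UTC.
    static String toUtc(String withOffset) throws Exception {
        SimpleDateFormat in = new SimpleDateFormat("yyyy-MM-dd HH:mmZ");
        Date instant = in.parse(withOffset);
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        out.setTimeZone(TimeZone.getTimeZone("GMT"));
        return out.format(instant);
    }

    public static void main(String[] args) throws Exception {
        // 01:00 at UTC+05:00 is 20:00 UTC on the *previous* day
        System.out.println(toUtc("2013-06-15 01:00+0500"));
    }
}
```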
Blobs still don't work.
The CFTypes 2.0 test scripts were successfully executed with the expected responses by the PostgreSQL persistence layer. Inspection of the database would seem to show that data was successfully loaded and updated in a few cases. No deletes have been exercised yet, as the test framework does not explicitly force the loader to exercise deletes yet.
I'll need to add that test case next, then verify that deletes work for at least one of the tables, presuming they work for all tables if they work for one.
Then I'll have to revisit the TZ types and Blobs in the CFDbTest code before I can release a beta. I'd rather have broken or invalid TZ support than have the runtime blow up if TZ types are used for PostgreSQL.
As long as you aren't using BLOB or TZ data types, the current PostgreSQL implementation should work for your data model, provided it conforms to the general pattern of a CFDbTest model, give or take an inheritance hierarchy. (Data merging breaks down for hierarchies if dot-naming references are to be expected, but as long as you're dealing with contextual names you're ok. You'll have to implement dot-naming functionality in the BL layer manually.)
The basic templates for inserts and updates have been exercised; deletes are only trivial deletes at this point, not cascading as they need to be. Part of the reason for that is I intend to implement Chains at the same time, and that's going to be a multi-day exercise so I'm submitting this release as-is for now.
I need to make another pass at the TZ data types because they're causing 'SQL Exception ERROR: "TZ"/"tz" format patterns are not supported in to_date' errors when I try to persist data for inserts and updates of the full range of data types. None of the data type checks have really been fully validated yet, because the TZ problems are preventing the final persistence of the instances.
The specification of old and updated revision values was reversed in the manufactured code for PostgreSQL JDBC updates using prepared statements. I've tested one case by manually making the change and reviewing the code in the debugger, and the fix I've made to the rules will correct the problem with the updates.
The delete code was already using the correct revision values in its SQL bindings, so it should execute ok as-is.
There are new getQuoted[TZ][Date/Time/Timestamp] methods in the schema which are used for dynamic SQL, and the get[TZ][Date/Time/Timestamp] methods have been repurposed to produce unquoted strings for use in prepared statement implementations (where a string binding is required instead of embedding a quoted string in the dynamic SQL.) This should resolve the date format problems for the Date/Time/Timestamp implementations. Note that for now the TZ variants are hardcoded to use CST (my timezone.) More details to work out at some point in the future.
I realized there is no predicting how many pops are needed because the entire context stack of a new element is pushed when it's referenced, so I switched over to the verbose form of the code with a different form of "popto IdXXGenDef" for each one.
The CFDbTest 2.0 support for PostgreSQL sub-object id generation now clean compiles.
I've encountered a bug where I have null GenDef references in a popped context, which should not be possible at all.
I've added a workaround for now that reports the problem and recovers from it instead of throwing a NullPointerException as had been happening, but it may be some time before I find and fix the root cause of this problem with the pop directives.
With the pop directive working as planned, this version should produce a clean-compiling version of CFDbTest 2.0.
The pop directive now works as planned.
Switch all the "pop pop pop pop" directives to "pop 4".
Rework pop directive as taking number of levels to pop as an argument instead of a goal context name. A bit of a hack, but at least it's a "clean" hack.
The new "pop" directive is being used for the sub-object id generation. Actually what I ended up doing is "pop pop pop pop" instead of "pop popto Id16GenDef" because the latter would mean duplicating rule variations for Id32 and Id64 gen defs as well, which is far uglier than "pop pop pop pop" is.
While working on the sub-object ids for PostgreSQL, I realized I have a case where I need a pop directive that is always executed so I can do:
$pop popto Id16GenDef Name$
while binding the primary key attributes of the dispenser table. There is a very good chance the columns of the primary key are themselves defined as id generators, so it's necessary to ensure they aren't considered in the code as I envision it.
It didn't take much testing of the PgSql SAX loader to realize I needed to add in the establishment and teardown of a server connection and an enclosing transaction for the loaded data.
I'm currently addressing a naming conflict for id generators between the database layers and the manufactured JDBC code.
That will let me test an initial insert and a read of the Cluster, but after that I run into an issue -- I haven't finished coding the JDBC for sub-ids that are used to build concatenated keys.
An InitValue has been specified for the UuidGenDef test case in the model, used by OptMinMaxValue instances.
The construction of Gregorian Calendars for [TZ]Time were specifying a date of 0000-00-00, not the valid zero date of 0001-01-01.
Revert the use of else clauses for the manufactured Min/Max and null checking done by the manufactured setters. Because the code is shared between intrinsic and object setters, not all code templates check for a null value (intrinsics are not nullable arguments). As a result, there isn't always an opening "if" block as there is with the optional attribute setters, which apply a null value as a special case before any instance checking and validation is done.
Based on the most recent code I've checked in Eclipse that was manufactured by the rules, I think it's time to sit back for an hour or so and let CFDbTest 2.0 finish manufacturing so I can check whether it clean compiles. It should, I think.
But I've been wrong many times before. Ever the eternal optimist a programmer must be when squishing bugs and completing features like object initialization.
Note that if an attribute is nullable, it should not specify an InitValue attribute in its definition.
If an attribute is required, it should always specify an InitValue attribute in its definition.
It would seem the remaining invalid failures and NullPointer exceptions are being caused by the fact that I haven't implemented the initialization of Date/Time/Timestamp properly yet. Seeing as I got the template of how to do so done with the Min/Max values, it really shouldn't take too long to add the initializers.
I expect to be done today or tomorrow and back to running manufactured code.
CFDbTest 2.0 was failing its test cases. I accidentally deleted one too many "else" limbs, so when setting a null string value it was *always* doing the MaxLen check, which shouldn't be done when setting a null value as it causes a NullPointerException.
There were some rule syntax errors (extraneous dollar tags) that were causing exceptions when doing the GEL compile of the Date/Time/Timestamp range checking. This has been corrected. The code now looks the way it should, so it's time to wait and do some testing to make sure it works.
It was also necessary to stripLeadingZeroes from the year values because when dealing with values under 1000, leading zeroes are produced that would have forced the Java compiler to interpret the constants as invalid octal values.
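The octal hazard is real: in Java source, an integer literal like 0099 is rejected because a leading zero marks an octal literal and 9 is not an octal digit. A minimal version of the stripping, assuming it only has to handle unsigned digit strings:

```java
public class StripZeroSketch {
    static String stripLeadingZeroes(String digits) {
        int i = 0;
        // keep at least one character so "0" stays "0"
        while (i < digits.length() - 1 && digits.charAt(i) == '0') {
            i++;
        }
        return digits.substring(i);
    }

    public static void main(String[] args) {
        System.out.println(stripLeadingZeroes("0099")); // 99
        System.out.println(stripLeadingZeroes("0"));    // 0
    }
}
```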
Because there isn't a native range limit enforced for the UInt and Number types, it's necessary to ensure that there are always Min/Max values produced and checked to prevent illegal data from getting into the buffers.
There were some remaining issues with the optional Date/Time/Timestamp values *always* checking their ranges instead of only when Min/Max values were declared.
There was one remaining problem where the optional Date/Time/Timestamp values weren't properly checking for a null value before doing the Min/Max checks, resulting in extraneous "else" limbs in the code. This has been corrected.
The code should be clean now, though I won't know for sure for another hour or so.
Another error with some extraneous "else" clauses has been corrected in the new MinMax code. Max checks were only compiling properly if preceded by a Min check, and that shouldn't be a requirement.
There was one more typo of a "lowercase" directive in the rules that needed to be an "upper" instead (there is no "lowercase", though "lower" would still have produced incorrect code.)
There were some syntax errors in the Min/Max code due to typos such as "MAXUTE" due to global replaces of MIN with MAX, and also due to floating type conversions that needed an explicit typecast on initialization.
The formatters are now working. It turns out the last few exceptions were caused by invalid format specifiers in the rules, not a problem in the code itself.
The formatter runtime exceptions were caused by me missing a couple pieces of the business logic layer when manually adding the code for the GelFormat objects.
Formatter-using rules have been added and the way that Min/Max values are enforced has been completely reworked with new MIN/MAX constants added to the *Buff objects.
In order to support the month 09 and the day 09, it's necessary to be able to $stripLeadingZeroes format MaxValue %1\$M$ so that you can avoid the resulting string being interpreted as octal by the Java compiler in the manufactured code.
The getObjectValue() overloads for non-string attribute bindings have been implemented.
Everything is in place to begin testing formatters and using them to initialize Min/Max values for dates, times, and timestamps.
Rather than implement a separate Format binding, the basic Bind objects of the engine have been extended to provide access to the values to be formatted via a new getValueObject() method. The GEL expansions for a GelFormat specification will take the form:
$format BindingName format-specification-string with whitespace allowed$
The BindingName is resolved in the current context to a Bind object, whose getValueObject() method is invoked. This object is then passed as the sole value argument to a Java formatter call, along with the format specification string.
Fortunately the GEL compiler already supported the use of backslashes to escape dollar signs in a macro, because that's needed for Java format specifications, though there is only one argument passed to the formatter in all cases.
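Reduced to its essentials, the expansion behaves like a single-argument String.format call; expandFormat here is a hypothetical stand-in, not the engine's API:

```java
public class GelFormatSketch {
    // One value object from the resolved binding, one format specification.
    static String expandFormat(Object valueObject, String spec) {
        return String.format(spec, valueObject);
    }

    public static void main(String[] args) {
        // e.g. zero-padding a year value for a Min/Max date constant
        System.out.println(expandFormat(99, "%1$04d")); // 0099
    }
}
```

This is also why the dollar-sign escaping matters: a Java format specification like %1$04d contains a literal dollar sign that would otherwise terminate the GEL macro.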
The Min/Max values of a number are implied by the digits and precision if no explicit values are specified.
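A sketch of that implication, assuming "digits" means the total digit count and "precision" the digits after the decimal point (my reading, labeled as an assumption):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ImpliedRangeSketch {
    // Implied maximum for a number with the given total digits and precision:
    // e.g. digits=5, precision=2 implies 999.99; the minimum is its negation.
    static BigDecimal impliedMax(int digits, int precision) {
        BigInteger unscaled = BigInteger.TEN.pow(digits).subtract(BigInteger.ONE);
        return new BigDecimal(unscaled, precision);
    }

    public static void main(String[] args) {
        System.out.println(impliedMax(5, 2));          // 999.99
        System.out.println(impliedMax(5, 2).negate()); // -999.99
    }
}
```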
Unfortunately it seems the reason I don't have proper range checking in place for Date/Time/Timestamp values is that I need to implement the Formatter verbs for the engine first, and I had let that slide. For now I'll just ignore the fact that the range checking isn't implemented for those types and move on rather than letting myself get distracted from the boring job of writing test data.
The code has all been recompiled with the latest OpenJDK 1.6 to see if the performance has improved at all compared to the disastrous last update that hit the project with a 1400 percent performance penalty.
Rather than risk incompatibilities with the RHEL code base, I've reverted to Xerces 2.9.0 as shipped with Oracle Linux.
I've spent several days trying to figure out how to get validating parsers working with a pre-loaded XSD, but to no avail. I'm going to give up on that for now so I can get back to the core task of exercising the database APIs using the SAX Loaders.
In reality, the validation is completely unnecessary as I validate all the parsed data, checking sizes, optionality, and the structure of the document as I parse it. Trying to get validating parsers working was just a personal "I want it" feature.
I've been working on implementing proper caching of the grammars. There are changes to CFLib and CFCore at this time, so a full rebuild will be required. I won't be checking in this batch of changes until I've at least run it under Eclipse to see what it does.
I'm having difficulty correlating the online documentation for Xerces with the libraries I'm using. I'm going to try switching over to the Ubuntu 12.04 delivered version of Xerces.
It would seem the techniques for doing so have changed a lot since I first wrote that code for MSS Code Factory. I have a fair bit of rework to test before I can check anything in. I'm impressed that the SAX Parser Factory approach works at all, as it would seem that style of code is only useful for non-validating parsers. You have to explicitly construct the SAXParser instance with a configuration object parameter in order to get cached schemas working.
On the bright side, I've reworked a sample of the code for loading a schema definition such that it uses the new code style. I just need to figure out how to implement an XMLGrammarPool implementation for CFLibXmlCoreSaxParser.getGrammarPool() to manage.
The engine now uses validating parsers for loading its configurations. For some reason they're resolving against the URLs instead of using the explicitly located local resources. Perhaps the way those APIs work has changed since I first wrote the code.
Nonetheless, I've got validating parsers. Now I can focus on resolving the issues with CFDbTest 2.0.
I now have the validating parsers working for the RuleSet and ToolSet parsers, but not the RuleCartridge parsers. For some reason, the rule cartridges aren't resolving their XSD properly. I need to look into this further -- it could be something as simple as a typo in either the code or the rule cartridges themselves.
I am, however, most pleased to see the RuleSets loading cleanly with the validation. It's worth noting that the load time is visibly higher with validation enabled.
That latest change has caused the MSS Code Factory itself to start issuing the error/warning messages that DbTest does at the command line. Now that I can reproduce it in Eclipse, I know that I have a valid problem.
It seems there is an alternate syntax for specifying the attribute column types besides the one I'm using right now (global names within the namespace.) I'm not enthused at the idea of rewriting what are, according to the XML validation tools, perfectly valid schema specifications.
Plus there is still the issue that I use this style in the parsers for MSS Code Factory itself without issue, so I know it has to be some niggling little difference between the specifications. Maybe the use of a uri: instead of an http: URL; I don't know. I'm guessing.
In the meantime, the interpretation of an empty but specified BLOB as being an empty byte array instead of a null has been corrected.
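The corrected distinction can be sketched in a few lines. This is a minimal illustration with a hypothetical parseBlobAttr() helper, not the actual manufactured unmarshalling code:

```java
// Sketch of the corrected BLOB rule: an absent attribute maps to null, while
// an attribute that is present but empty maps to a zero-length byte array.
import java.util.Base64;

public class BlobAttr {
    public static byte[] parseBlobAttr(String attrValue) {
        if (attrValue == null) {
            return null;            // attribute not specified at all
        }
        if (attrValue.isEmpty()) {
            return new byte[0];     // specified but empty => empty array, not null
        }
        return Base64.getDecoder().decode(attrValue);
    }
}
```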
The models were not properly referencing the ISOTimezone object from the Contact object.
The XSD specification has been tweaked slightly in hopes of correcting the errors being reported at runtime. Clearly there is some niggling difference between the relationship of the XSD, Xerces, and the parser for the MSS Code Factory implementation itself and the one being manufactured for the SAX Loaders. I just need to comb through the code and figure out where the difference is. I decided to start with the XSDs.
Each of the loaders needs to reference a separate configuration file for the appropriate database. They have been imaginatively named .$schema$db2udbrc, .$schema$oraclerc, and .$schema$pgsqlrc in the home directory of the user running the loaders.
There were some TODOs around the XML unmarshalling of Base64 data to be coded. As far as I'm aware, all of the critical TODOs from the main code base are addressed with this release.
There was also a bit of indentation cleanup for the SAX Loader parsers.
The named lookup support has been repaired and tested for at least one case. Provided the definitions in the model are sane, it should work for all named lookups.
DB/2 UDB support is going to be fun.
DB/2 likes to use something like:
SELECT MAX(ID) + 1
INSERT INTO Schema.SomeTable ( Id, Columns )
VALUES( NextId, ?, ?, ?... );
I forget the details of the syntax. I'll have to check the online manuals.
I'm going to extend the concept to the sub-object id's (concatenated keys) like this:
SELECT MAX(SUBID) + 1
WHERE InhId = ?;
INSERT INTO Schema.SomeTable( InhId, SubId, Columns )
VALUES( ?, NextId, ?, ?, ?... );
Except, of course, the rules have to deal with an n-part inherited key, not just a singleton InhId, so it can turn into some pretty complex SQL and binding code.
When using the SELECT MAX() + 1 approach, DB/2 likes that particular index to be the primary key, sorted in descending order. This lets it get the MAX value in a single database page probe for the end of the data page list.
Think of it as the DB/2 approach to an IDENTITY column, but with inherited key potential as I've shown (I've never deployed the inherited key concept before, but it'll work, of that I'm certain. It just might not perform as well as the SELECT Max() without an inherited key. At least it'll still be able to rely on index page values after a BTree search for the inherited key.)
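The two SQL shapes sketched above can be captured as simple statement builders. This is a hedged sketch with illustrative names ("Cols" stands in for the real column list), and the exact DB/2 ATOMIC statement wrapping is omitted because, as noted, it still needs checking against the manuals:

```java
// Sketch only: "Cols" is a placeholder column list; the real manufactured
// code would emit the actual columns and the DB/2 ATOMIC wrapper.
public class Db2IdSql {

    // Top-level id: MAX(Id)+1, which DB/2 can satisfy with a single page
    // probe when Id is the primary key sorted descending.
    public static String nextIdInsert(String qualTable, int nDataCols) {
        return "INSERT INTO " + qualTable + " ( Id, Cols ) VALUES ( "
             + "( SELECT COALESCE( MAX( Id ), 0 ) + 1 FROM " + qualTable + " )"
             + binds(nDataCols) + " )";
    }

    // Sub-object id under an inherited key: restrict the MAX() probe to the
    // parent key so each parent gets its own sub-id sequence.
    public static String nextSubIdInsert(String qualTable, int nDataCols) {
        return "INSERT INTO " + qualTable + " ( InhId, SubId, Cols ) VALUES ( ?, "
             + "( SELECT COALESCE( MAX( SubId ), 0 ) + 1 FROM " + qualTable
             + " WHERE InhId = ? )"
             + binds(nDataCols) + " )";
    }

    private static String binds(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(", ?");
        }
        return sb.toString();
    }
}
```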
One advantage of my tool taking so long to run is it gives me a good hour of think-time to plan out what I want to do next with the code.
Sybase support will be largely based on DB/2 UDB, except that Id's which don't specify an inherited key will be implemented as SEQUENCE columns.
For Sybase, the PostgreSQL approach of a sub-id allocator table will be used instead of the DB/2 SELECT Max() approach because DB/2's approach only works if the database supports the ATOMIC syntax that DB/2 UDB has (and possibly DB/2 MF and DB/2 400.)
With the PostgreSQL approach to inherited id's, the records of the sub-object id generation tables act as semaphores, allowing only one database thread at a time to perform an insertion. This is not a significant concern, because the target model is a distributed request-response architecture in which requests are processed quickly without waiting for further user input, so lock-contention time is minimized.
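The allocator-table idea can be sketched with hypothetical table and column names. The point is that the UPDATE takes a row lock on the parent key's allocator record, which is what serializes concurrent inserters until the transaction commits:

```java
// Hypothetical allocator-table SQL; SomeSchema.SomeTableIdGen and its columns
// are illustrative names, not the manufactured schema.
public class SubIdAllocatorSql {
    // The row lock taken by this UPDATE is the "semaphore" for the parent key.
    public static final String LOCK_AND_BUMP =
        "UPDATE SomeSchema.SomeTableIdGen SET NextSubId = NextSubId + 1 WHERE InhId = ?";
    // Runs after the UPDATE in the same transaction, so it reads the value
    // this transaction just reserved.
    public static final String READ_ALLOCATED =
        "SELECT NextSubId FROM SomeSchema.SomeTableIdGen WHERE InhId = ?";
}
```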
The code has been tweaked at the fundamental levels to deal with Enum-keyed table definitions properly. I hadn't really considered this requirement when originally defining the behaviour of the Buff and PKey objects.
The build has also been updated to use the latest versions of the org.apache jar files from the Eclipse Indigo installation.
The build has been refreshed using the org.apache jars from the Eclipse Indigo installation, as they're newer than the default system jars under Oracle Linux.
It's supposed to be xsd:NMTOKEN[s], not xsd:nmtoken[s].
There were some typos in the new rules. Those have been corrected.
I hate hacks. So I implemented a proper interjection class to join between the document and the top level elements of the schema, and named it "SaxDoc", appropriately enough.
The parser still doesn't run in command-line mode, only under Eclipse. An initialization race condition, perhaps? Something that isn't going to occur in a reliable order from one JVM to the next?
Properly implemented, there should be another level of parser code generated, but for now I just recursively define the Schema node as being the root document itself. I don't like it, but I need to get on with testing the databases.
You should be able to successfully run CFDbTestRunAllTests now.
An initial migration of the PostgreSQL JDBC implementation as the base of IBM DB/2 UDB 10.1 support has been completed. Once again I need to check and update the date/time/timestamp to/from string conversion code to map in UDB syntax instead of PostgreSQL syntax. Unlike Oracle, I'm quite certain there are differences.
In other words, the code will not work, though it should clean compile.
The IBM DB/2 UDB 9.7.2 Community Edition rules have been updated and refreshed to test against the IBM DB/2 UDB 10.1 Express-C edition.
The Oracle framework with an implementation of an Oracle-specific CFDbTestSaxOracleLoader20 execution script is complete.
Oracle will allow for a full implementation of TZDate/TZTime/TZTimestamp, but that hasn't been done yet. Conveniently enough, PostgreSQL date-time-timestamp to/from string conversions use the same syntax as Oracle. Oracle just has an additional TO_TIMESTAMP_TZ function that will let me provide the extra timezone support.
The Oracle rules have been copy-paste-edited from the PostgreSQL rule base. This code will not work because I'm pretty sure Oracle uses a different syntax for dates.
Pretty much every database has its own syntax for specifying date-string conversions.
The bug with loading the default rules from the distribution jar file has been fixed. I found it about a week ago, but hadn't tested the fix until now.
The exceptions thrown during SAX Loader parser processing are now wrapped with location information and rethrown so that the user has some idea where the source of the problem is located.
The 2.0 code base has been refactored back to a net.sourceforge hierarchy and CF naming instead of ca.singularityone with S1 naming.
The CFCore20, CFSme20, CFDbTest20, and CFBam20 projects are refactored versions of the S1* projects.
Note that the parse objects have been purged from the BAM. I've decided to go with a simpler and cruder solution to dealing with the custom method support I want to add. Specifically, there will be <JavaObjPreamble> and <JavaEditPreamble> tags to support specification of variables in the Obj and EditObj specifications. The BL layer will go away entirely, as you should be able to specify everything as a custom method.
The <JavaObjMethod> and <JavaEditMethod> objects will be used to parse out the specifications for the custom methods themselves. For now, all methods must be public so that the signature string can be automatically produced in the interface and used to structure the method entry in the implementation.
I suspected that the later in the classpath this particular class is found, the slower the logging code runs. I can't imagine why this would be the case with cached jars, but there you have it.
When I tested that theory, the performance did not improve.
I went through the Subversion logs for CFLib, CFCore, and MSSCodeFactory, and found nothing of significance. Seeing as moving the log object back to CFCore did not improve performance, that wasn't some sort of weird glitch or bug that I'd unearthed.
It would seem that some "bug fix" for the Java core jars or the Linux system NLS string libraries has resulted in a serious performance hit for my code, regardless of deployment packaging. That sucks -- a 1500% runtime hit! Ouch!
I've finished moving over the builds for MSS Code Factory to my shiny new Oracle Linux system stack, and have done a test run.
Performance still sucks. I have a couple of ideas as to what might be going on, but if I'm right about any of them, I'll be disappointed with someone's implementation of introspection. Also, if I'm right, I have workarounds that might not only recover performance, but improve it a bit.
MSS Code Factory 1.11.4321 has been repackaged. There were two copies of the CF jars included by accident, greatly bloating the installer download.
Changed the S1SME license to the BSD 3-Clause License so that it can be used as a template for any other license on a customer's source code, provided that credit is given for the authorship of the template itself.
The message log wrapper has been moved to the LGPL CFLib package so that it can be used by non-GPL SAX Loader parsers manufactured by 1.11.
Since installing the past two weeks of updates on Ubuntu 12.04, I'm finding that the code takes over 10 times as long to run. I've tested with the 188.8.131.52 Oracle JDK as well as the current OpenJDK, and both suffer the same problem. The problem therefore must be with the way the kernel or core libraries of the system are implemented, and they're affecting Java badly. Very badly.
The loaders now run and log messages properly.
They're ready to start using test data to exercise the loaders, so I've started working on the test framework/binary deployment of S1DbTest20.
The PostgreSQL implementation now uses PreparedStatements everywhere that it is feasible to do so. Only the queries whose keys include optional/nullable columns, and the cursor APIs, still use dynamic SQL.
Implemented the PreparedStatements for readAllDerived(), readDerived(), and readAllBuff().
Implementations of readDerivedBySuffix() and readBuffBySuffix() now use PreparedStatements if the index is comprised of mandatory columns, otherwise they use dynamic SQL.
Note that the cursor methods will always use pure dynamic SQL because they need to be able to buffer the SQL statement and re-execute it without having explicit access to the key variables of the query.
The readBuff(), lockBuff(), create(), update(), and delete() methods all use PreparedStatements now. These accessors all are keyed by the primary index, which never has optional columns, so the statements can always be prepared and then have runtime values bound.
There is no package for this release; it's subversion-only.
The update() implementation now uses PreparedStatements.
There is no package for this release; it's subversion-only.
The lockBuff() implementation now uses PreparedStatements as well.
The PostgreSQL implementation of the readBuff() methods now uses PreparedStatements. Any statement which is keyed by the primary key can use a prepared statement, because the attributes of a primary key cannot be nullable.
The PostgreSQL implementation of the create() methods now uses PreparedStatements. This should improve the performance of the SAX Loader substantially while loading data.
Added releasePreparedStatements() to PostgreSQL Table objects. The schema will invoke these methods when it releases a connection.
Subversion release only; distribution package not prepared -- there's too much work to be done yet.
The PreparedStatement attributes of a table are now emitted. Queries which have nullable attributes cannot be done as prepared statements, as you need to be able to switch between "IS NULL" and "= value" syntax on a per-query basis.
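The reason nullable key columns force dynamic SQL fits in a couple of lines. This is an illustrative helper, not the manufactured code: because the predicate shape itself changes per query, no single statement text can be prepared once and simply rebound.

```java
// Sketch: for a nullable key column, the WHERE predicate must be chosen at
// query-build time based on the actual key value.
public class NullableKeySql {
    public static String wherePredicate(String col, Object keyValue) {
        if (keyValue == null) {
            return col + " IS NULL";   // no bind variable possible here
        }
        return col + " = ?";           // bindable; value supplied at execute time
    }
}
```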
AllColumnsRequired will be used to determine whether it's possible to produce static SQL for a read by index. If it is, then support for precompiling the static SQL will also be provided so that the statement buffer can be reused (which I know from experience back in the '80s can provide an 80% performance improvement for DB/2 UDB.)
The insert and update statements can always use the parameterized form with precompiled statements, because primary keys can not incorporate nullable columns (because in the perverse world of SQL, NULL != NULL which means you can have multiple NULL values in the key set, which violates the uniqueness requirement.)
The loading of an enumerated type lookup requires that the id of the enum be specified in the loaded data.
The schema and the parser now expect those attributes for tables that are keyed by constant enum definitions (i.e. where GenerateId="false" for a single primary key index column which is a TableCol referencing an EnumDef.)
I've also eliminated the partial support for specifying the primary key attributes of tables whose keys aren't named "Id". The only time primary key attributes can be specified now is for the enum lookups.
I've also realized I've been coding to the unwritten assumption that the primary key of a table is an artificial id or concatenated key of artificial ids. Under no circumstances should a data attribute appear in a primary key if you intend to use the manufactured SAX Loader to populate that particular table. Primary key attributes are completely ignored by the SAX Loader unless they comprise a single column constant enum.
There were still some file headers that had the MSS Code Factory version number in them. These oversights have been corrected.
The new GEL binding Table.PrimaryKeyIsConstEnum returns true if the primary key of a table is comprised of one column that is specified by a SchemaEnumDef and whose values are not generated. This indicates that the table's primary id values have to be specified at insert time, so special case code can be manufactured for such tables, allowing the SAX Loader to be used to initialize the global lookup tables with data.
Too much code isn't actually changing, but gets "touched" when the MSS Code Factory version number changes. I've decided I don't like that, so the version number is no longer included in the Java source code headers.
The XSD was using the $Name$ of the named lookup relations instead of the $Suffix$ as it should in order to match the parser code.
The CLI for the PostgreSQL SAX Loader has been coded and clean compiles, but has not been run yet.
The RAM Loader package has been renamed to SchemaSaxRamLoaderCLI. It, too, clean compiles but has not been run yet.
The core code for evaluating the loader configuration options is in SchemaSaxLoader, and a sample per-database specialization is in SchemaSaxRamLoader.
All the pieces are in place to spin Pg8 and Oracle variations on the loader (SchemaSaxPgsqlLoader and SchemaSaxOracleLoader.)
Note that the RamLoader is in its own project so that Eclipse can partially seal the jar.
Subversion release only; build not packaged.
The CLI skeleton is complete and should properly instantiate a RAM schema to load the data into.
The RAM implementation can't be used to run tests that require the presence of data loaded by a previous test.
Next I need to split out the RAM details into a separate project so Eclipse can partially seal the jar containing the mainline. When I do that, the main() will move out of the CLI skeleton to the RAM main, which will then be copy-pasted for the Pg8, Oracle, etc. mains.
The CLI now includes the code for parsing the first argument to the program, the loader options.
The second argument is expected to be the file name, and then any remaining arguments will be expected to be parsed by the database-specific command line interfaces as a means of specifying database connection arguments. There are, of course, no connection arguments for the default Ram database version used by the SchemaSaxLoaderCLI itself.
The CLI provides the command line instantiation of a parser with a backing store interface. The default CLI uses Ram storage, and then there will be CLI variants for each of the databases.
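The argument layout described above could be sketched like this. The class and field names are illustrative, not the actual manufactured CLI:

```java
// Sketch of the CLI argument layout: args[0] is the loader options token,
// args[1] the file to load, and anything after that is handed off to the
// database-specific CLI for connection arguments.
public class LoaderCliArgs {
    public final String loaderOptions;
    public final String fileName;     // may be null; could default to stdin
    public final String[] dbArgs;

    public LoaderCliArgs(String[] args) {
        if (args.length < 1) {
            throw new IllegalArgumentException(
                "Usage: loader <options> [file [db-args...]]");
        }
        loaderOptions = args[0];
        fileName = (args.length >= 2) ? args[1] : null;
        dbArgs = new String[Math.max(0, args.length - 2)];
        System.arraycopy(args, 2, dbArgs, 0, dbArgs.length);
    }
}
```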
The goal is to have a common set of driver data that gets loaded by the CLIs for the different databases using a common scripting architecture. I believe it's possible for the high level script to use some sort of command line tag to switch between the different runtime CLIs on the path so that one set of driver scripts can exercise multiple databases consistently.
Oops. Kind of useless without constructors, isn't it?
The Structured XSD is produced to match the SAX Loader parser support.
There are a couple of niggling naming conventions that I want to change, possibly including switching over to using the Suffix of a relation instead of its Name for the lookup name attributes. Some of the names are uglier than their suffixes in the model.
The general structure of the XSD should correlate to the current parse of the SAX Loader now.
Now I have all the pieces needed for the next level of the loader parser -- the main that ties together a pre-loaded XSD syntax specification with the documents being loaded by an instance of the SAX Loader parser, which evaluates its arguments for the loader options and the name of the file to be loaded.
I think in Unix convention if no loader file is specified, the program will attempt to load stdin.
The structured XSD attributes have been pruned to match those expected by the SAX Loader parsers.
Next I need to add in the reference name attributes for the lookup relationships.
The XSD now gets manufactured cleanly as I've added support for BLOB/base64Binary types as well as verified the Enums.
However, the attributes of the buffers need to be reworked to incorporate the named lookup attributes supported by the parser, and to remove the Id attributes of those named lookups, especially if they are required attributes.
The BuffType is going to get renamed ObjType as well, because we're dealing with structured objects in this XSD, not RPC buffers and index keys.
The GEL iterator EnumTags is required in the structured XSD rules because we need to be able to list the enum tags as a restricted value list for an xsd:token.
Working on the structured XSD specification for a schema to define the documents parsed by the SAX Loader. Once that's done, I can move up to the next level of coding for the SAX Loaders and have them properly referencing the XML schema specification the way they do for the hand-written parsers in MSS Code Factory.
The rules have been enhanced to produce AlternateIndex support when a LookupIndex isn't specified for a modelled object table.
Added AlternateIndex specifications to SME template and propagated it to all the models so that they now produce proper SAX Loaders for the SME data objects.
Only object tables which specify neither a LookupIndex nor an AlternateIndex will be blindly inserted by the SAX Loaders when I'm done with this change. Although that won't produce a loader capable of dealing with something as complex as the BAM without some serious editing, it will someday.
The GEL bindings "HasAlternateIndex" and "AlternateIndex" have been added. The BAM XSD specification and parser now support the Table.AlternateIndex attribute.
The AlternateIndex is used by the SAX Loader parsers if the LookupIndexId is not specified, on the assumption that the elements of the composite key index will have been set by resolving named references of the object being parsed.
It's time for me to pause and read through the code being produced to see if it makes sense now that I've run out of little ideas that needed to be fleshed out for the code to be "complete" for the SAX Loader.
Note that the loaders produced can only deal with structured hierarchies of data, not recursively named objects or dot-naming hierarchies. Yet. But I do know that is doable, it's just not needed right now so it's being put on the back burner.
With the editBuff approach to the instantiation and rationalization of an object instance by the SAX Loader code, it becomes possible to also implement a more generic AlternateIndex specification for objects, with the caveat that there can only be one alternate index within a branch of the object hierarchy. Unlike relationships, I can't think of any way one could narrow an index.
In particular, the GroupFormMember and similar objects define named references, but they don't have names themselves. If anything, their name might be some annotated concatenation of the components of the alternate index or a hash of those values.
For now I'm shelving the idea of being able to produce a complex dot-name-hierarchy as components of an object, which is needed in order to be able to code an S1Bam SAX Loader. Focusing on that capability would be a diversion from my primary mission of creating a facility for priming and maintaining a business model database, which does not normally require the kind of dot-naming hierarchy that the BAM does.
The core logic for applying the valued edit buffer as an inserted object or by copying it to an edition of an existing instance has been coded, except for fleshing out the details of which aspects of an object to copy.
For example, the update code should not copy the Owner reference, the Container reference, nor attributes of the primary key.
Once that is coded, I think I'll almost be ready to write a test main for this beast. :)
The 1.10 model of 1.11 erroneously specified the Prev/Next relationships referenced by a Chain as narrowing the Scope of the definition. This was incorrect. The Prev/Next links do not affect the scope at all.
There was also a bug in the SAX model parser that incorrectly re-probed the Prev chain relationship instead of the Next relationship. For the sake of 4 characters were so many bugs caused...
Corrected some errors in determining whether a relation participates in a chain or not.
Still, for some perverse reason the Next links are getting emitted as named resolvers while the Prev links are being properly hidden. Something's not right...
On the bright side, there are fewer than 200 errors in S1Bam20 now, and S1DbTest20 clean compiles.
Fixed the 2.0 models so they don't fail to resolve the scoping Tenant any more. I'm not sure why this was being reported as a failure to find a ClusterId type, though. Nothing was referencing a ClusterId that I could see.
S1DbTest 2.0 clean compiles now.
The GEL verb SatisfyWidestLookupColumn is invoked while iterating the index columns of a targeted lookup. The verb does a hidden "poptop Table" to find the table to be probed for matching columns.
The shallowest definition of a required Owner, Container, Master, Parent, or Lookup relation whose DataDef matches the indexed column is probed first. If not found, then the probes are repeated for optional columns.
It is possible for the reference to fail to resolve if the model isn't "clean".
Currently it's failing for a number of the SME template tags and for the SecGroupForm of the SME template. Still, for a first cut at solving the problem it addresses a surprisingly wide set of cases.
In order to implement the named lookups, what I need to do is iterate through the columns of the lookup, then for each one, pop to the tabledef and search for a matching inherited mandatory column type and use that to pass the argument.
If no mandatory column is found, then an optional one will be searched for.
I think I'll call the GEL verb SatisfyWidestLookupColumn for a scope of IndexColumn, on the assumption that you're iterating through the columns of the LookupIndex when invoking this specialized verb. The widest specification is the one closest to the base class, not the first one encountered in the inheritance tree. Thus this will need to be a recursive function that only considers the current table's columns if the superclass could not find a match.
The first pass should only consider columns which are in the Owner relationships of the table hierarchy. Then a second pass which considers columns of the Container relationships should be performed. The required passes should be made before any optional passes are made.
Only if we can't resolve the column using an Owner or Container relationship should we widen the search to general columns.
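The pass ordering described above can be sketched with placeholder types. This is not the real model API; the "widest match" superclass recursion is omitted for brevity, and the relation-kind ordering is assumed to be Owner, then Container, then general columns, with required passes before optional ones:

```java
// Sketch of the SatisfyWidestLookupColumn pass ordering using placeholder
// types; the real implementation walks the model's inheritance tree as well.
import java.util.List;

public class SatisfySketch {
    public static class Col {
        public final String dataDef;   // the column's data definition name
        public final boolean required; // mandatory vs optional column
        public final String relKind;   // "Owner", "Container", or "General"
        public Col(String dataDef, boolean required, String relKind) {
            this.dataDef = dataDef;
            this.required = required;
            this.relKind = relKind;
        }
    }

    // Required passes first, and within each pass probe Owner columns, then
    // Container columns, then general columns.
    public static Col satisfy(List<Col> tableCols, String wantedDataDef) {
        List<String> kindOrder = List.of("Owner", "Container", "General");
        for (boolean required : new boolean[] { true, false }) {
            for (String kind : kindOrder) {
                for (Col c : tableCols) {
                    if (c.required == required
                            && c.relKind.equals(kind)
                            && c.dataDef.equals(wantedDataDef)) {
                        return c;
                    }
                }
            }
        }
        return null;   // possible when the model isn't "clean"
    }
}
```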
This is going to be one complex and expensive evaluation.
It would be possible to explicitly model relationships and indexes that would allow the explicit specification of which index set of columns to use to probe the name, but I really don't want to put that onus on the designer if I can infer the information successfully in most cases.
I'll only add explicit specification of the relationships if I find I absolutely have to do so.
I want to explicitly check for the NullValue of the id attributes before attempting to resolve the reference (i.e. avoid a database probe that I know will fail, because there is never an instance 0 for component objects; the id generators all start with 1.)
This really should be implemented as a fast-return case in the DbIO code.
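The fast-return idea can be sketched as follows. The types and names are hypothetical, standing in for the real DbIO read-by-primary-key code:

```java
// Sketch: if the primary key still holds the "null value" sentinel, skip the
// guaranteed-miss database probe and return null immediately.
import java.util.function.LongFunction;

public class DbIoFastReturn {
    public static final long NULL_ID = 0L;   // id generators start at 1

    // dbProbe stands in for the actual database read.
    public static Object readByPKey(long id, LongFunction<Object> dbProbe) {
        if (id == NULL_ID) {
            return null;   // never an instance 0 for component objects
        }
        return dbProbe.apply(id);
    }
}
```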
I really need to allow an explicit read that bypasses the cache, similar to the way there is always a read for update at the start of a beginEdit(). Maybe I'll call the operation "pin()" or "refresh()".
I want to add a reacquireLocks() method to the object layer, a reversal of the process that compacts the cache prior to JEE serialization, removing all but the edited objects from memory.
The optionality of the relationship is irrelevant.
I just have to do a release for the 4000'th Subversion checkin!
S1DbTest20 clean compiles with this release, though the relationship resolution is just getting started. It's an example of evolving code, not a complete specification of how I want to do things. Ideas come, ideas go, and the code follows the flow of ideas as best I can.
The Owner getter now properly includes the "Owner" part of the relation accessor name.
The naming of the methods for the Owner and Container reference setters include the RelationType in the name.
The editBuff is an instance of the object for the table parser. It gets populated with the parsed attributes and resolved references in a long-winded route to being able to query the database by the named lookup index of the object during the load/merge processing performed by the loader.
In the event that the object doesn't exist, we have a ready-made instance for creating.
In the event that the loader is to replace an object, the existing instance can be deleted, then the editBuff instance is created as its replacement.
If the loader is to update an existing object, the editBuff is applied to an edition of the existing object using the copy() method (which may need to be written -- I forget whether it exists or not.)
Only for insert-only objects is the construction of the editBuff a "waste of time."
There were some missing pieces to the implementation of the HasContainer, HasOwner, ContainerRelation, and OwnerRelation GEL verbs.
I also made use of those verbs in the SAX Loader Parser rules.
Added some notes about how to implement the Owner and Container reference resolutions.
Note that in order for this to work, I need to instantiate and edit a buffer instance so that the cross-propagation of references and keys is applied.
By strictly implementing the order of lookup resolution as Container, Owner, and finally Named Singletons, the buffer object will acquire the various shared ids that are populated by resolving the references.
Trying to replicate the data knowledge that is built into the objects in the loader code would be crazy. Fast and efficient, but definitely a duplication of effort and of maintenance.
Rather than analyzing the attributes of the owner id, I think what I'll do is modify the read by primary key implementations so that they check for the NullValues if specified and do a fast return of null without actually probing the database for the null key.
The Chain specifications were using optional prev/next relation ids, but mandatory relationships. This inconsistency was uncovered while working with the Chain objects for 1.11.
Test runs for the 2.0 models have been executed, but not compiled and validated.
There is also a change to the way relationships are considered for owner/container/named-singleton relationships. Relations which are referenced by a Chain of the table are also included in the set now.
This change should only affect the SAX Loader Parser manufacturing.
There are some issues with the S1Bam20 model which I believe are caused by the fundamental error of the 1.11 engine to consider inherited lookup indexes when determining whether a column affects a singleton relationship which references a named lookup.
The manufactured S1DbTest20 code clean compiles.
There are problems with the S1Bam20 code, however, due to failing to do a "popto TableDef" before emitting some variable references in the rules. That will be corrected later.
The XML SAX Loader now converts the Date, Time, Timestamp, TZDate, TZTime, and TZTimestamp attributes using the CFLibXmlUtil services.
The InvalidArgument exception is used to wrap the runtime exceptions thrown by attempting to parse an invalid Date, Time, Timestamp, TZDate, TZTime, or TZTimestamp when converting an XML format string.
The InvalidArgument exception is normally used to wrap and rethrow a formatting or value checking exception thrown by a subfunction, adding in the detailed information about the parameter number, name, and value to the general exception that was caught.
I need to add a LookupResolverRelation that searches out the relation which defines the same attributes as the first len-1 attributes of the lookup index for a table.
If len == 1, we have a trivial case where there is no relation.
If len == n, we search the table's Owner relation, Container Relation, Parent, Master, and Lookup relationships (in that order) for a relationship whose from index columns match the columns and order of columns in the first n-1 columns of the lookup index.
If no match is found for the current table, we recurse and try the superclass table, and keep trying until we reach a null superclass.
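The search described in the three notes above can be sketched with simplified placeholder types rather than the real model classes. The relation list is assumed to be pre-sorted into the Owner, Container, Parent, Master, Lookup order:

```java
// Sketch of the LookupResolverRelation search: find the relation whose
// from-index columns equal the first len-1 columns of the lookup index,
// recursing up the superclass chain until a match or a null superclass.
import java.util.List;

public class LookupResolver {
    public static class Relation {
        public final String kind;           // "Owner", "Container", "Parent", ...
        public final List<String> fromCols; // column names of the from index
        public Relation(String kind, List<String> fromCols) {
            this.kind = kind;
            this.fromCols = fromCols;
        }
    }

    public static class Table {
        public final Table superClass;         // null at the root
        public final List<Relation> relations; // assumed sorted Owner..Lookup
        public Table(Table superClass, List<Relation> relations) {
            this.superClass = superClass;
            this.relations = relations;
        }
    }

    public static Relation resolve(Table t, List<String> lookupCols) {
        if (lookupCols.size() <= 1) {
            return null;   // len == 1 is the trivial no-relation case
        }
        List<String> prefix = lookupCols.subList(0, lookupCols.size() - 1);
        for (Table cur = t; cur != null; cur = cur.superClass) {
            for (Relation r : cur.relations) {
                if (r.fromCols.equals(prefix)) {
                    return r;
                }
            }
        }
        return null;       // no match anywhere in the inheritance chain
    }
}
```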
There was a bug in the GEL Compiler which was using the full macro body as the macro name for certain expansions. This led to an argument overflow exception.
I'm not even sure the name should be mandatory as I've specified it in the GEL runtimes, but for now I've just put in a hack to truncate the macro body as the macro name.
The SAX Loader Parser now declares a clean set of relation variables and applies them to the created or updated objects as it should. The data attributes which are not hidden by relationships are also applied.
I really should prune the hidden and primary key attributes from the XML object specifications and eliminate them from the parsed attributes.
The SAX Loader parser now applies the declared references to the edit objects.
Attribute optionality no longer evaluates primary index attributes of the base table, as those attributes are not normally present in a structured document.
The packaging scripts and referenced jar files have been updated to use those provided with Ubuntu 12.04.
The S1Bam 2.0 model has been corrected and now manufactures properly.
All four projects (S1DbTest, S1Core, S1Bam, and S1Sme) have been remanufactured with 1.11.3858.
S1DbTest clean compiles, and the changes to the SME template have been propagated from there to the other models, so they should all be in pretty good shape at this time.
The SME Template has been updated in S1DbTest20 and propagated to S1Bam20, S1Core20, and S1Sme20.
The SAX Loader implementation still isn't fully coded, so I won't be doing a broadcast release for a day or few yet.
The verb RelationDef dereferences the containing Relation of a RelationCol.
The verb OwnerContainerOrNamedLookupRelationCol does a superclass-following search for an Owner, Container, or named Lookup relation whose FromCol references the current column.
These have been used to attempt a proper implementation of the arguments used to search for a named lookup relation during the SAX Loader processing.
The rule base has been updated to support Owner relations, which are now used in the S1DbTest 2.0 and S1Bam 2.0 models.
I extended the test coverage of the S1DbTest model by using both schema-level and concatenated id forms of primary ids for the various tables of the test model. I need to add in a UuidGenDef test yet to replace one of the schema Id64GenDefs, but I did uncover a bug in the Table Id16GenDef rules.
While working on the SAX Loader, I realized I need to distinguish between the Container relationship of an object and its Owner relationship, because the Owner (usually a Tenant) is not necessarily the same as the Container.
In particular, with a naming hierarchy owned by a Tenant, the Container of the root objects is null, but the Owner is still specified.
The GEL binding HasOwnerRelation and the reference tag OwnerRelation have been implemented as well.
Note that it may take a while to update all the rule sets so they don't barf when I add Owner relationship definitions to the models.
Added implementation of the configurable LoaderBehaviour attributes and accessors to the SchemaSaxLoader implementation.
The Table relationships didn't incorporate the TenantId, which resulted in compile-time errors.
The Relation didn't properly reference the RelationType, which is a global and not owned by the Tenant.
One of the references to a TypeSpec neglected to incorporate the TenantId.
The S1Bam 2.0 Tenant now owns its component data properly, with concatenated-key identifiers that combine the TenantId with the object id allocated within the scope of the tenant. The object ids only have meaning within the scope of a tenant.
The extra optional TenantId columns have been added to implement the optional relationships, and the indexes have been updated to include the appropriate TenantId in their evaluation.
The model relationships have been updated to specify the additional TenantId composite key attributes that replaced the previous Id64Gen based ids.
Model consistency errors are likely to crop up once I'm ready to start running and debugging the manufactured code.
The resulting code has not even been test compiled yet.
None of the other database naming conventions include version-specific naming, so I eliminated that oversight for PostgreSQL.
I'm in the midst of working on the S1Bam20 model, so it can't be manufactured by this release. Have patience -- it's a major edit.
The new 1.11 attributes for table definitions have been fleshed out for the S1DbTest20 model.
There were still some outstanding references to LookupColumnName which have been replaced by LookupIndex references.
The 1.11 models and rules have been updated to use the LookupIndex instead of the LookupColumn.
Treat the LookupIndex name attribute of the XSD as a late-resolution reference used to populate the LookupIndexId and LookupIndex relationship as part of the EndElement handling of the Table.
Remove the LookupIndexName and LookupColumnName, as they can be inferred from the LookupIndex after the EndElement processing is completed by the SAX BAM loader.
Stub in resolution of named singleton relationships. I need to implement the GEL bindings for the Table.LookupIndex relationship first, and wire its population to the SAX parser.
There will be two branches of code depending on whether a relationship is required or optional (the required branch relies on the previous check of the attribute, so its code can assume the attribute value is non-empty; the optional variant has to check whether it's null/empty before trying to look up the name.)
The corresponding LookupIndex relationship from the Table to its component Index has also been specified. The idea is to specify the name of the LookupIndex in the opening table element, with the index itself instantiated as one of the sub-elements of the table.
At runtime, if the LookupIndexId is not null, it's used to resolve the Index and bind it to the Table, and any LookupIndexName is ignored. If LookupIndexId is null and LookupIndexName is specified, a name resolution is used to locate the index instead. I.e. either can be specified, but the Id form is given preference for performance reasons (it's always faster to key a record by primary key than by alternate index.)
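A minimal sketch of that resolution preference, with plain maps standing in for the hypothetical primary-key and name accessors:

```java
import java.util.*;

// Hedged sketch of the Id-over-Name resolution preference. The maps
// stand in for the real primary-key and alternate-index lookups, which
// are assumptions here, not the manufactured API.
public class LookupIndexBinder {
    public static String resolveIndex(Long lookupIndexId, String lookupIndexName,
                                      Map<Long, String> byId, Map<String, String> byName) {
        if (lookupIndexId != null) {
            // Primary-key resolution is preferred for performance; the
            // name, even if also specified, is ignored on this branch.
            return byId.get(lookupIndexId);
        }
        if (lookupIndexName != null && !lookupIndexName.isEmpty()) {
            return byName.get(lookupIndexName);
        }
        return null; // neither specified -- no lookup index bound
    }
}
```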
Next I need to wire the XSD enhancements required, the SAX parser changes for the new attributes, and update the models to use the new attributes so I have test data for the 1.11 rules to exercise.
The ultimate point of this is to be able to resolve lookups in the SAX Loader parser.
The attribute values of the element are now saved as named attributes of the current parse context so that they can be referenced by the endElement() implementation if necessary.
Code has been added to verify that required attributes have values, including required named singleton relations.
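The attribute-saving pattern might look roughly like this. It's a simplified stand-in for the manufactured parser; the key detail is that the attributes are copied, since SAX reuses its Attributes object across callbacks:

```java
import java.util.*;

// Sketch of the parse-context pattern: startElement() captures the
// element's attributes so that endElement() can still see them, and
// required attributes (including required named singleton relations)
// are verified to have values. Names are illustrative.
public class ParseContext {
    private final Map<String, String> attrs = new HashMap<>();

    public void startElement(Map<String, String> xmlAttrs) {
        attrs.clear();
        attrs.putAll(xmlAttrs); // keep a copy; SAX reuses its Attributes object
    }

    public String getAttr(String name) { return attrs.get(name); }

    public void endElement(List<String> required) {
        for (String name : required) {
            String v = attrs.get(name);
            if (v == null || v.isEmpty()) {
                throw new IllegalStateException("Missing required attribute " + name);
            }
        }
    }
}
```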
Move the SchemaXml package to SchemaSaxLoader. Just because the cartridge is named XML doesn't mean I need to include XML explicitly in the package name, and with the number of table objects being produced, each SAX parser should be in a separate package.
At this point I have enough of the grammar defined to deal with the S1DbTest 2.0 model, although there is a lot more work to be done with Chain definitions and such before the S1Bam 2.0 model could be properly defined and parsed.
The short term goal was to produce enough of an S1DbTest parser to be able to exercise the full range of atomic data types by implementing a SAX Loader Parser that can Insert, Update, or Replace instances keyed by name attributes.
In order to do the same for S1Bam 2.0, I'll need to enhance the model to track dot hierarchy names. The Scope relationship of an AnyObj is more important than a general relationship. In conjunction with the UName index on the AnyObj, the Scope specifies how to build a dot hierarchy name.
Postulating beyond the AnyObj hierarchy, the Container relationship of an object, in conjunction with its naming attribute configuration, holds the core information needed to generalize a unique-within-container naming implementation and a corresponding dot hierarchy name to identify named objects within the overall object container hierarchy.
But I don't need to do that right now. I can finish fleshing out the SAX parser as is, refactoring it to SaxLoader as I go.
I won't be comfortable taking 1.11 out of Alpha until I have the SAX Loader parser completely coded and working. Without it, I don't have the code in place to fully exercise the insert and update behaviours of the S1DbTest20 model that exercises all the atomic types.
Once the loaders are in place, I can work up artificial S1DbTest20 test cases that will ensure all possible combinations of values are exercised by the loader, possibly including intentionally bad data rejection.
The S1Sme20 (Small-Medium-Enterprise) template model has been extracted from the S1Bam20 model and propagated to the S1DbTest20 and S1Core20 models.
All of the models now specify some reasonable first-cut values for LoaderBehaviour. However, I've also realized I should refactor the SAX Parser as a SAX Loader because it's intended to load a structured XML document, not deal with the request/response parsing that will be specified for the XML form POST requests to the SHTTP server that I intend to use for servicing client sessions.
I'm just not comfortable with restricting myself to the single-class arrays of a WSDL interface. I need and expect to receive mixed hierarchy lists, and I can pass those around easily with customized XSD specifications and parsers.
Of particular concern are the read methods, which can return anything that derives from the table as well as instances of the table objects themselves.
The GEL binding for Table.LoaderBehaviour has been implemented.
I need to update the S1Core20 and S1DbTest20 Business Application Models to specify the new attribute. I could have made it optional, but I really want the specification to be explicitly stated for every table of a BAM.
You can expect several configuration attributes to become mandatory with the 2.0 release.
The engine initialization now populates the LoaderBehaviour table properly, and the BAM parser has been updated to handle this new mandatory attribute.
The S1Bam20 model has been updated to specify the new attribute as test data, but S1Core20 and S1DbTest20 need to be updated as well. However, I'll do and check in those updates when I wire the GEL binding for the new attribute on the next update release. In the meantime, this release lets you (indeed, requires you!) add the new attribute to your models and verify that the syntax is correct.
The HasLookupColumn GEL binding was not iterating properly: it invoked iterator.next() more than once per loop body, consuming more elements than were valid. The results would have been horribly wrong even if the bug hadn't caused a stack trace.
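For illustration, a minimal reproduction of that class of defect and its fix (this is not the actual binding code):

```java
import java.util.*;

// The buggy form calls next() twice per loop body: it consumes two
// elements per iteration and walks off the end of the collection on
// odd-length input. The fixed form captures next() exactly once.
public class IteratorFix {
    public static List<String> joinPairsBuggy(List<String> cols) {
        // BUGGY: two next() calls per pass -- throws NoSuchElementException
        // on odd-length input and silently skips elements otherwise.
        List<String> out = new ArrayList<>();
        Iterator<String> it = cols.iterator();
        while (it.hasNext()) {
            out.add(it.next() + ":" + it.next());
        }
        return out;
    }

    public static List<String> visitAll(List<String> cols) {
        // FIXED: next() captured once per iteration.
        List<String> out = new ArrayList<>();
        Iterator<String> it = cols.iterator();
        while (it.hasNext()) {
            String col = it.next();
            out.add(col);
        }
        return out;
    }
}
```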
The table element object parsers have been updated to handle the named singleton relationships, which hide the key column attributes of their FromIndex.
This release was not posted to the files download section because it would not run -- the incomplete code for the new LoaderBehaviour attribute was causing a traceback during the BAM load.
There is no installer for this "release" because there is still more code to write before there will be any functional enhancements completed.
Added LoaderBehaviourEnum and LoaderBehaviour lookup object. Wired a mandatory reference from the Table object to the LoaderBehaviour, with a default of "Insert".
The LoaderBehaviour is one of Insert (insert the instance if it doesn't exist, but don't update or replace it if it's already in the database), Update (insert or update the object, but don't replace it), or Replace (perform a cascading delete of the object and recreate it from scratch.)
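The three behaviours could be sketched as follows. The action strings are illustrative shorthand, not the manufactured API:

```java
// Sketch of the three LoaderBehaviour semantics: Insert never touches an
// existing instance, Update inserts or updates but never replaces, and
// Replace cascades a delete before recreating the instance from scratch.
public class LoaderBehaviourDemo {
    public enum LoaderBehaviour { INSERT, UPDATE, REPLACE }

    // Returns the action the loader would take for an instance, given
    // whether a matching instance already exists in the database.
    public static String apply(LoaderBehaviour b, boolean exists) {
        switch (b) {
            case INSERT:  return exists ? "leave-as-is" : "insert";
            case UPDATE:  return exists ? "update" : "insert";
            case REPLACE: return exists ? "cascading-delete-then-insert" : "insert";
            default: throw new IllegalStateException();
        }
    }
}
```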
The new LoaderBehaviour attribute has not been wired to the element handler for the Table objects, nor has it been wired to GEL yet. Only the basic object model enhancements are done so far.
The binding IsColumnInContainerOrNamedLookupRelation evaluates the complex conditions used to determine if an attribute is "hidden" in a structured XML document. If the attribute participates in the from index of a relation which references a unique index and the to table of the relation has a LookupColumnName, then the relationship is a candidate to be evaluated. This implicitly includes container relationships, though the single explicit container relationship is always included even if it doesn't have a LookupColumnName.
Once the set of candidate relationships is built, their columns are checked to see if they reference the column being considered.
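A rough sketch of that evaluation, with booleans standing in for the real unique-index and lookup-name checks (the Rel shape is hypothetical, not the BAM object):

```java
import java.util.*;

// Hedged sketch of the "hidden attribute" evaluation: a relation is a
// candidate if it is the container relation, or if it targets a unique
// index and its to-table names a lookup column; the column is hidden
// when any candidate's from-index references it.
public class HiddenColumnCheck {
    public static class Rel {
        public final boolean toUniqueIndex;
        public final boolean toTableHasLookup;
        public final boolean isContainer;
        public final Set<String> fromCols;
        public Rel(boolean u, boolean l, boolean c, Set<String> f) {
            toUniqueIndex = u; toTableHasLookup = l; isContainer = c; fromCols = f;
        }
    }

    public static boolean isHidden(String column, List<Rel> relations) {
        for (Rel r : relations) {
            // The explicit container relation is always a candidate,
            // even without a LookupColumnName on the to-table.
            boolean candidate = r.isContainer
                || (r.toUniqueIndex && r.toTableHasLookup);
            if (candidate && r.fromCols.contains(column)) {
                return true;
            }
        }
        return false;
    }
}
```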
The manufactured code has been split up into the sub-projects schemaVersion, schemapg8Version, schemaramVersion, and schemaxmlVersion. This makes it easy to prepare deliverable .jar files with Eclipse.
All sub-projects other than schemaVersion depend on the schemaVersion project. Only the schemapg8Version project includes the references to the PostgreSQL JDBC implementation library.
Unrecognized, not Unrecgnized. :)
Working on the SAX parser as well. It should be just about clean-compiling though still non-functional code at this point. At least the basic structure of the objects has been fleshed out pretty thoroughly.
There were some defects in the code for the PostgreSQL Table implementations caused by the shift to iterating through lists to build the result arrays explicitly. The code for S1Bam20 now clean compiles.
The rules have been enhanced with the same modifications as have been applied to the 1.10 rule base, so with a clean compile, I expect it to function equivalently well, though I have not tested that assumption. Use at your own risk -- it is, after all, an alpha release.
Remanufactured with 1.10.3620 to correct a major widespread defect.
The singleton relation setters were checking whether the value had changed before applying a change. However, that check doesn't work after a read of a non-null value unless the relationship is resolved after the read; setting the relationship to null would incorrectly fail to clear the relation index attributes.
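A stripped-down illustration of the defect; the field and method names are hypothetical, not the manufactured API:

```java
// After a plain database read, the relation object itself is unresolved
// (null) even though its index attributes hold a value, so a
// changed-value check against the object wrongly concludes that
// setting null is a no-op and skips clearing the index attributes.
public class SingletonRelationFix {
    public static class Buff {
        public Long ownerId;     // relation index attribute, loaded from the DB
        public Object ownerRef;  // resolved relation object; null after a plain read

        public void setOwnerBuggy(Object newRef) {
            if (ownerRef == newRef) {
                return;          // BUG: unresolved ref == null, so set(null) is skipped
            }
            ownerRef = newRef;
            if (newRef == null) ownerId = null;
        }

        public void setOwnerFixed(Object newRef) {
            ownerRef = newRef;
            if (newRef == null) {
                ownerId = null;  // always clear the index attributes on null
            }
        }
    }
}
```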
The PostgreSQL database test now does a cascading delete of the old SchemaDef and replaces it with one created from the BAM document being read. Both the database reads and deletes have now been tested.
I do believe I'm ready to release a beta by Thursday, before I take the weekend off.
Remanufactured with MSS Code Factory 1.10.3611.
In particular, LinkedList instances are used instead of ArrayLists, because they're less expensive to grow while reading the SQL result set into buffer instances.
Also, this way we know that the elements of the final list returned to the caller include the proper subclasses of buffers in the case of queries for table objects that have subclasses.
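A minimal sketch of the pattern, assuming simple stand-in buffer classes (the real code reads a JDBC result set instead of a prebuilt list):

```java
import java.util.*;

// Rows are accumulated in a LinkedList: constant-cost append with no
// array copies while the result size is unknown, and each element keeps
// its concrete subclass type when the distinguished result lists are
// merged for the caller.
public class ResultBuffering {
    public static class Buff {
        public final String name;
        public Buff(String n) { name = n; }
    }
    public static class SubBuff extends Buff {
        public SubBuff(String n) { super(n); }
    }

    public static List<Buff> merge(List<? extends Buff> a, List<? extends Buff> b) {
        List<Buff> merged = new LinkedList<>();
        merged.addAll(a);   // elements retain their concrete subclass types
        merged.addAll(b);
        return merged;
    }
}
```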
I'm not entirely sure whether this will fix my problem yet -- I'll be resuming the debug of the 1.11 PostgreSQL database reader shortly. It doesn't seem to be fetching all the rows that I know are in the database. The initial load/insert to the database and the update statements work fine, but the readers need some debugging.
In particular, I noticed this when I discovered that the RelationColumn elements of a Relation only had one element, when I know very well that many of the relationships in the application model being tested have multiple columns.
I will need to propagate similar changes to the Oracle implementation.
Note that because of the way the Ram implementation is structured in memory, this problem does not arise for the Ram implementation. It's an issue specific to the way that I handle merging together the result lists of distinguished collections of objects.
Implementing the UUID generators also required adding the DispensedUuidGenerators iterator to GEL.
The S1Bam20 model now compiles cleanly, except for the new work-in-progress SAX parser code.
The PostgreSQL and Oracle code now clean compile when using UuidGenDef instances. However, I still need to actually implement the rules for the UUID generators so none of the PostgreSQL, Ram, or Oracle implementations clean compile right now for S1Bam20.
S1DbTest20 and S1Core20 compile ok because they haven't inherited the template model from S1Bam20 just yet. I want to get the missing code implemented before I "broadcast" the new SME template.
The S1Bam20 model did not properly specify a Tag relationship, using an old style key instead of the new composite keys.
There were no GenDef branches for the generators, which was causing problems manufacturing code for the newly added UuidGenDef types in the S1Bam20 model.
Remanufactured by 1.10.3568.
Instead of returning the internal SortedMap that is maintained by the object cache, the APIs now return a copy of the SortedMap to prevent concurrent modification exceptions while iterating through the results to update the database.
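The fix amounts to something like this (names illustrative, not the manufactured cache API):

```java
import java.util.*;

// Returning a copy of the cache's SortedMap lets the caller delete or
// update instances while iterating the result, without tripping a
// ConcurrentModificationException on the cache's own fail-fast map.
public class CacheReader {
    private final SortedMap<Long, String> cache = new TreeMap<>();

    public void put(Long id, String v) { cache.put(id, v); }
    public void delete(Long id) { cache.remove(id); }

    public SortedMap<Long, String> readAllBuggy() {
        return cache;                  // internal map: mutating while iterating blows up
    }
    public SortedMap<Long, String> readAll() {
        return new TreeMap<>(cache);   // copy: safe to iterate while deleting from cache
    }
}
```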
I've been working my way through the debug of the reloading process. I've fixed a number of issues, so now it's properly trying to delete the old SchemaDef.
And failing, of course. But at least now it unlinks the GenDefs from their Dispenser successfully.
However, this has highlighted a deeper problem -- the duplicate readers for PostgreSQL and for the main system should be returning COPIES of the matching maps, not the maps that are stored internally by the instance cache. Otherwise when you try to update the data while iterating, you'll get concurrent modification exceptions as I have been.
I just wanted to work around it and make sure that the piece of code surrounding it wasn't a problem. The whole business of building a list of GenDefs to be unlinked will go bye-bye again once I fix the core DbIO code.
The Oracle and PostgreSQL table id generator scripts weren't being deposited in the correct directory, and there was a naming error in the top level file expansion that was preventing the Id64 generators from being properly produced (an occurrence of "id64Gen" instead of "Id64Gen".)
The BAM parser has been modified to properly unlink and remove the SchemaDef when it's reloaded from an XML file. This is untested, but it should work.
Reworked Business Application Model template section significantly. This still won't quite work because all of the AnyObj derivatives still need to be modified to use a component key referencing the TenantId as well as their existing id components.
The mssbam-1.11.xsd specification was incorrect. It had some typos in the naming of the AccessFrequency attributes.
This was discovered by adding some first-cut access and data scope specifications to the S1Bam20 model, and converting the ISOTimezoneIdGen to an ISOTimezoneEnum.
The 1.11 Business Application Model has been updated to add the GroupOperatorEnum and RelGroupValue components of the Relation definitions.
The GroupOperator specifies operations available over a group of objects, normally indicated by a relationship targeting a duplicate index.
The type of the referenced column affects which group operators can actually be used for the referenced columns, and straying from that supported set may result in code manufacturing errors.
The RelGroupValue is an optional detail of a relationship which targets a duplicate index. There are no RelGroupValue instances expected for relationships which target a unique index, though I suppose it would be possible to manufacture degenerate cases to allow RelGroupValues to be specified for uniquely indexed targets as well.
The UAltIdx specifies a unique multi-part key comprised of the RelationId, GroupOperatorId, and ToColId, effectively forming a sparse binary matrix of the operations to be supported and cached by the manufactured code (the presence of an instance implies a true state on the appropriate matrix cell entry.)
Wired in the Oracle 11gR2 schema creation script support.
There were some bugs in the creation of the Oracle stored procedures that needed to be corrected. That's not to say the PLSQL is valid yet, but at least it manufactures cleanly now. There were also missing references to the EnumDef types, which have also been corrected.
I've removed the references to the proprietary cartridges from the default manufacturing configurations, and created variants on the command scripts that only use the GPLv3 rulesets (*Pg8Only scripts in the bin directory.)
The default that I use specifies to use the internal proprietary configuration that incorporates Oracle support (as well as other databases in the future.)
As a result, if you want to modify the data structures used by MSS Code Factory itself by altering the S1Bam20 model, you'll have to take a full branch of the code to eliminate the Oracle support when you manufacture and build your custom variant. It's easy enough to do, but you'll no longer be able to manufacture a full set of support for the S1Bam20 and S1DbTest20 without a support contract.
The Oracle 11gR2 database creation scripts now manufacture cleanly, with reasonable support for the Blob, Number, and Uuid types I added since I last worked with the Oracle schema creation scripts.
The Oracle 11gR2 rules have been refreshed and brought into compliance with the 1.11 2000 character rule body requirement.
The rules load and run cleanly for S1DbTest20, but have not been run against an Oracle instance yet to create the database.
From here on in, the Oracle creation scripts will be part of the engine build process primarily so that users have the option of running with an Oracle 11gR2 Express or commercially licensed database engine instead of PostgreSQL should they so desire.
Not that which databases are supported really matters at this point in time, as there is no client to use a shared model repository yet.
Added the new outline for an XML SAX Parser based on the manufactured XSDs. There will be a substantial amount of work before that's functional, but it's going to be needed to implement database initialization loaders.
The values returned from GEL for the default MinValue and MaxValue of a FloatDef and DoubleDef weren't producing correct string values. This has been corrected, but is unlikely to be back-ported to 1.10 as the problem only showed up with the S1DbTest20 PostgreSQL database creation scripts.
The S1DbTest20 PostgreSQL database now instantiates cleanly.
This is a repackaging of the 1.11.3486 release, with an updated version of the S1DbTest20 model. The model now includes both Req and Opt variants on all the combinations of full-range, min-constrained, max-constrained, and min-max constrained values.
This should provide a complete exercising of all the combinations of attributes supported by the manufactured code.
Corrected some typos and errors in the rules that were caught by the full data type exercising performed by S1DbTest20. The manufactured code now clean-compiles.
However, I've decided I'll also be adding an "Opt" variant of the current required columns, just to ensure that the full range of expected variants is exercised. I'm half way to complete rule/code coverage, so why stop part way to the finish line?
I also corrected an error in the new build process. As the Package utility script extracts the subversion version of the rules for distribution, it's necessary to make sure the rules are checked in before building a package. Now that I'm aware of the issue, the rules and the distribution package should be in sync from here on in.
There were some underlying issues with incomplete expansion specifications for some of the core data types, which were showing up during expansion of the SchemaTableBuff.java production. There may be bad code in some of the manufactured files (I haven't test-compiled it yet), but the manufacturing no longer throws exceptions due to unhandled types, leaving the detection up to the compile phase for the code instead of the manufacturing phase.
The constraints on Number.Digits and Number.Precision that limited them to a subset of the Short.MaxValue potential have been removed. There are algorithms for unlimited digits, so for the sake of argument let's say you can have up to 32,000-odd digits in a number.
The ca-singularityone-msscodefactory-s1dbtest.xml model has been added, along with scripts for manufacturing and loading the model. The purpose of this model is to exercise the full set of options for defining and persisting atomic data to a relational database, including optional and required columns, columns with no value constraints, minimum values specified, maximum values specified, and both minimum and maximum values specified.
Next I need to update the SAX parser to accept the new specification syntax.
The ManufactureToolSet object is a component of the Project, MajorVersion, and MinorVersion objects.
This allows the specification of the tools for which source code is to be manufactured for a project. While I have relied on cartridge specifications in the past, this information really should be part of the Business Application Model specifications, not a command-line option.
ToolSetName is unique in the context of the manufactured object which contains it.
I couldn't use the existing Name attribute for the ToolSet names because ToolSet names are a concatenation of up to 8 Tool names, separated by "+" signs.
It may be necessary to come up with some creative AnyObj.Name values for long toolset names.
The BAM XSD and the parser do not implement support for the new object yet.
The database creation scripts have been exercised, but I won't be loading a new instance of the models until the BAM XSD and SAX parser are updated to support the new specifications and they've been wired in to the S1Core20 and S1Bam20 models.
The first official test case is to create a raw PostgreSQL database using the manufactured scripts and to apply the MSSBamPg8Loader to that database using the S1Core20 and S1Bam20 models as test data.
This does not completely exercise the API suite, as the Min, Max, Null, and Init values for several of the data types are always null.
After the initial load of the BAM into the database, the parser needs to do a delete-and-reload on the most detailed of the Project, MajorVersion, and MinorVersion specifications for a loaded Business Application Model.
It also needs to synchronize linkages with existing TLD, Domain, Company, User, and License definitions as they exist in the database. I'm thinking about providing a command line option as to whether each of those items is to be referenced or updated by the loaded model. If they don't exist, they're to be loaded, of course.
I will be creating a special case BAM that specifically provides values for these attributes in order to ensure that the valued persistence works as well.
But for now I at least know that the core SQL syntax being used for inserts is valid with the new data types in place; that means the new dynamic SQL statement fragments are ok for the null case.
Once all three of these test cases are passed, I will have the core facilities needed to maintain a customer database of active project models that I need to manufacture and deliver to them on a weekly basis.
Creating variations on the command line manufacturing facility that manufacture models from the database instead of XML specification files will be quite trivial.
Note that I am not creating a versioning model repository, just a shared editing environment with XML import/export facilities. I consider this a critical first step towards creating an environment that a client GUI application can actually be used for.
Note that I'll probably be looking at resurrecting the web form prototypes as a first step to collecting my thoughts about producing GUIs. The RAD feedback of being able to instantiate a database and having an instant (if crude) GUI for navigating and editing that database is very valuable in the design phase of applications.
The rules have been brought forward to 1.11 and will subsequently be maintained with this release. 1.10 will be updated to use this copy of the rules instead of the 1.10 rules, so that work can continue on testing the PostgreSQL implementation with 1.11.
Note that there is a bug in the code which is preventing the rules from being loaded as resource files within the jar file. As a temporary workaround, the rules are included as a cartridge-1.11 directory in the installer.zip extract, and the path to that directory has to be manually configured as a msscf.cartridgedir entry to be searched explicitly.
Annoying, but it works for now.
Support for NumberDef.Digits and NumberDef.Precision have been added.
The code has been remanufactured by 1.10.3438, which produces a valid PostgreSQL database creation script set (i.e. no errors during the database creation of the 1.11 BAM database.)
Support for the AnyObj.ShortDescription attribute has been added to the BAM XML parser and the GEL bindings.
The new verbs HasShortDescription and ShortDescription have been added to the BAM GEL.
As far as I can think of, this is the last of the new attributes from the 1.10/1.11 series that needed to be added to MSS Code Factory.
Next up: PostgreSQL schema creation testing.
7,118 lines in CFLib 1.11
230,053 lines in CFCore 1.11
1,075,393 lines in MSSBam 1.11
1,312,564 lines total for 1.11
The GEL bindings for the Table.TableChains iterator, extensions for Chain.Suffix, and the references Chain.PrevRelation and Chain.NextRelation have been coded and wired to the runtime.
The mssbam-1.11.xsd has been enhanced to properly support TableChains specifications, their use has been incorporated into the ca-singularityone-msscodefactory-2.0-s1bam.xml model, and the SAX XML Parser has been upgraded to parse the TableChains and the Chain specifications within the TableChains elements.
The GEL bindings have been coded and wired in place for Description, DataScope, EditAccessFrequency, EditAccessSecurity, ViewAccessFrequency, and ViewAccessSecurity.
I still need to wire in the Chain support.
And I noticed that I defined an OptionalShortDescription as an attribute of an AnyObj as well, so I'll need to hook that in the same way I hooked in Description. But I'm not in any particular hurry to do that at this time.
Generally, a ShortDescription will be used as status line help or a tooltip, while the regular Description is used as F1 popup help.
Remanufactured with the latest 1.10 packaging, which would correct any potential errors in the manufactured relationship handling. But as MssCF bindings are not actually used by 1.11, the point is moot. I'm just being consistent about the manufacture-forward process.
The new attributes Description, DataScope, ViewAccessFrequency, EditAccessFrequency, ViewAccessSecurity, and EditAccessSecurity have been added to the model, the XML parser, and are now properly populated during a BAM load/parse.
Next I need to add the following accessors for working with the new attributes in the rule base:
I also need to codify the higher level rules that will actually be used in the rule base. For example, the Value implementations of these rules try to get the attribute to use from the Value itself, the Table it's a member of, and finally it's defining SchemaDef (which always has a value set by the loader.)
The mssbam-1.10.xsd has been brought forward to mssbam-1.11.xsd, and the SAX parser updated to reference the new schema specification.
The 2.0 models have also been brought forward and are compliant with the 1.11 XSD.
The 1.11 XSD and parser have been enhanced to implement and support the DataScope, ViewAccessSecurity, EditAccessSecurity, ViewAccessFrequency, and EditAccessFrequency attributes of the SchemaDef, Value derivative, and Table specifications.
The data stores for the enumerations have not been initialized yet, so the runtime aborts because it can't find the default value for one of those attributes.
As with 1.10, the handling and application of the Description attributes for the BAM models has been implemented.
While working on the MSS BAM parser enhancements for the DataScope, ViewAccessFrequency, EditAccessFrequency, ViewAccessSecurity, and EditAccessSecurity attributes, I discovered that several lookups didn't implement proper unique names. There were a number of attributes corrected in the 1.11 model as a result, and a full regeneration was applied.
The real work was in MSSBamCFBamParser.java, though.
The PostgreSQL scripts were corrected to be compliant with the 2000 character body limit imposed by 1.11, so the code base is being refreshed with 1.10.3384.
MSS Code Factory 1.11.3387 now successfully manufactures S1Bam20 with PostgreSQL scripts and integration, throwing no exceptions during the manufacturing process.
The 2.0 code manufactures as follows:
279,802 lines in S1Core20
1,306,657 lines in S1Bam20
1,586,459 lines total
1.11.3382 weighs in at 1,265,321 lines including the hand-written components. That's a 321,138 line difference.
The 1.11 model includes enhancements which have not been brought forward to 2.0 yet, so the 2.0 line count will grow at some point in the future.
In order to support multi-user access to most relational databases, it is necessary to use the "FOR UPDATE" clause to refresh and lock an instance when initiating an edit.
The only database I know of that doesn't use "FOR UPDATE" syntax is Microsoft SQL Server. When the time comes, I'll have to implement soft locking or rework the code to use whatever mechanism SQL Server expects for concurrency.
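The locking read that kicks off an edit is just a parameterized SELECT with the locking clause appended. A minimal sketch of building that statement text; the table and column names are placeholders, not the generated schema:

```java
// Sketch of the statement text for a locking read at edit start.
// PostgreSQL, Oracle, DB2, and MySQL/InnoDB all accept this form;
// SQL Server does not, and would need something like an UPDLOCK
// table hint or an optimistic soft-lock column instead.
public class LockingRead {
    public static String buildLockingSelect(String table, String pkColumn) {
        return "SELECT * FROM " + table
             + " WHERE " + pkColumn + " = ? FOR UPDATE";
    }
}
```

The statement is executed inside the editing transaction, so the row lock is held until the edit commits or rolls back.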
Just because SQL Server is based on Sybase ASE 10 doesn't mean it's been a stagnant database.
With the extra lock support code, MSS Code Factory 1.11 now weighs in at:
1,028,150 lines of code for MSSBam111.
230,053 lines of code for CFCore111.
7,118 lines of code for CFLib111.
1,265,321 lines total.
At 100 lines per page, that would be just over 25 reams of paper, printed single-sided (1,265,321 lines / 100 lines per page ≈ 12,653 pages; at 500 sheets per ream, about 25.3 reams).
The ISO Country object has been fleshed out with the various codes and search indexes that can be used to correlate nations using different standards.
The reference documentation at the University of North Carolina's website provided the list of international and national standards which I felt should be included in the model, but when the data is populated by initialization scripts, I'll be going to the sources and using that as input data, not the UNC data set. The UNC data set is published as reference material only, if you check the copyright at the bottom. They neither claim to own nor authorize the reuse of the data itself.
In compliance with the de-facto 2000 character VARCHAR limit of the commercial database products, MSS Code Factory will no longer rely on PostgreSQL's 8000 character support.
There will be a lot of work and editing of the 1.10 rules before 1.11 will accept them with the new restriction, but the compressed rule bodies and instruction source will work with the older 1.10 engine as well, so I don't need to bring the 1.10 rule base forward to implement this design correction.
The 1.11 scripts now run properly as well, though of course they reject the current 1.10 rules because many of the rule bodies are too long for the 2000 character constraint and need to be broken up into sub-expansions.
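The sub-expansion rework has to be done by hand at sensible expansion boundaries, but the size constraint itself can be illustrated with a naive chunker; the class name and the fixed-width split are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative chunker for the 2000-character VARCHAR ceiling. The real
// rework splits rule bodies at expansion boundaries, not at arbitrary
// character offsets; this only demonstrates the size constraint.
public class RuleBodySplitter {
    static final int MAX_BODY = 2000; // commercial VARCHAR limit

    public static List<String> split(String body) {
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < body.length(); i += MAX_BODY) {
            chunks.add(body.substring(i, Math.min(body.length(), i + MAX_BODY)));
        }
        return chunks;
    }
}
```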
With this refresh of the manual edits to the BLObj layer and the entirely custom MssCF package, this should be a working version of CFCore 1.11 and should take care of the remaining compile problems for the main project.
With the refactored rules in 1.10, CFCore now clean compiles.
There is no reason to expect the main code won't as well.
The CFCore and main code do not clean-compile because they still reference the 1.9 library packages instead of the 1.11 packages. This will be resolved when the rule base snapshot for 1.10 is taken from the current 1.9 beta release.
The code has been brought over from 1.9 and updated to its new version number, the license headers have been refreshed, and new contact information provided. The library .jar files have been packaged and are being versioned as well.
The only thing I expect to add to this library in the near future is support for Overflow, Underflow, and Range argument exceptions taking BigDecimal and BigInteger as arguments if I haven't done so already.
This is the oldest revision for MSS Code Factory 1.11.