An attribute has been added for tracking the generated email confirmation UUID in the database: an optional ConfirmationUUID on the SecUser object, with a non-unique index on that attribute (it has to be non-unique because multiple rows hold a NULL value after their email addresses have been confirmed). Once an email address has been confirmed, the new attribute is cleared to null.
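A minimal sketch of the intended lifecycle, using hypothetical class and accessor names rather than the actual manufactured SecUser code:

```java
import java.util.UUID;

// Hypothetical sketch of the SecUser confirmation lifecycle; the real
// attribute and accessor names in CFSecurity may differ.
class SecUserSketch {
    private UUID confirmationUuid;            // null once the email is confirmed

    void requestEmailConfirmation() {
        confirmationUuid = UUID.randomUUID(); // stored and indexed non-uniquely
    }

    boolean confirmEmail(UUID candidate) {
        if (confirmationUuid != null && confirmationUuid.equals(candidate)) {
            confirmationUuid = null;          // cleared to null after confirmation
            return true;
        }
        return false;
    }

    UUID getConfirmationUuid() {
        return confirmationUuid;
    }
}
```

Because confirmed rows all carry NULL in that column, a unique index would reject the second confirmed user, which is why the index has to be non-unique.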
There is a critical error in the way hash values are calculated and keys compared in the core Java object code. All projects need to be remanufactured to apply these fixes for the new Map/HashMap implementation to function correctly.
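The underlying issue is Java's Map contract: HashMap lookups only work when the key classes implement equals() and hashCode() consistently, so that equal keys always produce equal hash codes. An illustrative composite key, not the actual Factory key classes:

```java
import java.util.Objects;

// Illustrative composite key -- not the actual Factory key classes.
// HashMap lookups only work if equals() and hashCode() agree:
// equal keys MUST produce equal hash codes.
final class PKey {
    final long tenantId;
    final long id;

    PKey(long tenantId, long id) {
        this.tenantId = tenantId;
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PKey)) return false;
        PKey k = (PKey) o;
        return tenantId == k.tenantId && id == k.id;
    }

    @Override
    public int hashCode() {
        return Objects.hash(tenantId, id);
    }
}
```

If either method is wrong, a key that was put into the map can silently fail to be found again, which is exactly the kind of failure that forces a remanufacture of everything built on the cache.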
All of the 2.1 projects have now been refreshed and built with the List interfaces instead of the SortedMap interfaces. The hand-written code has been modified and is also building cleanly.
This is a major update -- the contract for duplicate index queries has changed from a SortedMap to a List (which just happens to be sorted by ascending natural-order primary key). There will be code in your applications that needs to be modified.
The internal relationships used for the GEL runtime have been restructured to use independent relationship key attributes instead of sharing the TenantId and CartridgeId of the object. With the shared attributes, parts of an otherwise-null relationship were non-null, so the ORM code would try to resolve the relationship against the database and find no matching lookup. (I really should check for that and throw an exception.)
The code for CFCore 2.1 now uses the List interface, which replaces the old SortedMap interfaces for returned result sets when reading objects by duplicate index. The lists are sorted in natural order by ascending primary key, but the SortedMap implementations Java provides were proving too heavy for a general purpose interface. As it is, I really shouldn't bother with sorting the result sets at all, but I do hate a disorderly data world. I'd make the interface a sorted set, but then I couldn't use the lightweight ArrayList implementation that is supposed to provide a performance boost over the heavy TreeMap implementation I was using.
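The shape of the change can be sketched as follows; Buff and the duplicate-index read here are hypothetical stand-ins for the manufactured value objects and accessors, not actual CFCore classes:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical value object standing in for a manufactured buff class.
class Buff {
    final long pkey;
    final String name;
    Buff(long pkey, String name) { this.pkey = pkey; this.name = name; }
}

class DupIndexRead {
    // The old contract returned a SortedMap keyed by primary key; the new
    // contract returns a lightweight List sorted by ascending natural-order
    // primary key, avoiding the overhead of a TreeMap.
    static List<Buff> readByDupIdx(List<Buff> unsorted) {
        List<Buff> result = new ArrayList<>(unsorted);
        result.sort(Comparator.comparingLong(b -> b.pkey));
        return result;
    }
}
```

Callers that used to iterate `map.values()` get the same ascending-key iteration order from the List, but code that probed the SortedMap by key will need rework.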
The changes for the general purpose Java code have been written, but are completely untested. I'll be working through getting CFDbTest 2.1 into a clean-building state before issuing the next refresh of MSS Code Factory 2.0, which will incorporate the fixes for the general purpose Java ORM interface. At that point, I'll be ready to manufacture the code for the affected layers of all the 2.1 projects and distribute the refresh.
The Java core now uses Map/HashMap wherever possible for maintaining the cache and for maintaining the RAM storage. The penalty of using SortedMap/TreeMap grew to be too much, and I had to do something about it to move forward with some plans of mine.
MSS Code Factory 2.0 saw some modelling changes to the CFCore 2.1 project, so I'm repackaging and republishing 2.0.
The sp_bootstrap implementations had neglected to address the new required column clus.Description in the stored procedures. This has been corrected. The PostgreSQL database for CFDbTest 2.1 installs cleanly and the Java code builds properly. I'll be posting an update of that code shortly.
I've tweaked MSSBamCFGenFileObj to override expandBody(), invoking the superclass implementation and then running System.gc() after every 100 files produced. With any luck, I'll see some performance improvements, especially on large projects.
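The tweak amounts to something like the following sketch; the real MSSBamCFGenFileObj and expandBody() signature live in the Factory code base, so the class names and the stand-in for the superclass call here are illustrative only:

```java
// Hedged sketch of the periodic-GC tweak; the real class extends a Factory
// superclass and calls super.expandBody(), which superExpandBody() stands
// in for here.
class GenFileObjSketch {
    private int filesProduced = 0;

    String expandBody(String body) {
        String expanded = superExpandBody(body); // stands in for super.expandBody()
        filesProduced++;
        if (filesProduced % 100 == 0) {
            System.gc(); // hint a collection every 100 files to curb heap growth
        }
        return expanded;
    }

    String superExpandBody(String body) { return body; }

    int getFilesProduced() { return filesProduced; }
}
```

System.gc() is only a hint to the JVM, but on long manufacturing runs even a hint every 100 files can smooth out memory pressure.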
The CFSecurity 2.1 model has been updated to add a Description to Cluster, password reset attributes for the SecUser object, and the addition of a DevName attribute to a SecSession. Last but not least, a ProxyUser parent has been added to SecSession, so that when artificial jumps into a system.system.system session occur, there is tracking as to *who* is responsible for that leap into the system data space.
The device name of a SecSession is *not* a lookup key, because it will be set to "web" for security manager interface logins, and it will not be legal to specify "web" as a device name in the Add Device form in the future. (There are a lot of rules and checks I need to go back and implement once I'm done fleshing out the skeleton of the interface. If you allow garbage in, you get garbage out.)
A test run of the CFBam 2.1 manufacturing took only 21 hours for the longer job instead of 26 hours, so the periodic garbage collection has made a substantial improvement to the memory fragmentation situation and to the performance issues that used to result on big jobs. MSS Code Factory 2.0 no longer slows down part way through large jobs, but continues at a merry and consistent pace throughout the run.
The issues with SAP/Sybase ASE 16.0 support have largely been addressed. There is one test in CFDbTest 2.1 which is not passed due to a bug in either the database engine or the JDBC driver (a client-side update of the OptFullRange table, which has BLOBs, fails with a range-check error indicating that only 24 arguments are allowed, but the statement defines 30; the parse buffer also indicates 30 arguments, so I suspect the problem is in the jconn4.jar driver).
All features and functionality are now available to SAP/Sybase ASE users, including Chains.
The core objects have also been tweaked so that cache misses are registered and do not require re-querying the database for future resolutions. Otherwise, repeating a query that missed the cache would re-probe the database unnecessarily, reducing performance. The price to pay is that cache misses now consume memory to store the key with a null value, so I expect the runtime size for CFCore 2.1 processing to increase substantially after the 2.1 engine is rebuilt with this code change.
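A minimal sketch of the idea, assuming hypothetical class names rather than the actual CFCore cache code. The key point is distinguishing "not cached yet" from "cached as absent" via containsKey():

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of negative caching: a miss is remembered by storing the key with a
// null value, so the next lookup answers "not found" without re-querying the
// database. Names are illustrative, not the actual CFCore cache classes.
class NegativeCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private int backingStoreProbes = 0;

    interface Loader<K, V> { V load(K key); }

    V get(K key, Loader<K, V> backingStore) {
        if (cache.containsKey(key)) {
            return cache.get(key);        // hit, possibly a remembered miss (null)
        }
        backingStoreProbes++;
        V value = backingStore.load(key); // may return null on a true miss
        cache.put(key, value);            // register hit OR miss; a miss costs memory
        return value;
    }

    int getBackingStoreProbes() { return backingStoreProbes; }
}
```

Note that a plain `cache.get(key) == null` check cannot tell a remembered miss from an uncached key, which is why containsKey() carries the logic.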
In the end, it turned out all I had to do was declare my cursors "insensitive". The "Replace Complex Objects" test now passes.
However, there is a bug in the prepared statement processing for the OptFullRange updates. There are 30 parameters to the SQL statement being evaluated, and Eclipse shows that the prepared statement instance returned by the Sybase ASE driver agrees with that. However, when setting parameter 25, an exception is thrown about an array index being out of bounds. What makes it really interesting is that the statement invokes six functions to obtain values for some of the columns, which would explain the blow-up point of 25. The same style of statement is used by the insert, and that runs successfully, so I do believe I've found a bug in 16.0's processing of prepared update statements.
As a developer, I do not have the time or patience to deal with SAP/Sybase for bugs in their engine. Let them fix it when they get around to it. In the meantime, all other tests pass, so good enough: I can move forward with implementing the Chain support for SAP/Sybase ASE.
After over a day's work switching over to the use of #tmp tables for Sybase ASE in order to resolve a database problem, I encountered a new problem with using #tmp tables within transactions -- you're not allowed to. Coming up with a uniform non-transactional #tmp table model would be onerous, to say the least, so Sybase ASE is being dropped from the support list again. I'm checking in this version of code in case I ever come up with an idea about how to make it go that I haven't thought of or tried yet today.
There is currently a bug affecting updates of OptFullRange for CFDbTest 2.1. I'll need to do an Eclipse build to trace through to debug that one, and I don't even have the project created yet so it'll take a bit of time. This affects test 0032-UpdateOptFullRangeWithValues. It is the only test case for which Sybase ASE is still failing.
The results of all stmt.execute() invocations are now processed, so the exceptions raised by the Sybase ASE instance for failed invocations of sp_delete() (including permission denied) are now propagated through the Java exception hierarchy as they are for processing other statements.
The ReplaceComplexObjects test for CFDbTest 2.1 is now passed by SAP/Sybase ASE 16.0. I had removed SAP/Sybase ASE from the support list out of frustration, because it raises exceptions if you modify any of the tables behind a complex query, invalidating your cursor. Without that ability, I could not implement DelDep and ClearDep support.
A few days ago I realized this was a situation that could be addressed through Sybase #tempdb transaction tables. Instead of cursoring over the complex query, I now select the complex query into a statement-specific #tmp table, then iterate through the contents of the #tmp table. It's been a long time since I encountered an issue like this, so it took a while to remember how to work around it for Sybase. It has been a long debug session to get this approach working, fraught with many runtime errors pointing me to further code tweaking of the database scripts each time.
That leaves the Chain support to be ported from PostgreSQL to SAP/Sybase ASE after I've debugged this OptFullRange update issue.
The Sybase support is being restored to 2.0 for the 2.1 projects, because I remembered how to deal with the mutable complex join issue in Sybase ASE. The rules will get forward-ported from 2.0 to 2.1 after testing is done and the missing functionality has been added and enhanced. Everything builds OK and the database creation scripts have no unexpanded tags, but I won't be testing the database install scripts until I've done my modifications.
I think I know how to work around the limitation of non-modifiable complex queries in Sybase ASE. What I need to do is declare a transaction-specific temp table for each such modifying loop, select the query into the temp table, and then iterate through a selection of the temp table to perform the processing. That way there is no cursor open on the tables being modified. It'll be big and ugly, but it will work.
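A rough sketch of the idea, just generating illustrative T-SQL text; the table names, statement id, and SQL shape here are made up, and the real database scripts produced by the rule base are far more involved:

```java
// Sketch of the Sybase ASE workaround: instead of cursoring over a complex
// query while deleting, materialize it into a statement-specific #tmp table
// first, then iterate the copy. No cursor stays open on the tables being
// modified, so ASE no longer invalidates it.
class TmpTableWorkaround {
    static String[] buildDelDepSql(String complexQuery, String stmtId) {
        String tmpTable = "#deldep_" + stmtId; // statement-specific temp table
        return new String[] {
            // 1. Materialize the complex query into the temp table.
            "select * into " + tmpTable + " from (" + complexQuery + ") q",
            // 2. Iterate/delete against the snapshot instead of the live join.
            "declare c cursor for select * from " + tmpTable,
            // 3. Clean up the temp table when done.
            "drop table " + tmpTable
        };
    }
}
```

Big and ugly, as promised, but the cursor now walks a private snapshot rather than the mutating base tables.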
Once I'm done with getting the RAM tables supporting create-only chains, I'll resurrect and port the 1.11 Sybase ASE rules to 2.0. As to the RAM chain unlinking for deletes, I won't be implementing that because the primary purpose of the RAM storage is to load a snapshot image of a document, which doesn't normally involve delete processing unless you force the situation to occur with a poorly-written document.
The ChainContainerComponents reference obtains the inherited container relationship for the current table instance. It then resolves the inherited chain specification of the table, and searches the inherited relationships of the referenced container table for a Components relationship targeting the table that defined the chain.
ChainContainerComponents is a specialized construct which is being created specifically to support the coding of attach-to-tail chain behaviour during the creation of a RAM instance. Currently RAM storage doesn't implement chain linking, and that needs to be done before I can make the changes to 2.1's engine code base that I have planned (specifically, explicitly using established chains instead of the implication that primary keys sort in ascending order as objects were added later to a model. This will enable processing against a database-persisted model that has been modified by the user since it was first imported or created.)
This is really what should have been the initial production release, but I was in a rush to try and get out that 666 release just for giggles. There are no unresolved tags, the code for CFDbTest 2.2 looks good, and CFDbTest 2.2 passes its regression tests.
CFBam 2.0 has been refreshed by MSS Code Factory 1.11.12683. This is the full refresh of the code incorporated by MSS Code Factory 2.0.12684.
The CFBam 2.0 code has been partially manufactured by MSS Code Factory 1.11.12683, and the refreshed code used to prepare the CFBam jars and src.zips found in this installer's bin directory. You can use the Eclipse-prepared jars and zips in this bin directory for debugging if need be. Ant-prepared builds do not include debug information (which is how the full bundle of CFBam 2.0 is prepared.)
The CFBam 2.0 attributes SchemaDef.JXMsgRqstSchemaXsdSpec and SchemaDef.JXMsgRspnSchemaXsdSpec were needed to implement the custom JavaXxx bindings. The bindings are already present in 1.11, so there is no need to back-port to 1.10 and remanufacture 1.11.
MSS Code Factory CFBam 2.1 is affected by the same issue and is being similarly rushed through manufacturing so MSS Code Factory 2.1 can be refreshed and distributed while awaiting the full manufacturing and distribution of CFBam 2.1. The changes to the CFBam 2.1 model have already been made and are included in this distribution.
The CFBam 2.2 model is similarly affected and has also been updated in the 2.1 code base.
The generation log file is no longer created; having to create and then delete it was sucking back memory with the backlog buffer, and seeing as I've been running MSS Code Factory in parallel, the log files haven't even been glanced at for a couple of years because I know they're garbage.
Use tee to save the stdout and stderr streams of the manufacturing process instead.
The CFCrm 2.1 model did not properly specify IsXsdContainer="true" for four relationships of the model. This was discovered during the debugging of other issues with the 2.1 engine producing 2.2 code, and re-running the manufacturing of the 2.1 code base. Fortunately only CFCrm 2.1 and CFAcc 2.1 had to be remanufactured after discovering this oversight.
The flags for IsXsdContainer weren't being properly carried through by the SchemaRef imports, resulting in problems with some of the manufactured code. Quite frankly, I don't see how the CFDbTest 2.1 cases could have passed with this bug in place. CFDbTest 2.2 failed right away from the problem, and it was just a carry-through of the same code recompiled with a different version of the CFBam library, so both of them *should* have been failing.
Regardless, it's fixed now. Time to remanufacture everything. *sigh*
I may have mispackaged 2.0.12666, so I'm going to delete it and replace it with this build, which also corrects an oversight in the CFCore 2.1 model (the custom code wasn't specified for GEL Boilerplate instances, resulting in runtime errors.)
This is the initial production release of MSS Code Factory 2.0, and includes migrations of all of the project models that I intend to bring forward. CFGCash and CFUniverse have been abandoned.
The installer also includes a handful of utility scripts that I rely on for day-to-day workflow automation.
In the java/cartridge-2.0 directory of the GIT repository, you'll find a utility script and an ex editor script for automating the bulk of the changes that have to be made to ruleset configurations, should you have created your own custom rulesets. There is no similar automation process for the business application models; you'll just have to compare the 1.11 models with the 2.0 models and apply similar changes. If you're good with an editor like vi, you can make most of the changes to any reasonable BAM in about 2-4 hours.
And yes, I specifically tried to make sure it would be release 666 just to thumb my nose at people who believe in the old wives' tales of 2000-year-old shepherds telling ghost stories around a campfire.
There were missing rules for resolving the Precis and Digits tags from TableCol and IndexCol specifications added to the "any" rules. The problem was discovered while building CFAcc 2.1.
The model for CFBam 2.1 needed to specify the correct SecScope attributes on some tables. The CFCrm 2.1 and CFAcc 2.1 models have been added as well.
The DB/2 LUW rules weren't considering a SecurityScope of None, so manufacturing the newly migrated CFBam 2.1 model produced an error quickly. I've checked the entire rule base for any other cases of this oversight, and I believe the bug is squashed.
CFBam 2.1 is manufacturing on the laptop right now; there is a relatively empty project to receive it at GitHub.com (net-sourceforge-MSSCodeFactory-CFBam-2-1.git).
A rebuild of MSS Code Factory 2.0 with the latest OpenJDK environment for Debian has also been performed, though the CFLib and CFCore jars have not been rebuilt. (What can I say? I'm lazy...)
Alpha 1 completes the migration of the rule configurations from MSS Code Factory 1.11 to MSS Code Factory 2.0. Given an appropriately updated business application model, 2.0 now produces code equivalent to 1.11, give or take a few tweaks I made along the way (such as removing range constraint checking from the database column specifications, tidying up code formatting, and correcting a couple of defects I found during the migration efforts).
Note that the models have not been migrated yet; when they are I'll issue Beta 1.
The creation of 2.0 began on 2014.11.18, 15 days ago. Since then, a total of 302,158 lines of code have been customized and migrated to form the 2.0 release. That's an average of 20,143 lines of code per day, 839 lines per hour, or 14 lines per minute. Remember that while some of that code was just migrated from either a manufactured code base or from the 1.11 code, it all had to be hand edited and tweaked. This was *me* working, not the code factory.
The migrated SQL Server 2012 Express Edition database creation scripts and JDBC implementation passed their CFDbTest 2.1 regression tests.
All of the databases have now been migrated to 2.0. That leaves the Swing implementations to be migrated yet, then I can shift to migrating the CFBam 2.0 model to CFBam 2.1 so I can forward-port the core of the Code Factory to 2.1 and exercise the CFCore 2.1 implementation with it.
I expect bugs in CFCore 2.1. The changes were too drastic for there not to be bugs.
The migrated MySQL database creation scripts and JDBC implementation passed their CFDbTest 2.1 regression tests.
It's rather late to the game, but I just realized I don't cache misses in the client. I'll need to do that to get peak performance and avoid hitting the backing store, which is highly desirable in a client-server or web application environment.
I'm debating whether to add a Tomcat Servlet server for receiving the XMsgRqst messages, and a corresponding client stub that you point at the Servlet's URL, performing puts to send a message and receive an XMsgRspn message body for processing. If I do so, I'll also need to do something about thread-locking the client stub for the Swing prototype GUI. Until you get a response to the outstanding request, the prototype will just crudely block.
I want to add a request message class that sets a semaphore when it queues a message for sending, and then blocks the current thread on that semaphore being cleared. Some form of process latching construct anyhow, so that the invoker of a client stub blocks before returning from the send-receive processing.
When the response is received, the body of the response is set in the request message instance and the semaphore is cleared so that the invoker can accept and process the response to the request.
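That latching construct can be sketched with java.util.concurrent; this is illustrative, not the actual client stub, and a CountDownLatch stands in for the "semaphore" described above:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative latching request message: the invoker blocks until the
// message service thread posts the response body and releases the latch.
class RequestMessage {
    private final String requestBody;
    private volatile String responseBody;
    private final CountDownLatch done = new CountDownLatch(1);

    RequestMessage(String requestBody) { this.requestBody = requestBody; }

    String getRequestBody() { return requestBody; }

    // Called by the message service thread when the XMsgRspn arrives.
    void postResponse(String body) {
        responseBody = body;
        done.countDown(); // clear the latch; wake the blocked invoker
    }

    // Called by the invoker; blocks until the response has been posted.
    String awaitResponse(long timeoutMillis) throws InterruptedException {
        if (!done.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("timed out waiting for response");
        }
        return responseBody;
    }
}
```

A timeout on the await keeps a dead service thread from freezing the invoker forever, which matters given the Swing concern noted below.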
The request messages will be tagged with a client-generated UUID, which is only assumed to be valid within the session. I haven't decided whether to audit requests and responses or not. Doing so would slow down the server quite a bit, but I could see the possibility of extreme security environments wanting to audit all requests and responses in the raw. Maybe I'll make that a configuration option for the SchemaDef, and let you specify whether you want message logging or not.
A single client-side thread will be spun up to process the queued messages. Because of the semaphores on the request message instances, if this thread dies, the Swing client GUI would freeze up.
When the message service thread is about to send a request to the server, it invokes a callback to the Swing desktop to display the "busy" cursor and have client events act accordingly. The custom widgets may need a few more classes added, so that all of them can check whether the desktop is displaying the busy cursor before they act on input events and actions. Maybe I'll even make it play the alert ding, if I can.
I want a status bar on the main Swing desktop, with a message area and a "flashing light" icon that can be toggled on some chunk processing events in the message send and receive calls. I'm presuming I can find an HTTPS request/response processing client package that doesn't require a Tomcat jar. I don't want to embed such heavy objects in the clients if I can avoid it; I just want a formatting and transport API for building a single page post that has the XML request embedded as a text field attribute of the page. I *might* break out common "header" attributes to form attributes as well. I'd also like to hardcode a check to refuse operation over a non-HTTPS/SSL connection. I want "secure communications" to be the default.
I forget whether Tomcat supports JNDI connection pooling or not. I think it does. I hope it does. I also need to look into my options for ensuring only a single request is processed from a cookie-identified client at a time, because that would have the same effect as multi-threaded queueing of requests on the server side. I might also wire up some sort of "Cancel" message that is allowed to be processed at any time, if I can. Then all I need to do is add a volatile "cancelled" attribute to the schema cache instance, and modify the server-side code to check whether that attribute is true before proceeding with each step of a database service operation. That's very much optional, though. I'm quite content to block clients on a single-stream request/response conversation.
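The optional cancellation check could be sketched like this, with a hypothetical stand-in for the schema cache instance and Runnables standing in for the steps of a database service operation:

```java
import java.util.List;

// Sketch of cooperative cancellation for a database service operation:
// a volatile flag on a (hypothetical) schema cache instance, checked
// before each step of the operation.
class SchemaCacheSketch {
    private volatile boolean cancelled = false;

    void cancel() { cancelled = true; }

    // Returns how many steps actually ran before cancellation was noticed.
    int runOperation(List<Runnable> steps) {
        int executed = 0;
        for (Runnable step : steps) {
            if (cancelled) break; // honour a "Cancel" message between steps
            step.run();
            executed++;
        }
        return executed;
    }
}
```

Declaring the flag volatile is what lets a "Cancel" arriving on another thread become visible to the worker without any locking.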
The Code Factory binding rules have been migrated and the resulting code builds cleanly for CFDbTest 2.1.
The core functionality encapsulated in the MssCF package of CFCore has been migrated from 1.11 to 2.1, along with the migration of the business logic from 1.11's GenKbBLObj layer to the Java customization elements of the CFCore 2.1 model itself. The changes made to the MssCF code were relatively trivial compared to the modelling effort.
The custom version of the rules for CFCore 2.1 now produce clean-building code. When I get around to it, the code is ready to receive the migration of the custom portions of CFCore 1.11. But for now I'm shelving CFCore 2.1. I was just bored tonight and couldn't sleep. :P
Life, the Universe, and Everything...
The CFCore 2.1 project will be manufactured by the dedicated cfengine rules, not the generic Java implementation. This version of the rules strips out all of the history objects, and moves the RAM storage module under the same directory tree as the core Java objects. The reason for this is that the .jar file for CFCore 2.1 should be a single jar with a single source zip, the same as it was in 1.11. So the code needs to be structured in a special way, resulting in the need for a dedicated rule set.
The migrated Oracle database creation scripts and JDBC implementation passed their CFDbTest 2.1 regression tests.
The DB/2 LUW JDBC, SAX Parser main, and X(ml)Msg loader mains all passed their regression tests. DB/2 LUW is now ready for use.
The PostgreSQL and X(ml)Msg layer testing for CFDbTest 2.1 has been passed. The PostgreSQL migration is done. Four more databases and the Swing layer to go, and then I can work on migrating the CFCore model from 2.0 and shifting the CFCore code from 1.11 to 2.1.
Once I've got a clean build of CFCore 2.1, I'll be able to migrate the rules for the java+msscf layer. After that, I'll migrate the CFBam 2.0 model to 2.1, get it manufactured for 2.1, and port the main engine code forward to run on CFCore 2.1 instead of 1.11.
Migrating the models and rule bases to 2.1 should be trivial compared to the shift from 1.11 to 2.0.
The code is now synchronized with the latest attributes added to the SchemaDef for Java customization.
The CFDbTest 2.1 PostgreSQL database creation scripts run properly, and the JDBC layer along with the mains for SAX and X(ml)Msg loaders compile as well. I just need to migrate the launcher scripts so I can exercise the PostgreSQL database implementation to make sure it works correctly, then I'll do the runs for CFSecurity 2.1 and CFInternet 2.1.
The PostgreSQL database creation scripts for CFDbTest 2.1 ran cleanly after being fully manufactured by 2.0.12628. The Java code is being remanufactured now, along with the first runs of the PostgreSQL JDBC rules and the mains for SAX and X(ml)Msg loaders for PostgreSQL. When those build and run correctly, I'll post another release.
The PostgreSQL database creation scripts for CFDbTest 2.1 look like they're ready for testing. If testing passes, I'll shift over to the JDBC layer, add the mains for PostgreSQL SAX and X(ml)Msg loaders, and give them a run before I post an update to CFDbTest 2.1.
CFBam 2.0 has been completely remanufactured to capture the additional attributes of SchemaDef.
The java+xmsg layers are now manufactured and build properly for CFDbTest 2.1.
CFDbTest 2.1 now passes the RAM SAX Loader test suite. The SAX parsers now function correctly.
This release produces clean-building structured SAX XML parser code with the appropriate main for running the parser with a RAM storage module. This is sufficient for me to be able to bring over some of the CFDbTest 2.0 test data and scripts to verify that the 2.1 code actually runs properly, at least for in-memory data.
It took all day (give or take a few hours break) to get it working, but MSS Code Factory 2.0 finally produces a clean-building version of the CFDbTest 2.0 model.
The reference Chain has been added to support CFDbTest 2.1 manufacturing.
There were a few changes required before CFInternet 2.1 would manufacture cleanly and build properly, but those issues have been addressed. Next up: a test run of CFDbTest 2.1.
The rules have been updated so that the manufactured code relies on CFLib 2.1 instead of CFLib 1.11. CFSecurity 2.1 has been rebuilt with the updated jar files to verify that the changes are valid.
CFSecurity 2.1 as manufactured by MSS Code Factory 2.0 now builds properly.
All of the expansion tags in the manufactured code have been resolved for the CFSecurity 2.1 model. The code is now ready for a test build.
CFBam 2.0 has been refreshed by MSS Code Factory 1.11.12604 to avoid verb naming conflicts in MSS Code Factory 2.0.
The Java* custom code tags are now properly expanded using the factory engine so that the resulting code can be adapted to the name of the schema being manufactured. The ProjectDescription verbs are also properly expanded by this release.
I'm almost ready to do a test build of the manufactured code, but I'm still in the process of manually checking for unresolved expansions in the code. So far I have yet to deal with the various DbXxxName expansions at a minimum. There is also some work to be done on the rules themselves to deal with numeric Min/Max specifications.
With today's changes, the main command line code has grown to 114,807 lines, an increase of 8,771 lines in about 4 hours of intense manual coding. Granted a lot of it was copy-paste-edit code, but that's still 2,192 lines per hour or 36 lines a minute -- by HAND.
The license processing has been corrected along with a number of other issues. However, along the way I discovered a few things that need to be addressed:
1) The various Java* customization tags need to be expanded before being embedded. As the tags are using the existing Java* names, I'll prefix the new verbs with "Exp" and modify the rules accordingly after I've done so. Alas, this means a big pile of custom verbs that I'd hoped to avoid.
2) The various MinValue/MaxValue specifications for numeric types need to be used differently in the rules. If a value is specified, it should be used. Otherwise the rules need to default to the min/max for the data type. Previously that logic was in the custom verbs of the engine; it'll have to move to the rules. I think I'll name them "EffMinValue" and "EffMaxValue" for the numeric types, and stuff them in the any/Verbs.xml file.
3) The DbSchemaName, DbTableName, and DbColumnName tags need to be emulated by rules which check if the object defines a DbName and uses that if specified, defaulting to the objects generic Name if no DbName was specified. These rules don't need an Eff prefix because I don't use DbName directly in the rule base, so I have a "hook point" for the rule logic.
4) ShortName needs similar processing to DbName.
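Items 2 through 4 boil down to simple defaulting logic; a sketch with illustrative names (the real rules operate on model tags in the rule base, not Java objects, and the method names here are made up):

```java
// Sketch of the defaulting logic moving into the rules: an effective
// min/max that falls back to the data type's bounds, and a DbName that
// falls back to the generic Name. Class and method names are illustrative.
class RuleDefaults {
    // EffMinValue / EffMaxValue for a hypothetical INT16 column.
    static short effMinValue(Short specified) {
        return (specified != null) ? specified : Short.MIN_VALUE;
    }

    static short effMaxValue(Short specified) {
        return (specified != null) ? specified : Short.MAX_VALUE;
    }

    // DbColumnName: use DbName if the modeller specified one, else Name.
    static String dbColumnName(String dbName, String name) {
        return (dbName != null && !dbName.isEmpty()) ? dbName : name;
    }
}
```

The same fall-through pattern covers ShortName: prefer the explicit specification, default to the generic Name when it is absent.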
The 2.0 projects have been remanufactured by MSS Code Factory 1.11.12598 and repackaged with the latest version of CFCore 1.11, correcting defects in the manufactured MssCF GEL engine support.
Since 2014.11.09, 1,106,336 lines of code have been added to the 2.0 projects, including MSS Code Factory 2.0. Over 15 days, that works out to 73,755 lines of code per day, 3073 lines per hour, or 51 lines a minute. Sweet!
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
With the getValueObject() overloads added to the CFBam 2.0 MssCF binding objects, MSS Code Factory 2.0 now runs through the CFSecurity 2.1 model without throwing any exceptions or logging any error messages. The code produced is not valid yet, but the core runs properly.
There are still going to be a number of custom verbs required before the whole of the 1.11 rule set could be migrated; the Java rules are the simplest of the bunch and use the fewest features of the engine.
In order to correct runtime errors with MSS Code Factory 2.0, the msscf jar has been remanufactured by MSS Code Factory 1.11.12598 Service Pack 8 to produce the missing getValueObject() overloads for the bindings.
There are fewer and fewer errors being thrown or reported, but it's not there yet.
MSS Code Factory 2.0 now weighs in at 105,637 lines of code, an increase of 20,738 lines in the past 24 hours, which works out to 864 lines per hour or 14 lines a minute. Note that this is all hand-migrated code at this point, not increases due to manufacturing.
A lot more code gets successfully manufactured now, but I'm not done yet. The main reason for this release is to snapshot the update from CFCore 1.11.12399 to 1.11.12594, which corrects a bug in the creation of contexts. MSS Code Factory 2.0 was exercising that particular code case, though it has never cropped up with the 1.11 engine to date.
The CFSecurity 2.1 model manufactures without throwing exceptions, so it creates a file for each of the defined members of the Java set. However, the contents of the files are far from valid, as there are many unresolved verbs at this point in time. Some will be addressed by renaming to manufactured verbs from CFBam 2.0, others will be addressed through custom code. But it's 02h00 right now, so I'm not going to worry about the details.
The Java code is now trying to produce the proper file names for the CFSecurity 2.1 test model, but due to later exceptions the files don't actually get created in most cases, just logged to the console as being manufactured. This change from the old engine's behaviour means that in the case of severe errors, the old versions of files will *not* get overwritten.
With the addition of some custom bindings and an iterator, the manufacturing tests now create a directory full of garbage files. The directory names are still wrong, the files are full of errors and aren't even named properly, but it *runs* to a degree.
The migrated Java rules don't process correctly yet, of course, but they do load successfully and the core engine/parser have been updated to attempt the processing of the loaded model.
The CFUniverse 2.0 code was too big to build again, so the schema references to CFAsterisk and CFFreeswitch have been removed. I'm seriously considering dropping the CFUniverse 2.0 project entirely due to the limitations imposed by Java itself. Were I using C++ on a 64-bit platform, I wouldn't be encountering 32-bit address space limitations (even with 64-bit JVMs, the Java specs use 32-bit sizing on things like the string constant spaces.)
With the removal of those two projects and the addition of the new CFBam objects, CFUniverse 2.0 now weighs in at xxx lines, for an additional 37,538 lines of code since 2014.11.09. As it only took two days to prepare CFUniverse after the CFBam changes (two runs, one per day), that's 18,769 lines of code per day, 782 lines per hour, or 13 lines a minute.
CFBam 2.0 has been refreshed by MSS Code Factory 1.11.12585 and is now completely in sync with the code used for MSS Code Factory 2.0.12586 CLI.
With the cleanup of CFBam 2.0 by MSS Code Factory 1.11.12585, the initial release of MSS Code Factory 2.0's command line interface is now ready. There will be enhancements to it as custom verbs are migrated from the 1.11 code base over time, but it's ready for me to start migrating the Java rules from 1.11 to 2.0.
The tasks ahead are a pretty big list of big items, but it's fun for me to work on this beastie of mine. :)
Next on my plate is migrating the 1.11 Java rule base to 2.0, and adding any customized verbs to the 2.0 engine that I can't implement in 2.0 using GEL rules. That will maximize the number of function points that derive from the manufactured GEL code rather than from hand-written code. All of those function points in 1.11 are hand-written verbs, so a lot of the modelled attributes aren't even accessible in the 1.11 rules, despite being defined in the models.
That 2.0 rule base will initially target a CFLib/CFCore 1.11 code base, as do the 1.11 rules. Once I'm at a testable clean build of the 2.1 code, I'll start the implementation of and migration to CFLib/CFCore 2.1.
After migrating the code to CFLib/CFCore 2.1, I'll migrate the engine and CLI from 2.0 to 2.1, and migrate the core and Java rules from 2.0 to 2.1. This should be a relatively painless rule migration, because it's the underpinnings of the code that will be changing, not the GEL XML syntax. i.e., the implementation is changing, not the interface. Or at least not enough to be significant (ICFLibAnyObj2 will become ICFLibAnyObj; I don't want to do that now because it would break the 1.11 CFCore code by mandating a bunch of extra methods that I don't want to implement in the 1.11 code base.)
I expect testing at this point to take a fair amount of time, because the structure of the code is changing significantly, so I'm expecting bugs in the implementation. Things *will* be missed and broken along the way. But in the end, I'll have 2.1 producing 2.2 code, which should match the 2.1 code if the engine in CFCore has been successfully migrated. One thing I'll have to take care of along the way is adding Chain link establishment to the 2.1 manufactured RAM persistence code that the engine runs on. I want the core of the engine to be able to run directly against models that are persisted in a database, not just loaded from text files.
Then comes the fun part -- a real 2.1 GUI, heavily customized from the code manufactured by 2.0. I won't even *try* to migrate the 2.0 GUI rules to 2.1 until I've got a 2.1 GUI I'm happy with. Among the features I intend to support with the GUI are proper logins/security through custom code and messaging, import of text models (by redirecting the RAM backend to the appropriate database engine, so there won't be *that* much code to implement for imports), and the ability to export a model from the database.
Although I intend to allow the engine to run against the database directly in order to manufacture code, the recommended mode of operation will be to use the GUI to edit the model in the database, export it, and then run the 2.1 CLI against the exported model. That way you can easily store the model in a code repository as part of the "standard" work flow.
Once I've got the GUI implemented, I'll migrate the GUI rules from 2.0 to 2.1 so that it can produce everything that 1.11 does, but on a 2.1 core and with a GUI for editing models.
That will leave me one more significant task to complete: the implementation of a JEE server to receive and process the XML messages that are used internally by the existing GUI code, and an appropriate client messaging implementation for the 2.1 GUI. (I will not be allowing the engine itself to run over the web. At least I don't *think* I will.)
So that's 8 major phases to the 2.1 project. Given that some of them will take months to complete, I'd say 2.1 won't be released for 1-2 years. It'll keep me busy and out of trouble for a while. :)
The MSS Code Factory 2.0 CLI now successfully parses the CFDbTest 2.1 model, so the parser for 2.0 is done until I need to add new features or discover bugs that need to be fixed. Instead of the parser, I'll now be focusing on migrating the Java rules to 2.0, and migrating any custom verbs from 1.11 that I can't emulate through GEL rules.
The changes to the rule base will be dramatic and widespread -- 2.0 is emphatically not compatible with 1.11, even though the same XML format and GEL syntax are used.
The hand-migrated and template-manufactured-and-edited code for the 2.0 CLI now weighs in at 86,899 lines. If you compare that to the 11,000,000+ lines of CFBam 2.0 manufactured code, which runs unmodified, you can see that the amount of customization required to leverage the manufactured code in a relatively complex application is quite small -- less than 1% of the total code base. Most code production tools seem to target a 10% customization cutoff ratio. I think I beat that.
It's worth noting that it only took 4 days to copy-paste-edit the structured SAX XML parser code produced by 1.11, migrate the processing logic code from the 1.11 parser, and produce a working 2.0 MSSBam model parser, test it, debug it, and ship it.
With a successful parse of the CFInternet 2.1 model, SuperClassRelation elements and SchemaRef elements are now properly parsed. That leaves the DelDeps, ClearDeps, and PopDeps to be exercised by a refreshed CFDbTest 2.1 model.
The latest code has been manufactured by MSS Code Factory 1.11.12577, and incorporates all of the model changes that were needed by the 2.0 CLI.
In the past five days, 865,711 lines of code have been added to CFBam 2.0, bringing its total up to 11,831,989 lines. That's 173,142 lines per day, 7,214 lines per hour, or 120 lines per minute.
Combined with the CLI code, that's a grand total of 950,747 new and hand-migrated lines of code, or 190,149 lines per day, 7,922 lines per hour, and 132 lines per minute.
That's a new productivity record, even for the code factory!
All of the parser elements from the MSS Code Factory 1.11 MSSBam model parser have been migrated to 2.0, along with support for the additional attributes of a 2.0 model.
A full manufacturing run for CFBam 2.0 has been started to capture all of the model changes that were made in order to get the MSS Code Factory 2.0 CLI to compile.
Note that there is a lot of untested code -- SchemaRef, DelDeps, ClearDeps, and PopDepChain code has not been exercised yet, and won't be validated until I've successfully migrated the CFInternet 2.1 and CFDbTest 2.1 models from 2.0. SchemaRef will be tested first, because CFInternet references CFSecurity, but the remaining features have their test cases in the CFDbTest model.
That brings the 2.0 implementation up to 85,036 lines. That's an additional 13,378 lines migrated in about 12 hours, or 1,114 lines per hour, and 18 lines a minute. Considering how drastic the changes were, that's pretty good progress for the day.
This is yesterday's code remanufacturing. I already have found some additional attributes that needed to be added to support DefSchema (defining schema) relationships required by the SchemaRef element parser.
I renamed the CFCli 2.0 project to MSS Code Factory 2.0 because it is my intent to include all of the eventual executables in one distribution bundle, rather than making each executable a separate project.
With the CFBam 2.0 core object code manufactured by MSS Code Factory 1.11.12574, the parser now successfully processes the CFSecurity 2.1 model. There is still a long way to go; schema imports haven't been migrated from 1.11 yet, and the Chain and PopDepChain support has not been tested yet. Equally important, most of the atomic types have not been parsed yet.
I'll be migrating CFDbTest 2.0 to CFDbTest 2.1 next in order to provide test data for those remaining features of the parser.
The command line processor and its supporting code weigh in at 71,658 lines, which were coded in 3 days with the assistance of MSS Code Factory 1.11 (I copied and modified portions of the CFBam 2.0 XML structured SAX parser code for the CFBamXmlLoader.) That's 23,886 lines per day, 995 lines per hour, or 16 lines a minute.
With the remanufactured support for the Table container of a Relation, the DelDep processing now works properly. There currently is no test data for the PopDep processing, because there are no dependent objects owned by Cluster or Tenant specifications in CFSecurity. I'll have to wait to migrate CFDbTest from 2.0 to 2.1 to test that feature, and to exercise the parsers for the remaining atomic types.
Similarly, I've enhanced the code for the Chain support, but there are no chains in CFSecurity to use for testing.
At this point, I'm encountering problems parsing GenDef types which specify a Dispenser relationship. I neglected to model the Dispenser table relationships for those objects, so I'm waiting for another manufacturing run of CFBam 2.0 before I can do any further testing. However, at this point I've parsed up to line 2322 of CFSecurity 2.1's model, so I should be close to being able to parse the whole file. Hopefully I'll be there in a few more hours, and can start migrating CFDbTest 2.0.
The Relation and RelationCol parsers now work, as does the TableAddendum element that I'd mentioned would replace the 1.11 TableDelDeps and TableRelations elements. I'm at a holding point waiting for the CFBam 2.0 object model to be remanufactured to correct a modelling flaw with the container table of a Relation, which is preventing the relations from being resolved by the DelDep chain resolver code.
I've also coded the SubPopDeps, but haven't tested them yet, either, as they don't appear until later in the code. They'll have the same problem that the DelDeps do, though.
The latest code as manufactured by MSS Code Factory 1.11.12570, and used by CFCli 2.0.12569.
Sexy 69, a release that gives as good as it gets. :)
The tables and indexes now parse successfully, along with the "extended" customization attributes (the various Java* text elements of the SchemaDef and Table specifications.)
Of course I'm avoiding the hairiest migration so far -- the schema imports. But first I have to get basic schema/model parsing working.
Good progress was made on the new MSSBam parser today. It can now process the sample CFSecurity 2.1 model up until it encounters the first JavaObjInterface specification. I haven't coded the parsers for those optional elements of the Table and SchemaDef yet, so the parser reports that there is no such mapping for the element handler and the parse dies around line 647 of the model.
Still, that's a lot of parsing that's been accomplished so far. I know there are other issues which will be coming up after I add the Java* parsers as well. One step at a time. I've gotten enough done for the day; I'll tackle more tomorrow.
CFBam 2.0 has been updated and refreshed to capture some missing attributes required by the MSS Code Factory migration of the 1.11 code to the 2.0 framework. The jars from CFBam 2.0 are used by CFCli 2.0, which provides the command line interface for running the 2.0 version of MSS Code Factory.
CFCli 2.0 successfully loads its sample rules, but the parser for the Business Application Models is only a rough copy-and-rename at this point with minimal enhancements applied in order to merge the manufactured SAX structured parser with the MSS Code Factory framework. It does *not* come anywhere near to working at this point.
I've also created some draft rules and a first-cut migration of the CFSecurity 2.0 model to CFSecurity 2.1. It shows some of the changes to the modelling syntax that I plan on implementing. My goal with 2.0 is to reuse as much of the manufactured SAX structured CFBam parser as possible; right now I've just copied and renamed it, but there is a long way to go between that and what I envision delivering.
So this is it, then. The start of 2.0.
No changes to the source code, just a rebuild and repackaging with CFCore 1.11.12399 for consistency across all of the 1.11 and 2.0 projects.
It's worth noting that with the fixes to the TZTime handling, MySQL now passes all of the CFDbTest 2.0 tests.
I do not expect to release any further MSS Code Factory updates in the near future.
This is the remanufactured, rebuilt, and repackaged code as produced by MSS Code Factory 1.11.12558 Service Pack 6.
Since 2014.10.29 when Service Pack 5 was released, an additional 1,328,714 lines of code have been added to the 2.0 projects, bringing the grand total up to 45,241,856 lines. Over 11 days, that works out to 120,792 lines per day, 5,033 lines per hour, or 83 lines per minute -- presuming I'd been banging away 24x7.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
With SQL Server now passing the Chain move up/down tests, Service Pack 6 has been released. Refreshed builds of all the 2.0 projects can be expected in the next 48 hours or so.
I had already edited the rules while waiting for MySQL to be manufactured, so it only took two hours to produce an additional 37,354 lines of SQL Server database scripts, install them, run my tests, fix the JDBC bugs, and package up the release for delivery. That brings the total lines of CFDbTest 2.0 code up to 5,726,783. That works out to 18,677 lines per hour, or 311 lines per minute.
The migration of the PostgreSQL support for Chain MoveUp/MoveDown operations to MySQL has been completed and passes all tests.
Over the past four hours, another 70,126 lines of MySQL and some SQL Server JDBC code have been added, bringing the total for CFDbTest 2.0 up to 5,689,429 lines of code. That's 17,531 lines per hour, or 292 lines per minute. Not as fast as the Oracle migration went, but still none too shabby. :D
The Oracle migration of the MoveUp/MoveDown support was about as pain-free as one could hope for while banging on a keyboard. I got it done in under three hours, including writing all the rules, manufacturing the code, installing the database, and running the tests.
In that three hours, 56,072 lines of new code were added to CFDbTest 2.0, bringing the total up to 5,619,303 lines of code. That works out to 18,690 lines per hour, or 311 lines per minute. I'm pretty sure that's my personal best, even working with the factory. Whoop!
This is the test suite for DB/2 LUW Chain support. All of the tests pass, including replacement of complex objects, updates of complex objects, and moving of chain data up and down in the list.
The errors for updating PostgreSQL Chain data have also been corrected -- the problem was in the XML SAX parser for structured data, not in the database or object layers.
In the past 12 hours of a long, long night, 48,071 lines of code have been added to CFDbTest 2.0, bringing the total up to 5,563,231 lines of code. That's 4,005 lines per hour, or 66 lines per minute. Not too shabby. :D
The deletes leave the database corrupt, resulting in the creates being unable to locate the head and tail of the Chain, such that 0034-CreateComplexObjects produces null values for prev/next links on the second run. 0035-ReplaceComplexObjects fails as well.
This is the test suite for the PostgreSQL MoveUp/MoveDown Chain implementation. It also corrects some serious object layer caching bugs.
In the past 24 hours, 39,511 lines of code have been added to CFDbTest 2.0, bringing the total up to 5,515,160 lines. That's 1,646 lines per hour, or 27 lines per minute. Now that's what I call a good day of coding and debugging -- considering I *did* get about 9-10 hours of sleep during that period.
The Swing prototype GUI has been fleshed out and debugged to the point where it can successfully invoke the MoveUp/MoveDown stored procedures, but they are buggy and corrupt the database. More debugging is required.
The Chain relationships are now hidden in the object list panels, and the reference widgets for chain relationships in the attribute panels are now permanently disabled.
A couple of bugs in the GUI were also corrected, which were causing tracebacks when bringing up the detail editors for class hierarchy objects.
The list boxes in the Swing prototype GUI are now sorted by either the Chain or alphabetically by the qualified names of the displayed objects.
If you install the PostgreSQL database and run the CFDbTestRunPgSqlTests script, the schema for P0035, TableB values show data being sorted by chains. The list of tables for the schema shows sorting by qualified name.
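The two orderings described above can be sketched with a toy model. This is an illustrative Java sketch, not the manufactured Obj classes of the Swing prototype; the `Item` type and method names are hypothetical:

```java
import java.util.*;

// Hypothetical stand-in for a displayed model object with chain links
// and a qualified name; the real GUI uses the manufactured Obj classes.
public class ListSortSketch {
    static final class Item {
        final String qualifiedName;
        Item prev; // chain links
        Item next;
        Item(String qualifiedName) { this.qualifiedName = qualifiedName; }
    }

    // Alphabetical ordering by qualified name, as used for table lists.
    static List<Item> sortByQualifiedName(Collection<Item> items) {
        List<Item> sorted = new ArrayList<>(items);
        sorted.sort(Comparator.comparing(i -> i.qualifiedName));
        return sorted;
    }

    // Chain ordering: find the head (no prev link) and walk next links.
    static List<Item> sortByChain(Collection<Item> items) {
        Item head = null;
        for (Item i : items) {
            if (i.prev == null) { head = i; break; }
        }
        List<Item> ordered = new ArrayList<>();
        for (Item cur = head; cur != null; cur = cur.next) {
            ordered.add(cur);
        }
        return ordered;
    }

    public static void main(String[] args) {
        Item a = new Item("SchemaDef.TableB");
        Item b = new Item("SchemaDef.TableA");
        a.next = b; b.prev = a; // chain order: a, then b
        // Alphabetical order puts TableA first; chain order puts TableB first.
        System.out.println(sortByQualifiedName(Arrays.asList(a, b)).get(0).qualifiedName);
        System.out.println(sortByChain(Arrays.asList(a, b)).get(0).qualifiedName);
    }
}
```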
In the past 24 hours, another 32,042 lines of code have been added to CFDbTest 2.0, bringing the total up to 5,475,649. That's 1,335 lines per hour or 22 lines per minute.
The MoveUp/MoveDown JDBC bindings for PostgreSQL have been added and clean compile.
The MoveUp/MoveDown hooks have been coded, but the JDBC layers, RAM storage, and X(ml)Msg Client implementations are just "Not Implemented Yet" stubs at this point.
The stored procedures sp_movedown_dbtablename() have been installed cleanly by the dbcreate scripts for PostgreSQL, but they have not been executed yet and are likely to incorporate some runtime errors.
In the past two days I've added 45,406 new lines of PostgreSQL code to CFDbTest 2.0, bringing its line count up to 5,443,607. 22,703 lines per day, 945 lines per hour, or 15 lines a minute.
This CFDbTest 2.0 release adds the crsp_moveup_dbtablename.pgsql scripts to the PostgreSQL dbcreate directory, and invokes them. There is nothing to call these new routines, so they've only been clean-installed, not actually runtime-tested.
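The move-down operation these stored procedures implement amounts to a doubly-linked-list rewire. Here's a toy Java model of that rewire, assuming the prev/next link semantics described above; it is not the actual SQL, and the names are illustrative:

```java
import java.util.*;

// Toy doubly-linked chain node; the real implementation is a set of
// per-table crsp_moveup/crsp_movedown stored procedures operating on
// prev/next key columns.
public class ChainMoveSketch {
    static final class Node {
        final String id;
        Node prev, next;
        Node(String id) { this.id = id; }
    }

    // Move 'n' one position toward the tail: before, A <-> n <-> B;
    // after, A <-> B <-> n.  Six links may need rewiring, including
    // the neighbours on either side of the affected pair.
    static void moveDown(Node n) {
        Node b = n.next;
        if (b == null) return;       // already at the tail: nothing to do
        Node a = n.prev;             // may be null if n was the head
        if (a != null) a.next = b;
        b.prev = a;
        n.next = b.next;
        if (b.next != null) b.next.prev = n;
        b.next = n;
        n.prev = b;
    }

    static List<String> walk(Node head) {
        List<String> ids = new ArrayList<>();
        for (Node cur = head; cur != null; cur = cur.next) ids.add(cur.id);
        return ids;
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C");
        a.next = b; b.prev = a; b.next = c; c.prev = b;
        moveDown(a);                  // the old head moves down one slot
        System.out.println(walk(b));  // prints [B, A, C]
    }
}
```

MoveUp is the mirror image: moving `n` up is the same as moving `n.prev` down.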
All of the 2.0 projects have been remanufactured, rebuilt, and repacked with the code produced by MSS Code Factory 1.11.12529 Service Pack 5. The most important change is that all of the SQL Server 2012 code now works as intended, including the security checks for delete operations.
The only databases with any functionality restrictions are MySQL (which does not support the full Java date-time range) and DB/2 LUW (whose BLOB support is broken and worked around using base-64 encoded TEXT fields.)
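The DB/2 LUW workaround mentioned above stores binary values in TEXT columns as base-64 strings. A minimal sketch of that round trip, assuming `java.util.Base64` (the method names here are illustrative, not the manufactured API):

```java
import java.util.Base64;
import java.nio.charset.StandardCharsets;

// Sketch of the DB/2 LUW BLOB workaround: binary attribute values are
// stored in TEXT columns as base-64 strings and decoded on read.
public class BlobAsTextSketch {
    static String encodeForTextColumn(byte[] blob) {
        return Base64.getEncoder().encodeToString(blob);
    }

    static byte[] decodeFromTextColumn(String text) {
        return Base64.getDecoder().decode(text);
    }

    public static void main(String[] args) {
        byte[] original = "binary payload".getBytes(StandardCharsets.UTF_8);
        String stored = encodeForTextColumn(original);
        byte[] roundTripped = decodeFromTextColumn(stored);
        System.out.println(new String(roundTripped, StandardCharsets.UTF_8));
    }
}
```

The cost is roughly a 4/3 storage expansion plus encode/decode time on every read and write, which is why it's a workaround rather than a preferred design.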
Since 2014.10.20, an additional 551,995 lines of code have been created. That works out to 61,332 lines per day, 2,555 lines per hour, or 42 lines per minute. Most of the projects have shrunk because I cleaned up the stored procedures a fair bit, removing spurious blank lines, unused variable declarations, and adding extra blank lines to make code more readable. The changes are often only a few lines per file, but when you're dealing with thousands of files it adds up. The projects which grew are the ones that implemented the new essentials of the Chain support, establishing chain links in the sp_create() implementations and breaking the links in sp_delete().
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
This is the Service Pack 5 test suite as manufactured by MSS Code Factory 1.11.12527.
The SQL Server stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. SQL Server now passes the "Replace Complex Objects" test with CFDbTest 2.0.
SQL Server's JDBC code now properly detects errors and exceptions for the various sp_delete() stored procedures (by primary key and by index.) As a result, it now passes the security check testing that it used to fail at for the delete permission denied test.
All of the databases are now functional.
In the past 24 hours, an additional 21,997 lines of CFDbTest 2.0 code have been created and debugged, including debugging of some existing issues that weren't problems with the new code. That brings the total for CFDbTest 2.0 up to 5,445,432 lines. That works out to 916 lines per hour, or 15 lines per minute. And I *did* spend some time sleeping. :)
The MySQL stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. MySQL now passes the "Replace Complex Objects" test with CFDbTest 2.0.
The Oracle stored procedure del_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. Oracle now passes the "Replace Complex Objects" test with CFDbTest 2.0.
The DB/2 LUW stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. DB/2 LUW now passes the "Replace Complex Objects" test with CFDbTest 2.0.
The PostgreSQL stored procedure sp_delete_dbtablename() has been updated to unlink the deletion candidate from the chain before deleting it. PostgreSQL now passes the "Replace Complex Objects" test.
It took an additional 5,008 lines of stored procedure code to implement the functionality, and took about 6 hours to code and test. The total for CFDbTest 2.0 is now 5,429,443 lines.
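The unlink-before-delete fix described above is, in essence, a doubly-linked-list splice. This toy Java node stands in for a database row with prev/next key columns; the real fix lives in the per-database sp_delete procedures:

```java
// Toy chain node standing in for a database row with prev/next key
// columns; names are illustrative.
public class ChainUnlinkSketch {
    static final class Node {
        final String id;
        Node prev, next;
        Node(String id) { this.id = id; }
    }

    // The essence of the fix: splice the deletion candidate out of the
    // chain *before* the row is removed, so its neighbours point at
    // each other instead of at a vanished key.
    static void unlink(Node n) {
        if (n.prev != null) n.prev.next = n.next;
        if (n.next != null) n.next.prev = n.prev;
        n.prev = null;
        n.next = null;
    }

    public static void main(String[] args) {
        Node a = new Node("A"), b = new Node("B"), c = new Node("C");
        a.next = b; b.prev = a; b.next = c; c.prev = b;
        unlink(b);                     // delete the middle element
        System.out.println(a.next.id); // prints C
        System.out.println(c.prev.id); // prints A
    }
}
```

Deleting without the splice is what left the chain head/tail unresolvable in the earlier failing test runs.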
This is the test suite for exercising the chain link establishment for SQL Server. All of the databases now establish the prev/next links of a chain when objects are created, and therefore fail the "Replace Complex Objects" tests with error throws.
SQL Server now establishes prev/next links for chains. I've also modified the SQL Server JDBC to properly report errors thrown by the database engine, so it now passes the delete permission check tests.
What's odd is that instead of throwing an exception about an integrity constraint, SQL Server instead seems to continue on with processing after the delete fails, and then gets an error because a cursor is still in existence when the next iteration of the delete-by-index loop tries to remove an instance. I'm not sure how to address that problem -- I kind of count on a stored procedure throwing an exception and stopping execution when an exception is raised. Maybe I need to modify the stored procedures to explicitly check to see that an instance has been deleted (by analyzing the SQL status variable), and manually raise an exception to get the processing to stop. I'm not going to worry about it right now -- I'll deal with such changes when I'm doing the modifications to the delete processing for breaking the chain links.
Since 2014.10.24, an additional 52,473 lines of CFDbTest 2.0 code have been created, bringing the total to 5,424,435 lines. That's 26,236 lines per day, 1,093 lines per hour, or 18 lines per minute.
MySQL now implements prev/next chain links in sp_create().
DB/2 LUW was not properly establishing the prev/next links after all. That has been corrected and verified through database inspections.
Oracle now implements chain links in sp_create(), and has been verified through database inspection after running the CFDbTest 2.0 test suite.
The JDBC client layers for Microsoft SQL Server, Oracle, and PostgreSQL have been updated to re-read the created instance when client-side code is required for BLOB or TEXT attributes. The changes applied by the stored procedures could not be counted on to be consistently applied without a re-read. DB/2 LUW and MySQL allow TEXT and BLOB parameters to their stored procedures, and so did not require JDBC changes.
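The re-read pattern above can be sketched with an in-memory stand-in for the database. Everything here is hypothetical scaffolding, not the manufactured JDBC layer; the point is only the decision to re-read by primary key when server-side code may have rewritten LOB attributes:

```java
import java.util.*;

// In-memory stand-in for the JDBC pattern: when a table has BLOB/TEXT
// attributes that the stored procedure may rewrite server-side, the
// client re-reads the row by primary key after the create call instead
// of trusting its local copy.  All names are illustrative.
public class ReReadAfterCreateSketch {
    static final Map<Integer, String> serverRows = new HashMap<>();

    // Simulates a create proc that modifies the payload server-side.
    static int createRow(String payload) {
        int pk = serverRows.size() + 1;
        serverRows.put(pk, payload.trim());  // server-side change
        return pk;
    }

    static String readByPk(int pk) {
        return serverRows.get(pk);
    }

    static String createAndRead(String payload, boolean hasLobAttributes) {
        int pk = createRow(payload);
        // Without the re-read, the client would keep its unmodified copy.
        return hasLobAttributes ? readByPk(pk) : payload;
    }

    public static void main(String[] args) {
        System.out.println(createAndRead("  text body  ", true)); // prints "text body"
    }
}
```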
This is the test suite for the DB/2 LUW establishment of chain links as manufactured by MSS Code Factory 1.11.12508. The "Replace Complex Objects" test now fails as expected.
The Prev links were not being properly established so this release has been pulled. An updated release with support for both DB/2 LUW and Oracle will be issued once revalidation is complete.
This is the test suite for the PostgreSQL establishment of Chain links as manufactured by MSS Code Factory 1.11.12506. The "Replace Complex Objects" test now fails as expected.
This is the test suite for the container latching required by the Chain-enabled stored procedures. All of the databases have passed the tests.
CFDbTest 2.0 now weighs in at 5,371,962 lines of code, an additional 9,614 lines for today. It was about a 10 hour day, so that works out to 961 lines per hour, or 16 lines per minute. Not too shabby.
The CFDbTest 2.0 model was updated and remanufactured with MSS Code Factory 1.11.12499 in order to be able to develop and test the Chain code. None of the Chain rules have been developed yet, so this build will run. Future builds until SP5 are, however, "questionable" at best. Stick with the SP4 code until further notice.
Wow. That's quite the increase for four hours of processing and build time. Another 776,262 lines of code for the Chain test case in one day. 32,344 lines per hour or 539 lines a minute if it had taken a full 24 hours to make the changes. However, as it only took 4 hours, that's 194,065 lines per hour and 3,234 lines per minute. The total for CFDbTest 2.0 is now 5,362,348 lines (it had been 4,586,086.)
The latest code as manufactured by MSS Code Factory 1.11.12497 Service Pack 4.
Since 2014.10.15, over 20,000 lines per project have been added, but because the SAP/Sybase ASE support was dropped, the line counts have dropped by 4,696,791 lines. Sure, deleting lines is easy, but that's roughly a million lines of code change per day. Pretty cool, eh?
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
Microsoft SQL Server passes the ClearDep tests provided by "Replace Complex Objects".
Due to existing issues with Sybase ASE's "sp_delete_schemadef()" stored procedure failing without producing any errors, Sybase does not pass the "Replace Complex Objects" test that exercises the ClearDep support.
MySQL passes the ClearDep tests provided by "Replace Complex Objects".
DB/2 LUW passes the ClearDep tests provided by "Replace Complex Objects".
Oracle passes the ClearDep tests provided by "Replace Complex Objects".
PostgreSQL passes the ClearDep tests provided by "Replace Complex Objects".
As intended, this test suite fails gloriously on "Replace Complex Objects" because the Relation.Narrowed specifications haven't been cleared, causing integrity check violations. Now I can focus on getting the clearing code installed and tested for PostgreSQL before I migrate it to the other databases.
This release provides testing of the sub-object reference clearing for all of the databases. It also incorporates the fixes to the DelDep implementations that were discovered during execution of those tests. All databases pass the enhanced "Replace Complex Objects" test except for SAP/Sybase ASE.
Oracle now passes the sub-object reference clearing tests.
MySQL now passes the sub-object reference clearing tests.
I've also remanufactured the PostgreSQL and DB/2 LUW database scripts so that they include the CFSecurity 2.0 fixes. They should both install cleanly now (PostgreSQL allows for recursive stored procedures so it had been working already, though the code was in error.)
DB/2 LUW now passes the sub-object reference clearing tests. There is an error in the sp_delete_secform.sql code, but I'm not going to worry about that for now -- I have to remanufacture all the project database scripts for the next release anyhow, and that erroneous stored procedure doesn't affect the testing I was interested in performing.
Now that the PostgreSQL template code passes the "Replace Complex Objects" test with the sub-object reference test data, I can move forward with propagating that template to the other databases. Note that the other database creation scripts are currently out of sync with the JDBC code and application objects, so they will crash gloriously if you try to run them.
The CFDbTest 2.0 model has been updated to incorporate a test case for the dependency on child object definitions (by adding a "PIndex" reference to the Table object that references the component TableIndex specifications.) As intended, the "Replace Complex Objects" test case fails on a referential integrity check at runtime.
This is the test suite for exercising the cache forgetting sub-objects on deletion of a complex object.
A total of 14,414 lines have been added to CFDbTest 2.0, bringing the total up from 5,064,705 to 5,079,119 lines. 600 lines per hour, 10 per minute if I'd been working 24 hours instead of 8. But for the 8 hours I put in, 1801 lines per hour and 30 lines per minute. Not a bad day's work. :)
This is the latest code as manufactured by MSS Code Factory 1.11.12458 Service Pack 3, which adds method signatures that take a boolean "forceRead" parameter at the end of the argument lists for Obj.read(), Obj.getRelation(), and TableObj.read() methods. These new methods are used to force the data to be refreshed from the backing store database instead of being retrieved from the cache, as it's possible for the cache to go stale in a multi-user environment.
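The forceRead behaviour can be sketched with an in-memory "database" and cache. The class and method names here are illustrative, not the manufactured Obj/TableObj API:

```java
import java.util.*;

// Sketch of the forceRead parameter: read(pk, false) serves possibly
// stale cache hits; read(pk, true) always goes back to the backing
// store and refreshes the cache.
public class ForceReadSketch {
    static final Map<Integer, String> database = new HashMap<>();
    static final Map<Integer, String> cache = new HashMap<>();

    static String read(int pk, boolean forceRead) {
        if (!forceRead && cache.containsKey(pk)) {
            return cache.get(pk);     // may be stale in a multi-user setup
        }
        String value = database.get(pk);
        cache.put(pk, value);         // refresh the cache entry
        return value;
    }

    public static void main(String[] args) {
        database.put(1, "v1");
        read(1, false);                     // populates the cache
        database.put(1, "v2");              // another user updates the row
        System.out.println(read(1, false)); // prints v1 (stale)
        System.out.println(read(1, true));  // prints v2 (refreshed)
    }
}
```

Appending the flag to the end of the existing argument lists keeps the old call sites source-compatible while letting multi-user code opt into the refresh.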
CountEmQuiet was not including the *.tsql scripts from dbcreate, so the increase in lines is somewhat inaccurate. But even before I fixed that, most projects were seeing roughly 10,000 new lines added due to the new methods, so I've estimated the total new lines at "over 110,000."
CountEmQuiet now breaks up the files to be scanned into smaller chunks to avoid the command line length limits that I *thought* were being encountered under Debian but not Windows. As it turns out, the real problem was that CFBam 2.0 had included a copy of net-sourceforge-MSSCodeFactory-CFCore-2-0 in its directory tree due to the nasty habit of the laptop touchpad going into drag mode instead of clicking on things. (I normally use a mouse with the laptop, but for a while there my mouse was DOA until I replaced it, so I was using the much-hated touchpad.) As a result, the line counts for CFBam 2.0 *were* inaccurate, and so was the total line count for the 2.0 projects. Those numbers have therefore been skipped and marked with an asterisk (*).
Still, over 55,000 lines were added per day over the past two days, or 2,291 lines per hour, a mere 38 lines per minute.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
Latest code as manufactured by MSS Code Factory 1.11.12456 Service Pack 2B.
Since 2014.10.08 (5 days), a total of 6,845,775 lines have been pruned from the 2.0 projects, largely due to the removal of CFGui 2.0 from the project list (it had also been imported by CFUniverse 2.0, which is the big cause of its line count reduction.) Other than that, some spurious blank lines were removed from code to tidy up the formatting. Most of the changes made since the first SP2 are line edits, not line additions. It's easy to prune lines. :P
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
The build errors in CFBam 2.0 and CFUniverse 2.0 have been corrected. CFGui 2.0 has been eliminated.
This is the latest code as manufactured by MSS Code Factory 1.11.12448. Unfortunately, CFUniverse 2.0 cannot be built because the code is too large -- it blows the Java limits for constants when compiling the X(ml)Msg Request Handler. I think what I'll do in the long term is drop the CFGui 2.0 project entirely; I no longer see a future for that effort.
Since the release of Service Pack 1 on 2014.09.26 (15 days ago), a total of 947,758 lines of code have been added to the 2.0 projects. That works out to 63,183 lines per day, 2,632 lines per hour, or 43 lines per minute -- 24x7 for over two weeks straight. :)
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
There were more fixes required to correct late-night mis-edits I'd made to the PostgreSQL rule base, thinking I was editing MySQL or SQL Server files. :P
I also updated the Complex Object tests to use the new attributes I added. I had to remove the RelationCol specifications because those don't load properly for some reason and I don't feel like debugging it right now.
Apparently I got overzealous replacing dollar signs with percent signs in the rules while working on DOS script support for Windows. Sorry 'bout that.
The CFDbTest 2.0 model has been enhanced with a full set of inter-object relationships, DelDeps, and PopDeps as the complex schema objects will require for the CFBam 2.0 model.
With the new relationships, PopDeps, and DelDeps, the CFDbTest 2.0 code has grown by 30,975 lines since 2014.09.26 -- 3,097 lines per day, 129 lines per hour, or 2 lines per minute. Not too shabby at all, considering I took a week off to deal with end-of-year garden work, freezer packing, and dental issues.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
The MSSql crsp_delete_$dbtablename$.tsql scripts now support DelDep the same way that PostgreSQL does, except that they have to rely on local variables instead of a record cursor construct. The CFDbTest 2.0 evaluation of the "Replace Complex Objects" test now passes for MSSql.
Note that Sybase ASE does not pass the validation suite. The DelDep-enhanced stored procedure sp_delete_schemadef() is not properly deleting SchemaDef instances as part of the "Replace Complex Objects" test, resulting in a duplicate SchemaDef name being found on the second invocation of the test.
The other errors in the Sybase ASE database installation scripts have been corrected. I just realized I was looking for the wrong error string patterns when scanning the Sybase ASE log file created by the install process.
I also corrected some typos in the database test scripts for Sybase ASE (the wrong user name was being specified for the FailDeletePermissionDenied test.) FailDeletePermissionDenied does not generate an exception as it should. While the Update permissions are properly checked by the exact same SQL as the Delete permissions, the Delete check fails. How can the same join work once, yet fail with a different key string parameter?
The MySQL crsp_delete_$dbtablename$.mysql scripts now support DelDep the same way that PostgreSQL does, except that they have to rely on local variables instead of a record cursor construct. The CFDbTest 2.0 evaluation of the "Replace Complex Objects" test now passes for MySQL.
The Oracle dl_$tablename$.plsql scripts now support the DelDep constructs for table object deletion, and have passed the CFDbTest 2.0 evaluation of "Replace Complex Objects."
This is the latest code as manufactured by MSS Code Factory 1.11.12425.
The "Replace Complex Objects" test for CFDbTest 2.0 works again for PostgreSQL. See 1.11 history for details.
The population of the Swing prototype GUI Picker windows has been reworked and tested with CFDbTest 2.0 to rely on the new Relation.PopDepChain specifications.
For an example of the new functionality, navigate to a Relationship and edit it. If you open the Picker for the To Table and specify "Select None" to clear it, then try to open the Picker for the To Index, you'll get a warning dialog informing you that you have to select the To Table first. Once you've selected a To Table, the To Index Picker properly populates with the Indexes of the selected To Table.
Previously there was no way to specify such relationship dependency behaviour in a 1.11 Business Application Model. See the CFDbTest 2.0 model "Relationship" object specification for examples of how to use this new functionality.
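In code terms, the gating behaviour amounts to something like this toy sketch (class and method names are mine for illustration, not the manufactured CF* classes):

```java
// Hypothetical sketch of the PopDepChain gating described above: the To Index
// picker cannot open until the To Table it depends on has been selected.
// Names are illustrative, not the actual generated code.
public class PopDepGate {
    private Object toTable; // selection made in the To Table picker, if any

    public void selectToTable(Object table) { toTable = table; }

    public void clearToTable() { toTable = null; }

    // Opening the To Index picker first requires a To Table selection;
    // when this returns false, the GUI shows the warning dialog instead.
    public boolean canOpenToIndexPicker() { return toTable != null; }
}
```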
This set of builds is the code suite for the Service Pack 1 release. CFDbTest 2.0 was used for all validation and testing of the final code that was released as MSS Code Factory 1.11.12416-SP1. I'm quite happy with the end results, though of course I still have ideas about how to extend and improve things, so there will be an SP2 some day. But in the near future I'm going to take a bit of a break from the coding -- I've had too many late nights getting to SP1.
In the two days since the 24th, another 22,343 lines have been added to the 2.0 projects, and a great deal of existing code was changed without adding new lines. That works out to 11,171 lines per day, 465 lines per hour, or 7 lines per minute. Mind you, I was putting in 18 hour days. :)
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
There were some issues with the installation of the DB/2 LUW database that were resolved by the model and rule changes with MSS Code Factory 1.11.12414, but the actual Swing prototype GUI for DB/2 LUW ran correctly right from the first try.
The Microsoft SQL Server client also functions correctly with this release.
The Oracle client also functions correctly with this release.
Testing is going great. The Sybase ASE client functions correctly as well. That leaves MySQL to be tested, but I'll need to get the server installed on my Windows laptop and address any database instance creation issues before I can do that testing. I am now completely confident that I'll have SP1 out the door before the end of September.
This is the test suite that is going to be used for validating the various databases that are supported by MSS Code Factory. I do not expect to update the release of CFDbTest 2.0 with any fixes as I go along. Rather, I'll bundle up all the fixes for all the databases in one big release for MSS Code Factory and do a full 2.0 build set after that's been done.
With any luck I'll have the testing done by the end of this weekend, so that I can release SP1 by the end of September.
The 2.0 projects are now in the final stages of the SP1 release plan. All of the functionality for the Swing GUI prototype is in place and has been tested against PostgreSQL. The remaining databases need to have their CLIs manufactured and their latest database instances installed, loaded with test data, and exercised by their respective (planned) GUIs.
But the PostgreSQL version provides a snapshot of the functionality you can expect for SP1 in the near future -- probably by the end of September.
The last refresh was 2014.09.21, so in the past 3 days, 14,386 lines of code have been added to finish off the GUI prototype. That's 4,795 lines per day, 199 lines per hour, or 3 lines per minute. :)
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
This should be the last test suite before SP1 is released. It provides a fully functional implementation of the Swing prototype GUI (meant for RAD prototyping and exploration of a business application model's data through a live interface instead of diagrams, which are not as well understood by many business-side people, who are not familiar with tools like ERDs or UML.)
The Close action for the Finder windows has been coded, and the rules have been refactored to use ICFJRefreshCallback instead of the schema-specific versions that were used during development.
All of the windows have had the close window-decoration buttons removed from them; you are now forced to rely on the coded logic to close windows properly.
This test suite exercises the Object Kind columns that have been added to lists of objects which have subclasses defined.
You have to admit that despite all the work MSS Code Factory can do for you, it is not very "intelligent", especially when it comes to producing GUI prototypes. Ah well, it still serves a purpose quite well: saving time and money on initial coding of an application. A leg up on a project, as it were.
This release of the CF 2.0 projects switches over to 720p compatible window sizing, and implements the refreshing of the invoking window list of data when the View/Edit windows are closed. It's a brute-force approach that refreshes the entire list and loses the selection state of the list, but at least the data presented remains consistent now.
Since 2014.09.16, over 83,680 lines have been added to the 2.0 projects. The previous line counts on CFBam 2.0 and CFUniverse 2.0 were apparently inaccurate because of constraints on command line length under Linux that are extended under Cygwin64 on Windows, so I'm not sure what the exact number of new lines would be. Over the course of 5 days, that works out to 16,736 lines per day, 697 lines per hour, or 11 lines a minute. Not bad considering I really only worked two days on rules, coding, and debugging instead of platform migration.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
|CFBam||11,066,764||14,585,449||previous results inaccurate|
The PostgreSQL scripts have been refreshed with the new .bat files added to MSS Code Factory 1.11.12401 and tested under Windows 7 with PostgreSQL 9.3.
The Picker windows now coordinate their selection results with the Reference widgets from their invoking windows. However, I do still need to do some work on synchronizing the refresh of invoking windows after View/Edit windows are closed and have saved their changes. (In particular, I need to invalidate the list that the window was launched from in its parent.)
But for now, things are at a good "cut point" for me to take a pause and work on shifting development from my dying Linux box to my Windows 7 laptop. So it'll be a while before there are any further updates; probably a couple of days or so. (I did a test build and packaging of CFSecurity on the Windows box, so it's do-able, but I expect some issues with the packaging and archive scripts, and I may need to modify my top-level scripts. Plus I need to test some of the scripts under the git shell instead of the cygwin shell. So many shells for different tasks when using Windows, rather than one grand unified bash shell everywhere.)
Since 2014.09.14, the code base has grown a bit. I did code and develop the entire suite of Picker windows, though, so that's not too surprising. 957,839 new lines of working code in 2 days -- 478,919 lines per day, 19,954 lines per hour, 332 lines a minute, 5 lines per second.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
You can log in and out as many times as you like with the Swing PostgreSQL client now. Too many bugs to summarize here were fixed, some of them quite major even if they only amounted to a one-line change.
The callbacks and buttons for the Picker windows have also been added, but they aren't working yet and I'm not sure when I'll get them debugged. Perhaps tonight, perhaps tomorrow. But compared to the login issues, I expect it to be a relatively painless debug session to fix those.
The other 2.0 projects will get remanufactured for the affected layers throughout the night and into tomorrow (I will be getting some sleep, however), but I won't be releasing that code as a full build until I get the Pickers working, and see if I can find any other glaring oversights before I call the Swing GUI prototype good to go.
The Picker windows aren't done yet, but they properly populate their lists of objects based on the new functionality I added throughout yesterday and this morning. See the 1.11 notes for details.
The PostgreSQL database creation scripts have been corrected. I don't know if the problem affects other databases as well or not. I guess I'll just have to remanufacture all the projects completely sometime in the near future so the fixes I did get applied just in case.
It's worth noting that I only started on the Swing prototype GUI on 2014.05.22. That's less than four months ago.
By 2014.07.01 I had 4,615,392 lines of code in CFDbTest 2.0. As of this morning, I have 4,889,416, an increase of 274,024 lines in a little over two months, and an increase of 76,230 lines in the past day alone.
I rather like the user interface that is provided by this build. Things are really finally coming together on the GUI side.
Over the past 24 hours, I've only produced as many new lines of GUI code as an average programmer writing code "the old fashioned way": about 2000. But I don't include the numbers for CFLib in the 2.0 project list, and most of what I was doing today was substituting some new custom widgets for the default Swing implementations to get the customized behaviours I wanted. Sometimes working "smart" just modifies a lot of code instead of adding new lines. :)
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
This is the test suite I used for verifying that my changes were effective. It's still not perfect, but I'm doing a manufacturing run anyhow because perfection is a long way off...
The field edits for Add/Edit of objects are enabled and supported by this build. However, you still cannot edit Text fields, and the TZ data types are such a mess it's ridiculous. I also still need to code the Picker windows, so you can't select lookups, parent, or master relationships yet, though you can create components of container objects, so *some* relationships do get established at this point. But by no means is this code complete yet.
Since 2014.09.09, another 32,743 lines of code have been added. I took a couple of days off away from the computer for a change. Still, I never count my days off as days off, so it's been three days since the last 2.0 refresh, for a total of 10,914 lines per day, or 454 lines per hour.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
Rather than wrassle with the Java Swing JFormattedTextField implementation, I opted to switch over to the more basic JTextField as the base class of the Date/Time/Timestamp editors, wire in a data value attribute, and simply leverage my XML parsers and formatters for now. Yeah, it's crude. Yeah, it's ugly.
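Stripped to its essentials, the approach looks something like this hedged sketch -- the real editors use the CFLib XML parsers and formatters rather than SimpleDateFormat, and the names are illustrative:

```java
// A plain JTextField that carries a typed Calendar value and round-trips it
// through a fixed text format, standing in for the JFormattedTextField that
// was abandoned. Crude and ugly, as advertised.
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import javax.swing.JTextField;

public class DateEditorSketch extends JTextField {
    private static final SimpleDateFormat FMT = new SimpleDateFormat("yyyy-MM-dd");

    public void setDateValue(Calendar value) {
        setText(value == null ? "" : FMT.format(value.getTime()));
    }

    public Calendar getDateValue() {
        String text = getText().trim();
        if (text.isEmpty()) {
            return null; // an empty edit means a null value
        }
        try {
            Calendar cal = Calendar.getInstance();
            cal.setTime(FMT.parse(text));
            return cal;
        } catch (ParseException e) {
            throw new IllegalArgumentException("Not a yyyy-MM-dd value: " + text, e);
        }
    }
}
```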
Note that the TZ types don't work worth squat, but I already knew this.
You still can't set lookups and such because I haven't written the Picker windows yet.
The CFCrm 2.0 initial testing of the workflow for adding and editing objects passed for the Tenants of a Cluster. That was good enough for me to take the time to rebuild CFDbTest 2.0 so I can exercise the editing of the various widgets from CFLib for the different data types.
I expect it to be quite some time before I squish all the bugs I expect to find in them, so this version is *not* guaranteed to let you edit everything you need to in order to persist objects. In particular, lookup relationships and such are not modifiable yet (I need to code the Picker windows for that.)
There was a core Java object error that was corrected with this release. Specifically, if you tried to access a reference attribute of a new object instance that used the primary key of the object to locate its data, a null exception was being thrown because the PKey attribute had not been allocated yet. This is the first core bug I've found in the Java layers in well over a year.
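The shape of the bug and the fix can be illustrated with a toy example (all names here are made up; the real code lives in the manufactured object layers):

```java
// Before the fix, a reference accessor on a brand-new instance dereferenced
// a null primary key and threw a NullPointerException. Allocating the key
// on first use avoids that. Illustrative names only.
public class NewInstanceSketch {
    public static class PKey {
        private long id = 0; // 0 means "not persisted yet"
        public long getId() { return id; }
    }

    private PKey pkey; // was left null for brand-new instances

    public PKey getPKey() {
        if (pkey == null) {
            pkey = new PKey(); // lazily allocate for new instances
        }
        return pkey;
    }

    // A reference accessor that locates its data via the primary key;
    // with the guard above it is now safe on a freshly created instance.
    public String getReferencedName() {
        long id = getPKey().getId();
        return (id == 0) ? "(new)" : "resolved-" + id;
    }
}
```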
The Swing GUI prototype has been greatly enhanced. Now you only see the Add entries for objects that can be parented by the focused object when a ListJPanel is displayed as a sub-element list of an object. The enabling and disabling of the subpanel menus is now properly coordinated with the JInternalFrame state associated with them. Enabling and disabling of fields in the AttrJPanel has been corrected such that fields are now enabled for editing during an Add operation.
Last but not least, this release was used with CFCrm 2.0 to persist a default object instance from the GUI (i.e. hit Add and then immediately Save and Close in the detail window.) There aren't many objects you can do this with, and you can only do it once for each such object because the empty string for a name will collide with any second insertion attempts.
Over the past 24 hours, 82,117 new lines of code were created along with an awful lot of debugging. That's 3,421 lines an hour, or 57 lines a minute. Not quite a line per second. :D
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
I successfully created a default value instance of a Tenant within the system Cluster for CFCrm 2.0. I think I'm done with debugging the flow of the state machine for editing objects now. Next I need to focus on the proper enabling and disabling of the edit widgets in AttrJPanel.
I'm getting the enable/disable state behaviour I want on the ListJPanels within a View/Edit panel, though, which is nice to see. I love it when a plan comes together.
At this point, the ICFJPanelList interface has been incorporated by the Swing GUI prototype code. However, I've yet to make the changes to the enable/disable state of the list panel menus to reflect the presence or lack of a container reference (which is cleared to null if this instance can't parent a sub-element list.)
At least I didn't *break* anything with this batch of code. :P
There were a number of exceptions being thrown while trying to navigate around the Swing GUI. The causes of those exceptions have been corrected or allowed for as appropriate, and there are no longer any exceptions being thrown while navigating around the CFCrm 2.0 data object hierarchy. The other CF* 2.0 projects have been remanufactured and rebuilt as well, and are all being tagged as the 2.0.12365 release, as manufactured by MSS Code Factory 1.11.12363.
So download, play, and enjoy.
Over the past 24 hours there have been 130,914 new lines of code added, 5,454 lines per hour, or 90 lines per minute -- had someone been able to type that fast for 24x7 without so much as a bathroom break. I think I beat the human possibilities by leveraging MSS Code Factory. :P
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
There were a number of exceptions being thrown while trying to navigate around the Swing GUI. The causes of those exceptions have been corrected or allowed for as appropriate, and there are no longer any exceptions being thrown while navigating around the CFCrm 2.0 data object hierarchy.
A significant refactoring and reworking of the use of the swingFocus attribute vs. the new CFLib 1.11.12362 get/setSwingFocus() accessors was coded and tested as well. There were several "speed bumps" along the way to getting these code changes to build without errors.
So download, play, enjoy. I'm going to refresh the other 2.0 projects and get them building...
My latest test build of the CFCrm 2.0 project as produced by MSS Code Factory 1.11.12359 showed the enable states of the sub-element list menus getting updated properly. Running with CFLib 1.11.12358, the text of disabled widgets is finally black as I wanted it to be. The borders of the edits are still blue when disabled, but I can live with that faded-out look for them, even if it is kind of ugly. I'd rather be able to take over the border drawing with a Ditch or Ridge border (preferably Ditch.) Someday I'll figure out how to do that. It may mean a completely custom edit widget, though...
It's been one day since my last refresh. During the course of that 24 hours, I created, tested, and debugged 45,724 new lines of Swing GUI code, or 31 lines a minute. The workflow of the overall interface is starting to come together nicely.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
This build is linked with CFLib 1.11.12353, which forces the foreground colour of text edits to black, improving the text contrast dramatically compared to the default blue text that Swing uses for widgets.
The window state propagations have been corrected, so now the enable states of the sub-menubars in the list panels get updated properly.
CFLib 1.11.12350 does not change its programming interface specifications, but merely adds functionality to the core widgets such that they display the background colour of their containing panel as their background when disabled, and white when enabled.
The new code has been tested with CFCrm 2.0.12352.
The code for the attribute panels and related classes has been updated to support the new CFJPanel.PanelMode.Update state. Now, in order to save an Add or an Edit, you transition to Update, which then transitions to View after applying the changes and losing the data pin.
When incorporated in an AskDeleteJPanel, the AttrJPanel also implements the CFJPanel.PanelMode.Delete state to apply the deletion, clear the SwingFocus, and leave the widget in a state of CFJPanel.PanelMode.Unknown.
The logic for state changes is largely that of AttrJPanel, which is responsible for doing all actual object instance manipulation. The ListJPanel instances, on the other hand, are always in a state of View or Edit, which disables and enables the data editing actions (also allowing for the state of the row selection in the list.)
I'm pretty happy with the functionality for viewing data at this point. I'd call this "good to go" for doing walk-throughs of data that you've loaded up using the SaxLoader implementations and your own data scripts. You can even Delete data at this point. I just haven't got the code in place for applying the edited values to the EditObj when performing a AttrJPanel.Update state transition.
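For the curious, the transition checking can be pictured as a small state machine. The enum values mirror the states named in these notes, but the transition table below is an illustrative sketch, not the shipped matrix:

```java
// A minimal sketch of PanelMode transition checking. Update applies changes
// and falls through to View; Delete leaves the panel Unknown, as described
// in the notes. The set of allowed transitions is a guess for illustration.
import java.util.EnumMap;
import java.util.EnumSet;

public class PanelModeSketch {
    public enum PanelMode { Unknown, View, Add, Edit, Update, Delete }

    private static final EnumMap<PanelMode, EnumSet<PanelMode>> VALID =
        new EnumMap<>(PanelMode.class);
    static {
        VALID.put(PanelMode.Unknown, EnumSet.of(PanelMode.View, PanelMode.Add));
        VALID.put(PanelMode.View,    EnumSet.of(PanelMode.Edit, PanelMode.Delete, PanelMode.Unknown));
        VALID.put(PanelMode.Add,     EnumSet.of(PanelMode.Update, PanelMode.View, PanelMode.Unknown));
        VALID.put(PanelMode.Edit,    EnumSet.of(PanelMode.Update, PanelMode.View));
        VALID.put(PanelMode.Update,  EnumSet.of(PanelMode.View));
        VALID.put(PanelMode.Delete,  EnumSet.of(PanelMode.Unknown));
    }

    private PanelMode mode = PanelMode.Unknown;

    public PanelMode getPanelMode() { return mode; }

    public void setPanelMode(PanelMode next) {
        if (!VALID.get(mode).contains(next)) {
            throw new IllegalStateException(mode + " -> " + next + " is not a valid transition");
        }
        mode = next;
        if (mode == PanelMode.Update) {
            // postChanges() would apply the edits here;
            // Update then falls through to View
            mode = PanelMode.View;
        }
    }
}
```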
With this release, the sub-object lists in the tab panel at the bottom of a ViewEditJPanel are implemented for view and delete. I've tested delete (accidentally), and the cycle of state changes properly removes the instance from persistent storage. (You don't want to delete the system cluster, BTW.)
The ListJPanel.SwingFocus tracks the currently selected row of the list box, and the menu bar and items of the panel are synchronized to the edit-enable state of the panel and the selection state of the list box itself. Setting ListJPanel.SwingCollection refreshes the list box with the contents of the specified collection. It will be up to the owning element tab widget and the focused object of a ViewEditJPanel to propagate changes of the ViewEditJPanel.SwingFocus to apply the focused object's sub-element lists to the appropriate ListJPanel instances in the sub-element tabs.
In a nutshell, all the pieces needed to navigate a complex data hierarchy such as is loaded/created by running the CFDbTest 2.0 suite are in place. There are still additional widgets/frames and a lot of inter-frame synchronization to be done yet, but it's coming along rather nicely.
So in under 48 hours, I've added 1,490,501 lines of new code, built it, packaged it, tested it (quickly), and deployed it. That's 31,052 lines of code per hour, 517 lines per minute, or over 8 lines per second -- working 24x7. :P :P :P
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
The ListJPanel implementations have been enhanced with the code to create the miscellaneous components required for a list of data.
The final step of wiring the setting of the collection to the re-preparation of the list box data and subsequent invalidation of the list box itself has not been done yet. I just wanted to take a pause while I have a clean build, and *then* populate the data.
The list headers for the list boxes get displayed properly, though. I'm seeing some anomalies in how the list boxes get displayed if there isn't enough column width consumed to take up the entire display area. In that case, an empty area is displayed in the right portion of the list box, and I'd like the rightmost column to take up all that space so that things look "pretty" with fully-populated rows instead of that ugly unused segment of the display.
This batch of changes allows the enable/disable state of the Finder menu items to adjust to match the selection state of the list box shown by its component FinderJPanel. The FinderJPanel coordinates the SwingFocus attribute of any containing CFJInternalFrame which implements ISchemaSwingTableJPanelCommon with the selected row of the list box.
The new behaviour has been tested, and the menu actions bring up the appropriate windows over the selected data.
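The selection-to-menu coordination boils down to a ListSelectionListener. A minimal, model-only sketch (no actual widgets; names are illustrative):

```java
// A "View/Edit/Delete Selected" menu stays enabled only while a row is
// selected. DefaultListSelectionModel is the same selection model a
// JTable-based finder panel uses, so no GUI is needed to show the idea.
import javax.swing.DefaultListSelectionModel;

public class SelectionSync {
    private boolean menuEnabled = false;
    private final DefaultListSelectionModel selection = new DefaultListSelectionModel();

    public SelectionSync() {
        // Fired synchronously whenever the selection changes
        selection.addListSelectionListener(e ->
            menuEnabled = !selection.isSelectionEmpty());
    }

    public DefaultListSelectionModel getSelection() { return selection; }
    public boolean isMenuEnabled() { return menuEnabled; }
}
```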
All of the 2.0 projects for MSS Code Factory build successfully.
The new code is completely untested, of course.
This has been more like it. Since 2014.08.28 (8 days) I've created 314,666 new lines of code. That's an average of 39,333 lines per day, or 1,638 lines per hour if I had been working 24x7 for that period. 27 lines per minute. Two seconds per line. My fingers hurt... :P :P :P
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
The way that the desktop is resolved for adding new JInternalFrames has been reworked and refreshed. The end result is the same, but this code *properly* navigates through the object hierarchy to locate the desktop instead of just doing an iterative probe of getParent() until a JDesktopPane is found.
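For reference, the stock Swing helper for this kind of ancestor lookup is SwingUtilities.getAncestorOfClass(), which walks the hierarchy for you:

```java
// Locating the owning JDesktopPane via the component hierarchy, using the
// standard Swing utility rather than a hand-rolled getParent() loop.
import java.awt.Component;
import javax.swing.JDesktopPane;
import javax.swing.SwingUtilities;

public class DesktopLookup {
    public static JDesktopPane desktopFor(Component c) {
        // Returns null cleanly if the component isn't under a desktop at all
        return (JDesktopPane) SwingUtilities.getAncestorOfClass(JDesktopPane.class, c);
    }
}
```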
The setPanelMode() implementations now check for the subset of valid values that are appropriate for the window or panel in question.
I believe all the state changes are properly tracked by the edit flow events now, but I'll have to do some actual testing to be sure. :)
This is all very much untested GUI code at this point. It builds. It might work. It might not. 'tis the Schroedinger's Cat of Code... :P
There were some inconsistencies and holes in the GUI functionality to date. Some of that has been corrected and fleshed out. I think it should be close to functional at this point, give or take a lot of missing postChanges() code for the AttrJPanel.
With the addition of the PanelMode transitioning matrix for the AttrJPanel, I think I'm now ready to begin looking at testing this code, so I've manufactured and built both CFCrm and CFDbTest 2.0 this time.
The FinderJInternalFrame and FinderJPanel have most of the code in place for launching the appropriate ViewEdit window in mode Add/View/Edit and the AskDelete window in mode View. The code for retrieving the row data of the currently selected row has to be added -- the sections are marked with TODO WORKING comments.
The AskDeleteJPanel now properly responds to its Delete/View state changes by updating its user feedback accordingly and propagating the state changes down to the component AttrJPanel.
The ViewEditJInternalFrame has been updated to propagate its state to the component AttrJPanel, and to update the user interface action enable states according to the value passed to setPanelMode().
The AttrJPanel has a skeleton outlined for the possible state transitions to be reacted to, ready to receive the code from yesterday morning's Programmer Notes. It is, I believe, the last piece of this big event and state change puzzle to be coded.
The PickerJPanel and PickerJInternalFrame have been fleshed out a bit more.
This release produces a clean build of CFCrm 2.0.
View/Edit/Delete Selected menu items have been sketched out for list panels, and are now enabled and disabled appropriately. The implementations of these methods need to retrieve the currently selected row's data and use it for opening the appropriate focus window. In the long run, there will be a catalogue of the different classes of windows so that only one instance of each type of JInternalFrame can exist for any given data instance. If such a window already exists, it will be brought to the forefront, and if necessary, transitioned to a different panel mode.
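That planned catalogue could be as simple as a map keyed by frame class and data instance -- a sketch under the assumption that frames are looked up before creation (names illustrative):

```java
// At most one frame per (frame class, data instance) pair; an existing
// frame is simply reused. Generic over F so the sketch doesn't depend on
// Swing; the real catalogue would hold JInternalFrame subclasses and also
// bring a reused frame to the forefront.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class FrameCatalogue<F> {
    private final Map<Class<?>, Map<Object, F>> open = new HashMap<>();

    // Returns the existing frame for this data instance, creating it on demand
    public F frameFor(Class<?> frameClass, Object dataInstance, Supplier<F> factory) {
        return open.computeIfAbsent(frameClass, c -> new HashMap<>())
                   .computeIfAbsent(dataInstance, d -> factory.get());
    }
}
```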
The JInternalFrames of the Swing GUI now propagate changes to their PanelMode values to their sub-objects as required.
The ListJPanel/IJPanelList Add menu and action items are now enabled and disabled by the GUI logic whenever the PanelMode changes state via setPanelMode(). The states CFJPanel.PanelMode.Add and CFJPanel.PanelMode.Edit are expected by instances of this viewport; CFJPanel.PanelMode.Delete is unexpected and CFJPanel.PanelMode.Unknown is the initial state of the panel.
The EltJTabbedPane now sets the singleton Attribute JPanels to CFJPanel.PanelMode.View, and propagates the PanelMode to the duplicate List (IJPanelList) implementations. The actions for Selected.showView(), Selected.showEdit(), and Selected.showConfirmDelete() have not been specified yet, nor have appropriate menu items been wired to reference those actions. There is placeholder logic for adjusting their enable/disable states in the comments of the ListJPanel/IJPanelList specifications.
The AskDelete and ViewEdit JPanels propagate their PanelMode changes to their component Attribute JPanels.
This is all untested code, of course. First I code; then I test-run.
This is the latest clean-building code for CFCrm 2.0, and requires CFLib 1.11.12320 (.jar included by installer.)
There isn't a significant amount of new functionality with the August refresh, but there are a lot of bug fixes and code refactoring that was done, so if you're relying on the CF* 2.0 code bases, you should download a refresh and use that instead of last month's builds.
It's been 31 days since 2014.07.28, so I only produced 2,987 lines of code on average per day this month. Nowhere near as productive as usual. But I know the lines of code *changed* are about the same as the number of lines added, so I don't feel *too* bad about my productivity. Plus I spent a fair bit of time on custom widget code, which doesn't produce very high code volumes compared to working with the manufacturing rule bases. But that low-level coding and debugging still have to be done, as "unproductive" as that task may be.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
The preparation for supporting window modes has been coded. There is no change in functionality compared to the previous release.
CFLib 1.11.12311 replaces the text edit version of CFJBoolEditor with a custom widget. Because the class hierarchy of that widget was changed, the Swing GUI package for CFDbTest 2.0 has been rebuilt. I'm pleased -- it only took about an hour to debug the custom widget. The old dog hasn't lost his widget coding skills yet!
The Swing layers have been remanufactured by MSS Code Factory 1.11.12308 in order for CFDbTest and CFCrm to use the latest CFLib 1.11.12307. There was a significant refactoring effort implemented, which required the same to be done to the manufactured code.
CFLib 1.11.12307 also corrects some formatted field errors for CFJTimeEditor, and masks out the non-time attributes of Calendars when getting and setting values of [TZ]Time cell renderers and editors.
The script for launching the Swing GUI has also been corrected to use the current PostgreSQL driver jar; I'd only been launching the program from the Eclipse debugger so I hadn't noticed the typo.
With the updates made to the rules and to CFLib, CFDbTest and CFCrm 2.0 now render correctly for all of the supported atomic data types (i.e. everything but Blobs, which are hidden because a general purpose GUI can't guess how to interpret a Blob.)
Up until now, testing had only been done with CFCrm, which had left some errors in the rendering of the various date/time/timestamp table columns.
The Bool columns now render as a crude checkbox with a question mark for null values. This needs some tidying up, then I need to shift that custom rendering over to a custom widget for editing boolean values instead of relying on a true/false/blank text field.
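The tri-state idea in miniature (purely illustrative -- CFLib's real CFJBoolEditor is a proper custom widget, not a string mapping):

```java
// Boolean cell values map to checked, unchecked, or a "?" placeholder for
// null, which is the crude rendering described above before it was moved
// into a dedicated widget.
public class BoolDisplay {
    public static String render(Boolean value) {
        if (value == null) {
            return "?"; // null renders as a question mark
        }
        return value ? "[x]" : "[ ]";
    }
}
```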
All of the CF* 2.0 projects have been remanufactured by MSS Code Factory 1.11.12295, rebuilt with the latest CFLib 1.11.12289 and JDK 1.7.0_65, and repackaged for distribution.
This refresh incorporates fixes to the readAll methods for DB/2 LUW, PostgreSQL, and Sybase ASE. It also adds the Login/Logout support to the Swing GUI prototype. (I recommend logging in as system/system/system, using the database username and password that were used to instantiate the PostgreSQL instance, to avoid having to populate the security tables for now.)
The Swing GUI now populates the Finder windows and displays their lists of data with custom drawing for the cells of the list boxes. There is no support for viewing the details of the rows yet; I'll be working on that in August.
The last full refresh was on 2014.07.12 (16 days ago), and since then the projects have grown a bit with the new GUI code. There is a total of 3,820,895 new lines of code, or an average of 238,805 new lines per day. Without including the CFCore 2.0 code, there are 696,296 new lines of code, or an average of 43,518 lines per day (I'm more comfortable with that number -- the only work involved in CFCore 2.0's growth was turning on the features for the manufacturing and build.)
The calculations for the lines added had to be adjusted for the fact that I neglected to do an "ant clean" before the line counts on the last refresh. The builds include copies of some rather large XSD files, which were ending up getting counted twice in the last refresh.
The numbers presented as "New Total Lines" are from modified versions of the CountEmAll and CountEmQuiet scripts that skip any files with a path incorporating bin/ or build/. That omits the test data for CFDbTest, but I'd rather that than have to keep futzing around with project cleans before doing line counts.
CFCore 2.0 grew by an obscene amount because I enabled production of the database layers and the XMsg handlers for that project.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Added|
The JDK on my Linux box has been updated to 1.7.0_65, and CFLib's new cell renderers have been debugged, so MSS Code Factory CFCrm 2.0 has been rebuilt and repackaged with the relevant jars. I'm finally happy with the way the list boxes in the Finder windows look.
The Swing GUI Finder JTables are looking better. They now adjust to the proper row height, the header has a proper height, the background and foreground colours are being set for the first column, and I've got the raised/bump look that I want for the cells.
The problem is that not all of the cells are being rendered properly yet, so I've still got some debugging to do. I sure hope there isn't something squirreled away in the JDK code that prevents you from returning non-Strings as cell data for a JTable. I'm kind of counting on being able to do that.
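Happily, JTable does allow non-String cell data: the renderer is chosen per column from the model's getColumnClass(), with built-in defaults for Boolean (checkbox), Number, Date, and so on. A minimal sketch, using hypothetical names rather than the manufactured CFJ classes:

```java
import javax.swing.table.AbstractTableModel;

// Minimal model demonstrating that JTable cell values need not be Strings:
// the table picks a renderer per column based on getColumnClass().
public class MixedTypeModel extends AbstractTableModel {
    private final Object[][] rows = { { "system", Boolean.TRUE, 42 } };
    private final String[] names = { "Name", "Enabled", "Count" };
    private final Class<?>[] types = { String.class, Boolean.class, Integer.class };

    @Override public int getRowCount() { return rows.length; }
    @Override public int getColumnCount() { return names.length; }
    @Override public String getColumnName(int c) { return names[c]; }
    @Override public Class<?> getColumnClass(int c) { return types[c]; }
    @Override public Object getValueAt(int r, int c) { return rows[r][c]; }
}
```

Custom renderers for other value classes can be registered with JTable.setDefaultRenderer(), or per column via TableColumn.setCellRenderer().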
The Swing GUI Finder windows have been enhanced such that they display all of their data attributes, similar to what is presented by an attribute panel. The headers were a bit of a bear -- it's amazing to me how much I've forgotten about Swing coding since I did it last. So much glue code...
The header for the Qualified Name now displays in the Swing GUI Finder windows, with the data below. To see it for yourself, manufacture and build a project, and install its database to PostgreSQL as the "postgres" user.
Fire up the Swing PostgreSQL GUI, and log in as system/system/system/postgres/yourpassword.
Do File/Find/Cluster or File/Find/Tenant and you'll see the "system" entry for the corresponding data displayed in the data table.
An error in readAllBuff() has been corrected for DB/2 LUW, PostgreSQL, and Sybase ASE.
The Swing GUI now enables and disables the menu items of the main window according to the login state of the client application.
The finder windows retrieve their data from the database, although they do not display it yet (of course not -- I haven't even instantiated the list box, much less bound it to the data.)
A total of 5 new files and 2,293 lines of code were created to handle the logins and logouts for CFDbTest 2.0.
The bugs in the XMsg Loaders have been corrected and both the GUI and the XMsg Loaders now function correctly.
The CFCrm 2.0 project was used to debug the login/logout code, which now works. I really should do a regression test with CFDbTest 2.0 one of these days. Perhaps I'll leave that manufacturing tonight while I get some sleep and do some testing tomorrow/later today.
A login window has been laid out and played with, and it's ready to be wired to the instance methods for performing a login and initializing the security data.
The main window's File/Close menu item has been wired to dispose of the main JFrame.
I've expanded on CFCore a little bit. It now has the pieces in place for an ant build of its jars, and produces the full set of database adaptations as per the other applications/projects.
Note that it cannot support the XML layer and SAX loaders because it has unnamed lookup relationships whose attributes are hidden in the object layer by their lookups. This is correct interfacing for the objects, and a limitation of the model.
Just another reason you should never include CFCore directly into your model through an import, but instead reference the libraries as needed.
All of the 2.0 projects now include Ant build scripts, save for CFCore 2.0, which needs to be built under Eclipse so that you can export the jar and src.zip to versioned file names. I might be able to work up some way of automating the versioning numbers. If I do figure out a way of doing that, I can also wire the creation of the installation packages and source archives to the ant scripts, such that it would ideally do a clean run of everything after a manufacturing run.
Unfortunately that's not likely to happen any time soon as Javac keeps crashing randomly under 64-bit Debian with OpenJDK 1.7. So you do have to do manual interventions and reruns of ant builds in order to get a full set of jars for installer packaging.
Still, instead of redoing entire builds, you just delete the build directories for the package that failed so it does a clean rebuild of that package, and rerun the ant build -- it'll pick up from where it left off by rerunning the failed package.
When you *do* want a clean build instead of running a recovery/continue build, just do an "ant clean" in the java directory of a project before running "ant" to compile the code and build the project java/bin/*.jar files.
All in all I'm happy with my 14 hour effort to learn Ant and create the ant scripts for the 2.0 builds. It takes about an hour less time to run the ant builds vs. running Eclipse builds -- and I couldn't get CFUniverse 2.0 to build under Eclipse any more at all.
And no wonder -- those same javac segmentation faults that are getting reported by ant seem to be killing Eclipse.
It took a lot of hours of work with ant scripts, but I can finally build CFUniverse 2.0, and have done so. I've also test run a couple of the programs to make sure everything is ok. It runs!
The java+swing layer has been refreshed by MSS Code Factory 1.11.12262. There was no need to wait for a full manufacturing cycle as the models are unchanged, and only the rules for java+swing have been touched. If only I could get a nice fast box to run it all under Linux, I could automate most of what I do, especially once I get those Ant scripts done for CFUniverse and propagated to the rule base.
I've been working my butt off since the start of the rule changes for this cycle:
* pushed the rules to the high-horsepower Windows 7 laptop (16X as fast as my Linux box)
* remanufactured just the java+swing layer
* reverted the .bat script changes
* committed and pushed the refreshed code for the 2.0 projects
* pulled the 2.0 code down to the Linux box
* rebuilt it in the Eclipse projects and exported the updated jars (my GAWD but Eclipse is slow to initialize when you're bouncing through a dozen projects!)
* tagged the git repositories and pushed the tags
* packaged the source code archives
* packaged the installation zips
* queued them for upload
TOTAL TIME: Under five hours
All of the CF* 2.0 projects have been remanufactured by MSS Code Factory 1.11.12260. They've all been rebuilt except for CFUniverse; I'm working on an Ant script for rebuilding CFUniverse without going through the pain of perpetual Eclipse crashes on large projects. I'll update when that build is working. The Ant script is included in the CFUniverse source, but it has never been run yet and probably won't work without substantial changes.
The icons for the CFJReference action buttons are now displaying their ugly icons properly, and the actions have been wired in to the GUI prototype.
The list boxes aren't added yet because they're going to be a lot of work to code and I don't want to deal with that code until I have data to display with it. So for now the picker and the finder and the sub-lists are just blank; you only get an indicator as to what they contain from the temporary menus and labels associated with those data panels.
The view/edit windows now have menus that include the delete, save, and cancel/close operations. The delete menu items bring up the appropriate confirmation window, whose Ok and Cancel buttons are wired to just close the window for now (without closing the parent window, which should happen in the case of a delete. Not sure how I'll implement that at the moment -- there are a few historical options from passing window handles around to issuing custom event messages, depending on the GUI toolkit involved.)
There have been significant enhancements made to the manufactured GUI prototype, ranging from the maximum field sizes now configured into CFLib 1.11.12249 for string/token/nmtoken/nmtokens fields to proper component creation and layout in the delete confirmation windows (which aren't wired yet so you can't see those changes just yet.)
The splitters in the view/edit windows are now resizable, close buttons have been enabled, and the attribute panels throughout are now displayed in JScrollPanes instead of as regular panels.
It's really coming along quite nicely. :)
Sub-objects are usually lists of objects, so I've modified the GUI prototype code to display a JMenuBar for the ListJPanels instead of an actual list box. This lets you add the referenced objects, opening up their View/Edit windows, from which you can navigate further down the tree.
You can fully explore the hierarchy of add/view/edit windows now, though I haven't wired delete functionality anywhere yet. I do think the add/view/edit windows should be where the delete confirmation gets wired, so you have to look at an object *before* you delete it rather than being able to delete it directly from a list box row (i.e. the functionality would not be part of the list panel, but the view/edit internal frames.)
The CF* 2.0 projects have been remanufactured by MSS Code Factory 1.11.12243 and rebuilt using CFLib 1.11.12242, applying the latest GUI enhancements to the Swing implementations.
The broken, incomplete archives of Eclipse configurations have been removed from the source archives because properly including the .metadata directory ballooned the size of the archives to over 200MB each. I'll just have to re-create the configs by hand if Eclipse blows chunks and corrupts itself again (as happened to me 2014.07.11 with CFFreeSwitch.)
As it turns out, removing two lines from each file (the $Revision$ tags) in an entire project outweighs the new code I added to the Swing layers, so project sizes have gone down since 2014.07.01.
For example, CFCrm 2.0 has 9,673 files, so 19,346 lines were removed and 7,177 new lines were added. That made it more work than usual to calculate the number of lines added.
In the end there are 132,779 new lines of Swing GUI code, for an average of 11,064 lines per day since the first.
|2.0 Project||Prior Total Lines||New Total Lines||Lines Removed||Lines Added|
The new widget features provided by CFLib 1.11.12242 are now incorporated and partially respected by the manufactured GUI code. In particular, the maximum field sizes of the fixed-format fields (such as date-time and numeric values) are now respected when laying out the attribute panels, and the CFJReference widget now properly positions its buttons and text edit without the assistance of a layout manager (which wasn't working properly anyhow.)
The various internal frames are now resizable, minimizable, and maximizable as well. Which is how I discovered the bugs with the layout of scrollable panels as reported in the 1.11 logs.
The manufactured code for laying out the attribute panels for a table has been completely reworked. No layout manager is used any more; instead I override the doLayout() method and calculate the repositioning of the attribute widgets manually.
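The approach looks roughly like this simplified sketch, with hypothetical names and hard-coded metrics rather than the manufactured code:

```java
import java.awt.Component;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTextField;

// Sketch of laying out label/editor attribute pairs without a layout
// manager, by overriding doLayout() and positioning each widget by hand.
public class AttrPanel extends JPanel {
    private static final int LABEL_W = 120, FIELD_W = 200, ROW_H = 25;

    public AttrPanel(String... attrNames) {
        setLayout(null); // no layout manager; doLayout() does the work
        for (String name : attrNames) {
            add(new JLabel(name));
            add(new JTextField());
        }
    }

    @Override
    public void doLayout() {
        Component[] kids = getComponents();
        for (int i = 0; i + 1 < kids.length; i += 2) {
            int y = (i / 2) * ROW_H;
            kids[i].setBounds(0, y, LABEL_W, ROW_H);           // label column
            kids[i + 1].setBounds(LABEL_W, y, FIELD_W, ROW_H); // editor column
        }
    }
}
```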
The tabs of the subobjects and subobject lists have been relabelled, and the attribute tabs for singleton references are displayed properly.
The manufactured code for the desktop of the Swing PostgreSQL application now has the actions wired to launch instances of the find windows. I think I want to restrict the find windows to a single instance, though, and just do a show() on any existing instances instead of creating multiples of these core user interfaces.
The main window's menu bar is now initialized and has some skeleton action classes defined and instantiated in order to initialize the menus. I may need to hang on to references to the actions rather than references to the menu items, though, as it seems the enable state is kept by the action not the menu item itself. One step at a time.
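That hunch is right: Swing keeps the enabled state on the Action, and every JMenuItem (or JButton) created from it tracks that state automatically, so holding a reference to the Action is enough to toggle all of its menu items at once. A minimal sketch with illustrative names:

```java
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.Action;

// The enabled state lives on the Action; components constructed from it
// listen for property changes and follow along.
public class LoginActions {
    public final Action logout = new AbstractAction("Logout") {
        @Override public void actionPerformed(ActionEvent e) {
            // ... perform the logout ...
        }
    };

    public void onLoginStateChanged(boolean loggedIn) {
        logout.setEnabled(loggedIn); // every attached menu item updates
    }
}
```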
The file menu includes a Find submenu that lists all the unrooted classes in the application model, which always includes the Audit Actions, Cluster, Tenant, and various security objects.
The actions don't actually *do* anything yet.
The GUI is coming along nicely, although it's not a huge amount of new code since the last update. But it no longer throws the database connection exceptions because I'm working on shifting that to a login window (which has yet to be sketched out, much less functional.)
Before that I think I'll work on the menu items for the main window, at least the display of the Find windows. What I want to do is sketch out some initial prototype wiring that just lets you launch all the possible windows in the application, except maybe the delete confirmations (seeing as there won't be records in the lists to delete.)
After that I'll work on getting the windows to populate themselves in read-only fashion, though I might not be mucking about with making widgets read only for a while, as I'll be more interested in getting data to show up than in properly restricting user functionality based on the edit state of the objects.
Code refresh of work in progress. See 1.11 notes for details.
The latest code with the new methods for supporting client-server development and the new test executable for the Swing layer over a PostgreSQL database has been added. The new executable just connects to a database and does nothing useful.
All of the source for the various 2.0 projects has been refreshed, the scripts modified to reference the CFLib/CFCore 1.11.12225 jars, the Eclipse project updates done, the builds performed, the jars exported, and the release packaging done. CFUniverse was not built and packaged because of Eclipse instability on my box, but there is no reason to expect problems building it on a larger box as all of the sub-projects it incorporates build.
The plumbing aspects of git have been tidied up (e.g. the build jars for the projects are now referenced by the .gitignore files in the java/bin directories, the installer directory is now ignored in the java directory, and the archive scripts now include the .gitattributes and .gitignore files.)
As a final test to the resync with the git repositories that are now used, CFDbTest has been completely rebuilt from source using CFLib and CFCore 1.11.12225 to verify that the currently manufactured code builds properly.
So, yes, you can safely download and use the version of MSS Code Factory that now resides in the git repositories.
Only the source is being repackaged for this release, not the executables.
I need to get things in sync with Git before I can do builds. I think I'm just about done with that task, though.
CFDbTest 2.0 was used as a test bed for building against the latest widgets in CFLib 1.11.1408 because it exercises all of the data types. There were changes that had to be made to allow for the new constructor signature for CFJNumberTextField.
Code as produced by MSS Code Factory 1.11.1390, comparing to 2.0.1211 which was released on 2014.06.18 (13 days ago.) That's an average of 83,077 new lines of code per day. :)
|2.0 Project||New Lines||Total Lines|
CFCrm 2.0 has been refreshed by MSS Code Factory 1.11.1333, rebuilt, and repackaged.
CFCrm is my "testbed" for working on the Swing code. At this point I'm *almost* done stubbing out all the JPanels and JInternalFrames that are going to be needed, though they're by no means complete code. They're just outlines -- the Pickers and Finders don't even include their List objects yet!
The PostgreSQL database creation scripts have been corrected.
Added CFAcc.AccountContact which optionally binds a CFAcc.Account to a CFCrm.Contact and CFCrm.ContactList. (ContactList is only there because of the way contacts are resolved by name -- not including this parent object results in name resolution errors.)
Added CFAcc.AccountConfig with the required lookup attribute DefaultCurrency which defaults to CAD, and the optional lookups Cust(omer)ContactList, Emp(loyee)ContactList, and Vend(or)ContactList as required to limit the display and access of the appropriate lists within the manufactured accounting GUI.
As a general concept, I'll want to add ACLs to the contact lists at some point because there should be finer-grained security control of the tenant's contact lists than merely to decide an all-or-nothing table access.
I'll think about it. I'm in no hurry.
This code adds another 45,000+ lines of code to CFAcc, which brings it just over two million lines total (2,010,972 lines to be precise) including the Java source, the database scripts, and the XSDs. In addition, CFGCash is now 2,026,943 lines of code, having grown by 46,201 lines since the last update.
540 new files were added to each of CFAcc and CFGCash, with over 118,000 lines of new code added to each of the projects. Note that this is just the first draft of the core accounting objects; the modelling for these projects is far from complete and ready for coding/customization.
CFUniverse 2.0 is only released as source; all other 2.0 projects have been rebuilt and their installers repackaged.
The test suite has been enhanced with a DataCol reference added to the IndexCol, and appropriate test data added to the CreateComplexObjects test. The test suite has been exercised using the code manufactured by MSS Code Factory 1.11.1195 and passed.
The source distribution for CFUniverse 2.0 has been updated. An installer will not be posted because it's too difficult to do builds under Eclipse on my old box. Maybe if I switch to ant builds it would work better. I'll have to give that a try.
2014.06.16 MSS Code Factory CF* 2.0.1169 -- A sexy release of the latest source and installers
All of the projects have been refreshed by MSS Code Factory 1.11.1163, rebuilt, and repackaged except for CFUniverse, which is still being manufactured. While a source bundle will be released for CFUniverse in the near future, I will *not* be rebuilding that project again -- my machine just lacks the horsepower and memory to deal with it, so Eclipse crashes repeatedly, sometimes even before it finishes initializing.
Note that CFBam 2.0 is now available under the Eclipse Public License as well as the GPLv3 and a commercial licensing option. This will allow someone with the interest in doing so to create a business application model editor that runs under Eclipse for the 2.0 code series. However, I have no intention of releasing the rule base under anything other than a GPL or commercial license. If you want to integrate MSS Code Factory itself with Eclipse, you'll have to do so by running it as an external tool, the same as would be done for gcc.
The code base has been refreshed and repackaged using the latest CFLib and CFCore jars, and updated using the latest version of MSS Code Factory, 1.11.1069. These three packages comprise the core Apache 2.0 licensed code base that can be used by any application, including non-GPL licensed code.
There were 103,564 lines added to CFCrm 2.0 since the last major update for production. Not bad for a slow month. That's a 3,340 lines per day average for a 31-day stint without breaks. Not my most productive numbers by a long shot, but nothing to complain about.
There were 77,558 lines added to CFAsterisk 2.0 with this refresh.
Another 63,580 lines were added to CFEnSyntax 2.0.
CFFreeSwitch 2.0 saw the creation of 98,453 lines of new code.
CFGCash 2.0 added 118,271 new lines of code.
CFDbTest 2.0 saw an additional 174,618 lines of new code.
CFBam 2.0 grew by a measly 248,101 lines of code. :P
CFGui 2.0 grew by 141,036 lines.
I forgot to capture the line increase counts for CFSecurity, CFInternet, CFCrm, and CFAcc. Actually I couldn't do it for CFCrm because that's the project I use for the work-in-progress tests.
Last but not least amongst the measurable projects, CFUniverse 2.0 added 554,426 new lines of code.
That's a grand total of 1,579,607 new lines of code in the 25 days since MSS Code Factory was released to production and the 2.0 projects were last refreshed. A paltry 63,184 lines of code per day. Damn but I'm slow... :P :P :P
CFCrm 2.0 has been refreshed by MSS Code Factory 1.11.1000.
CFInternet 2.0 has been refreshed by MSS Code Factory 1.11.1000.
CFSecurity 2.0 now incorporates a build of the Swing code.
All of the CF* 2.0 projects have been produced by MSS Code Factory 1.11.735 and have passed their relevant build and database installation tests. CFDbTest 2.0 has also passed all regression test runs, subject to the limitations of the databases.
All CF* 2.0 projects save for CFBam and CFUniverse have been refreshed by MSS Code Factory 1.11.702 and rebuilt with CFLib/CFCore 1.11.688. CFBam and CFUniverse will be done much later today (it's 02h10 right now, and I don't expect them to be done manufacturing until noon at earliest.)
This is the test suite for MSS Code Factory 1.11.702. See the 1.11.702 release notes for details.
In changing the name of the AuditAction table to AudAct in the CFSecurity model, I broke all of the releases' database install scripts. I've changed it back and am remanufacturing the code. However, there are only two compile bugs left in each of CFBam and CFUniverse, so they were getting remanufactured regardless. It just means I have to remanufacture and repackage all the projects after this current batch of manufacturing by MSS Code Factory 1.11.614 is done.
The core libraries and shared packages referenced by most of the MSS Code Factory-produced projects have been remanufactured by 1.11.587, rebuilt, and repackaged for distribution.
This is the latest source code bundle for CFBam 2.0. It doesn't compile yet. There are the expected errors in the cfbam20 source bundle from the old type references in the custom code. There also appear to be problems with cfbammssql20 (which I didn't expect), and with cfbamxml20. Now that I think on it, I believe there had still been an outstanding compile error in the XML layers for this project even before all the object hierarchy restructuring was done, so I shouldn't be surprised. Down to 1,217 errors.
Unlike the other CF* 2.0 projects, CFCore produces a single jar file with a version number specified and a zipfile of its source that can be referenced from projects using the library.
The run scripts for CFAcc, CFAsterisk, CFCrm, CFDbTest, CFEnSyntax, CFFreeSwitch, CFGCash, CFGui, CFInternet, and CFSecurity were mistakenly referencing the CFCore 1.11 jar. This has been corrected and their installers have been repackaged with the updated scripts. Their code should all be current with MSS Code Factory 1.11.490.
The source code snapshots for CFBam and CFUniverse are still based on the 1.11.420 code, so they are rather out of date. But the Eclipse projects have been configured for them, so this update provides those Eclipse configurations in the source bundles for these two projects.
CFFreeSwitch 2.0 has been built and packaged from the code produced by MSS Code Factory 1.11.490.
CFGui 2.0 has been built and packaged from the code produced by MSS Code Factory 1.11.420.
CFEnSyntax 2.0 has been built and packaged from the code produced by MSS Code Factory 1.11.420.
The 1.11.420 release has problems with the MS SQL Server code when an object is not audited, because the rules were automatically producing the release logic for the Audit statement objects, which don't exist unless a base table specifies HasHistory.
I'm about to test build CFBam as well. If there are more defects to fix in CFBam 2.0, I'll do so before issuing a release of MSS Code Factory 1.11 with the patches that produced a compiling version of CFAsterisk.
Just to be clear: 1.11.420 will *not* produce the CFAsterisk 2.0.473 code without updates from the SubVersion repository.
This is a full snapshot of the source bundles and the builds with the MS SQL Server 2012 JDBC 4.0 jars referenced by the builds.
The source bundles now include the Eclipse configurations for the 2.0 projects, for those projects whose builds have been posted to date: CFAcc, CFCrm, CFDbTest, CFGCash, CFInternet, and CFSecurity.
Eventually the other projects will have build environments created and their refreshed source bundles with the Eclipse configuration and a build installer will be distributed for them. It takes an hour or two per project to set up a build environment in Eclipse, do the build, and export the jars for packaging.
It's dull, tedious work. I'm thinking about switching to ant scripts so I can automate the process down to something I can invoke from the command line, and script the whole process of extracting SubVersion updates, doing a build, packaging the installer, and packaging the source bundle.
CFAcc(ounting) 2.0.460 has been built and packaged from the source code produced by MSS Code Factory 1.11.420.
CFCrm 2.0 has been built and packaged for installation.
I've changed my mind about introducing a dependency between a project/schema and the schemas it imports/references. I had been thinking about modifying the object inheritance to follow the project hierarchy, but I realized that won't work with the way I've defined the database buffers. Not, at least, without making for some seriously ugly and error-prone code.
CFGui has been refreshed by MSS Code Factory 1.11.420, and CFDbTest has been refreshed, rebuilt, and repackaged.
CFBam and CFUniverse remain to be refreshed. Those will take many more hours, however.
CFAcc, CFAsterisk, CFCore, CFCrm, CFEnSyntax, CFFreeSwitch, CFGCash, CFInternet, and CFSecurity have all been remanufactured by MSS Code Factory 1.11.420.
CFGCash, CFInternet, and CFSecurity have also been compiled and packaged for installation.
CFGui, CFBam, and CFUniverse are still manufacturing.
Additional builds will occur in due time for the other packages that haven't been compiled yet.
The changes to correct the implementation of a verb in the MSS Code Factory 1.11.373 release corrected the build problem for CFGCash 2.0, while also applying the changes made to the CFCrm 2.0 business application model.
The CFCrm changes will now have to be propagated to CFAcc, CFAsterisk, CFBam, CFCore, CFDbTest, CFEnSyntax, CFFreeSwitch, CFGui, and CFUniverse.
CFAcc will become the repository for common accounting principles and algorithms, while CFGCash will become a user interface over top of CFAcc.
The last of the 2.0 projects has now been refreshed by MSS Code Factory 1.11.306. Another 65,711 lines have been added to CFUniverse 2.0, bringing the total size to 13,884,300 lines of code.
I've also started working on building CFGCash 2.0, but there are a couple of errors compiling the SAX parser, so I've got to look into that. Good thing 306 is only a release *candidate*, not production. I really should build all the 2.0 projects before I consider a production release.
The code for CFBam 2.0 has been refreshed by MSS Code Factory 1.11.306.
CFUniverse should be done in another hour or few.
The code for CFAsterisk, CFCore, CFCrm, CFEnSyntax, CFFreeSwitch, CFGCash, CFGui, CFInternet, and CFSecurity 2.0 has been refreshed by MSS Code Factory 1.11.306.
CFBam and CFUniverse will follow along some time in the next 24 hours.
This is the test suite for release candidate 3 that was used to validate the X(ml)Msg layers.
JDK 7u55 has been released for Debian, so MSS Code Factory CFDbTest 2.0 has been rebuilt and repackaged using that release.
The update to the JDK seems to have corrected the problem with resolving "localhost" network names, so you are no longer required to use IP addresses in the .cf*rc configuration files for the loaders.
The remanufactured database schema creation scripts now make consistent use of the TLDId attributes, as do the java layers.
The jar trees for the remaining databases have been added for the XMsg loaders.
Seeing as PostgreSQL 9.1 is misbehaving under Debian, I'm allowing for the use of MySQL 5.5 as an alternative test bed for the XMsg testing.
The distribution has also been rebuilt and packaged with CFLib/CFCore 1.11.243.
Modified the CFInternet model to consistently reference the TLDId in hopes of resolving the errors in tests 0034 and 0035.
Running the CFDbTestRunXMsgMySqlTests test suite, there are still 4 issues that are not caused by the limited date range support provided by MySQL.
The first two tests, 0002 and 0004, also affect the CFDbTestRunMySqlTests execution, so I'll focus on dealing with those two problems next.
Test 0002 LoadISOCurrency attempts a duplicate insert of ISOCurrency Id 2, the same as had been occurring with the PostgreSQL version of this loader.
Test 0004 TestNamedLookup never gets as far as attempting a duplicate insert of the ISOCountryCurrency (as the PostgreSQL code used to do), because it hits the same error as Test 0002 before the lookup searches can be invoked; the join-by-name lookups don't seem to be happening properly for the XMsg interfacing when operating via the command line.
Basically it sounds like there is an issue with the indexed name resolvers in a client implementation. I'll rerun the raw MySQL SAX loader tests to see if the problem exists in that code too. It shouldn't. It used to work.
Test 0034 has developed a new exception. It is now throwing a ClassCastException trying to coerce a CFDbTestTopDomainObj into a ICFDbTestProjectBaseObj.
Test 0035 is still producing the "Unrecognized attribute 'TLDId'" exception from the CFDbTestXMsgRqstTopDomainUpdateHandler.
There are still quite a few tests to be debugged before all 33 tests are passed by the XMsg interface, but it's a solid start with the change over to CFLib/CFCore 1.11.207.
CFUniverse 2.0 was the last to be remanufactured, and now weighs in at 13,818,701 lines of code. :P
CFBam 2.0 has now been refreshed by MSS Code Factory 1.11.42. That leaves CFUniverse still running. It should be done by tomorrow.
With the exception of CFBam and CFUniverse, all CF* 2.0 projects have been updated with MSS Code Factory 1.11.42. That includes CFAsterisk, CFCore, CFCrm, CFEnSyntax, CFFreeSwitch, CFGCash, CFGui, CFInternet, and CFSecurity.
CFDbTest 2.0.50 was already in sync with 1.11.42.
This code has not been remanufactured by MSS Code Factory 1.11.42, it's just a repackaging of the code after it was restored to the subversion server as part of my recovery efforts. Now I can remove the old downloads from the SourceForge "Files" service.
There have been no code changes since the last issue of CFDbTest 2.0, but I wanted to get the version numbers in sync with the ongoing restoration of the subversion repository.
The first four CFDbTest 2.0 X(ml)Msg tests now run successfully. There are a lot of problems with the remaining tests, some trivial, some not so much. But I got a lot accomplished for today so I'm calling it a night, and will pick up the debugging at some point in the future. Maybe tomorrow, maybe later. It depends on my mood, and the weather is not cooperating (migraines.)
The latest code manufactured by 1.11.10754 incorporates a new executable for CFDbTest. The appropriate run-scripts have been created, and I'm ready to begin testing the implementation. But perhaps not today. Let me enjoy a day of do-nothing before I dive into testing. Or maybe a few days. Perhaps even the end of the month. We shall see.
The code base is now in sync. Note that the binaries for CFDbTest 2.0 have been refreshed as well. This rebuild adds over 400,000 lines of code to CFUniverse.
Just a few tweaks and cleanups, really. But it touches a lot of header comments.
This clean-compiling version of CFDbTest has an almost-complete implementation of the request/response partner pairs. I just need to stitch together a simple direct-invocation version of the sendReceive() method to bind together a general database persistence layer with a Client layer invoking the persistence layer via a request parser's parseStringContents() method.
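A direct-invocation binding of that sort might look something like the following sketch. All of the names here are hypothetical placeholders, not the actual CFDbTest classes; the idea is simply that the "client" hands its request XML straight to a handler wrapping the server-side request parser's parseStringContents(), in place of a network round trip:

```java
// Hypothetical sketch of a direct-invocation sendReceive(): in a real
// client-server deployment this would serialize the request over a socket;
// here it is a plain in-process call into the persistence layer's parser.
public class DirectInvocationChannel {

    public interface RequestHandler {
        // e.g. implemented by wrapping a request parser's parseStringContents()
        // and returning the formatted response document
        String handle(String requestXml);
    }

    private final RequestHandler handler;

    public DirectInvocationChannel(RequestHandler handler) {
        this.handler = handler;
    }

    public String sendReceive(String requestXml) {
        return handler.handle(requestXml);
    }
}
```

Swapping this channel out for a socket-backed implementation later would leave the Client layer untouched, since both sides only see sendReceive().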
There is now enough functionality in the Client layer that it will be worthwhile to detour into creating a test framework for the new code now.
The response processors need to be wired next. The list responses have been properly handled, but not the unique key references.
The wiring of the response processors and result evaluations is far from complete. I haven't even sketched it out for one case completely yet; I've been focused on providing the core code to use for the parser implementations.
CFDbTest 2.0 has been remanufactured by MSS Code Factory 1.11.10648 and rebuilt with CFLib and CFCore 1.11.10626. This enabled the addition of a parseStringContents() method for the base implementation of the CFLib SAX Parser and subsequent overloads by the XML SAX Loader, XMsgRqstHandler, and XMsgRspnHandler parser implementations.
CFDbTest 2.0 has been remanufactured by MSS Code Factory 1.11.10595 and rebuilt with CFLib and CFCore 1.11.10572.
The SAX request response formatting has been added to all of the 2.0 projects.
During the refresh, CFUniverse 2.0 grew by 96,836 lines of code.
With the remanufacturing of CFDbTest, the initial coding of the XML message request parser is complete and ready for integration with a communications protocol.
CFUniverse 2.0 has been refreshed by the latest version of MSS Code Factory.
This beastie now weighs in at 13,266,341 lines of source code in a 112MB compressed zip file. :)
CFBam 2.0 has been refreshed by the latest version of MSS Code Factory.
CFAsterisk, CFCore, CFCrm, CFEnSyntax, CFFreeSwitch, CFGCash, CFGui, CFInternet, and CFSecurity 2.0 have been refreshed and refactored to replace the IMssCFAnyObj references with ICFLibAnyObj by remanufacturing them with MSS Code Factory 1.11.10485.
The parsers and formatters produced for the three XMsg layers now properly implement the formatting and parsing of the Revision and Audit attributes, having been remanufactured by 1.11.10485.
CFAsterisk, CFCore, CFCrm, CFEnSyntax, CFFreeSwitch, CFGCash, CFGui, CFInternet, and CFSecurity 2.0 have been refreshed and refactored to replace the IMssCFAnyObj references with ICFLibAnyObj by remanufacturing them with MSS Code Factory 1.11.10459.
All sub-projects save CFCore now specify ExtendCFCore; CFCore cannot, of course, extend itself.
CFDbTest 2.0 clean compiles with the code produced by MSS Code Factory 1.11.10459. This completes the refactoring from IMssCFAnyObj to ICFLibAnyObj, including the removal of the CFCore imports for the majority of the manufactured object code. Only SchemaMssCF itself depends on or imports CFCore now.
The message formatting static methods for requests and responses have been added to the XMsg package by remanufacturing with MSS Code Factory 1.11.10408.
CFAsterisk, CFCore, CFCrm, CFEnSyntax, CFFreeSwitch, CFGCash, CFGui, CFInternet, and CFSecurity have been remanufactured by MSS Code Factory 1.11.10361
The test suite has been rebuilt and repackaged with the latest version of CFLib and CFCore.
The new CFDbTestXMsg20 jar/package clean compiles now and is included in the installation zip file.
The test suite has been rebuilt and repackaged with the latest version of CFLib and CFCore.
The test suite has been refreshed, recompiled, and repackaged.
CFLibXmlUtil.formatBlob() has been added to the XML formatting/encoding repertoire.
CFDbTest 2.0 has been rebuilt and repackaged with the latest CFLib and CFCore.
The 2.0 projects have been updated and refreshed to apply the change set up to and including 1.11.10296.
CFDbTest 2.0 has been updated and refreshed to apply the change set up to and including 1.11.10296.
Seeing as MSS Code Factory 1.11.10241 exposes the delete-by-index methods through the TableObj layer, I decided to remanufacture all the projects. I was also curious about just how big things were going to get with the X(ml)Msg layer added, even though it's nowhere near ready for use or testing yet.
CFDbTest 2.0 has been rebuilt and repackaged using CFLib 1.11.10205 and CFCore 1.11.10205. The X(ml)Msg layer has also been refactored into separate request and response parsers.
CFUniverse refresh is finally complete. It takes a while. About 30 hours.
CFBam has finally been refreshed. It takes a few hours to process that one. The only project that takes longer is the CFUniverse.
CFAsterisk 2.0, CFCore 2.0, CFCrm 2.0, CFDbTest 2.0, CFEnSyntax 2.0, CFFreeSwitch 2.0, CFGCash 2.0, CFInternet 2.0, and CFSecurity 2.0 have been remanufactured and repackaged by 1.11.10050 RC2.
The CFGCash 2.0 project has been initiated. This project will specify a data model based on a re-engineering of the gnucash-2.6.1.tar.bz2 source data objects and storage in terms of an MSS Code Factory Business Application Model, starting with the objects in CFSecurity, CFInternet, and CFCrm as the base business data model.
The MSS Code Factory CFCrm 2.0 model will be extended and enhanced as required to store any extra attributes specified by GNU Cash 2.6.1 for those same business object concepts, on the presumption that GNU Cash is following some sort of standard for the definition of those attributes. I'd prefer to base it on SWIFT data attributes, but you have to pay for those and sign NDA's, so I'll settle for GNU Cash as my "standard" for an accounting business application model.
The idea is to re-engineer GNU Cash in Java 7 as a client-server application centering on a single database server for the enterprise data cluster.
Rather than implementing a transaction API per-se, I'll use the existing functionality and power of the MSS Code Factory to specify the client-server transactions as adaptive Java object code and attributes that wires themselves to not only the current project, but as a referenced project by other projects.
You can see what I mean in the CFSecurity 2.0 specifications -- there is custom Java code deployed on a per-instance basis for several of the security table implementations. You run that adaptive code every time you call on the CFSecurity services. (Speaking of which, I need to propagate the isSystemUser(), isClusterUser(), and isTenantUser() implementations from the JDBC schema layers and make those APIs part of the standard/core schema implementations, with default true implementations for the RAM layer, and default false implementations for the base objects.)
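A sketch of how that propagation might look, assuming a plain abstract base class; the class names here are illustrative, not the manufactured code.

```java
// Illustrative class names only, not the manufactured code: the base schema
// objects answer false for the distinguished-user checks, while the RAM
// layer, which has no persistent security store, permissively answers true.
abstract class CFSecuritySchemaBase {
    public boolean isSystemUser(long secUserId) { return false; }
    public boolean isClusterUser(long secUserId) { return false; }
    public boolean isTenantUser(long secUserId) { return false; }
}

class CFSecurityRamSchema extends CFSecuritySchemaBase {
    @Override public boolean isSystemUser(long secUserId) { return true; }
    @Override public boolean isClusterUser(long secUserId) { return true; }
    @Override public boolean isTenantUser(long secUserId) { return true; }
}
```

The JDBC schema layers would then override the base defaults with real lookups against the security tables.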
By using this technology, I'll expose the transactions of the GNU Cash engine itself as a standardized adaptive API and implementation/customization.
With that experience under my belt, I'm already thinking about how I could add specifications for a Transaction with TxArg specifications that reference a Table, IndexColumn, or Table column as type specifiers, as well as a TxKey specification that references an Index specification, and finally a TxObj specification that references a Table specification.
Dealing with polymorphic interfaces from those business model extensions will be "challenging", to say the least, but my goal is to be able to specify the transaction body as a GEL expansion, the same way that I do for the custom Java code layers in MSS Code Factory specifications today. You *could* continue to write the code as custom Java code, as I will be doing for this prototype, but I want to have a more maintainable and formal specification of a transaction body so that I can create custom XSD specifications for the transaction invocations and wrap them up as remote procedure calls against a JEE/XML server.
But one step at a time.
First, a client-server adaptive layer of customized Java implementations that expects to be running within the database context of a connection. With the existing schema implementations, that means you can wire a new JEE connection object from your server pool to a manufactured schema implementation, and it will rewire itself to use that connection, including re-establishing the prepared statement bindings that it understands. The longer and more complex a transaction is, the more the reused prepared statements pay off, and the faster the MSS Code Factory implementation executes.
It is, literally, JIT database interfacing, with the binding recompiled/re-established for each transaction/request being processed in the JEE environment. The implementation templates were prototyped about three years ago; I know how to do the JEE wiring just fine. That's the easy part.
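A minimal sketch of the rewiring idea, under the assumption of a per-table statement cache; the class and method names are illustrative, not the manufactured API.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a table I/O object that can be "rewired" to a new
// pooled connection, lazily re-establishing its prepared statements on demand.
class RewirableTable {
    private Connection cnx;
    private final Map<String, PreparedStatement> stmts =
        new HashMap<String, PreparedStatement>();

    // Attach a (possibly new) pooled JEE connection. Any statements prepared
    // against the old connection are invalid, so close and forget them.
    public void setConnection(Connection newCnx) throws SQLException {
        for (PreparedStatement ps : stmts.values()) {
            ps.close();
        }
        stmts.clear();
        cnx = newCnx;
    }

    // Lazily (re-)establish the prepared statement for a given SQL string.
    // The longer the transaction, the more often each statement is reused,
    // amortizing the cost of the prepare.
    public PreparedStatement prepare(String sql) throws SQLException {
        PreparedStatement ps = stmts.get(sql);
        if (ps == null) {
            ps = cnx.prepareStatement(sql);
            stmts.put(sql, ps);
        }
        return ps;
    }
}
```

The lazy re-prepare is what makes the binding "JIT": nothing is compiled against the new connection until a transaction actually touches that statement.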
CFGCash 2.0 starts out with 1,369,045 lines of inherited code from its MSS Code Factory origins, plus the contents of the GNU GPL bzip of source code in the "sources" directory which will provide the "specifications" of what the transactions have to do with the data.
Just for information, here are the sizes of the manufactured code bases:
|Sub-Project|Lines of Code|
This is the test suite for the code manufactured by the release candidate.
The test suite now uses the dynamic schema names for all the databases, though the application CLIs have not been modified to accept a schema name. Instead, you use the database connection configuration file to specify the database name, which is automatically applied during program initialization.
The CFUniverse project is a merger of all the open source projects I have "on the go" with MSS Code Factory (i.e. ideas knocking around my head.) It is 51,029 files of dual GPLv3/Commercial licensed goodness weighing in at 11,380,996 lines of code, and shipped as a 95 megabyte zipfile of source code. It takes about 22 hours for my Core i7 laptop to manufacture the code.
There was a correction made to the Oracle database creation scripts for CFGui 2.0.
There was a correction made to the Oracle database creation scripts for CFAsterisk 2.0.
CFEnSyntax is a renaming of CFParseEN. I don't know why MSS Code Factory doesn't like the old name, but it doesn't. It seems to confuse a Java hashmap for some reason. At least that's my guess. Regardless, here it be.
CFGui 2.0 now manufactures correctly and is ready for a test build.
It's been many months since CFBam 2.0 would manufacture successfully. :)
The regression tests for PostgreSQL have been passed by the code produced by MSS Code Factory 1.11.9792.
The Apache 2.0 licensed projects CFSecurity, CFInternet, and CFCrm have all been refreshed by MSS Code Factory 1.11.9792.
The Dual GPLv3/Commercial by Mark Sobkow licensed projects CFCore, CFBam, CFAsterisk, and CFFreeSwitch 2.0 have been refreshed by MSS Code Factory 1.11.9792.
There is now an msscf project created similar in scope to the ram project, to which the MssCF Java packages have been moved. This means that the core package produced for a project will no longer include the MssCF support by default; you'll have to specify the jar/library explicitly.
Remanufactured by 1.11.9769.
The core projects have all been remanufactured and repackaged with the changes made to the CFInternet model as of MSS Code Factory 1.11.9769.
The Dual GPLv3/Commercial license headers now reference the CFSecurity, CFInternet, and CFCrm projects rather than the old conglomerate CFSme project.
The Dual GPLv3/Commercial license headers now reference the CFSecurity, CFInternet, and CFCrm projects rather than the old conglomerate CFSme project.
The Dual GPLv3/Commercial license headers now reference the CFSecurity, CFInternet, and CFCrm projects rather than the old conglomerate CFSme project.
The license headers have been updated to reflect the references to CFSecurity, CFInternet, and CFCrm by application code instead of the old conglomerate CFSme project.
The license headers have been updated to reflect the references to CFSecurity, CFInternet, and CFCrm by application code instead of the old conglomerate CFSme project.
The old source from the CRM portion of the old CFSme model has been purged, leaving only the CFSecurity portions.
The old source from the CRM portion of the old CFSme model has been purged, leaving only the CFSecurity portions.
The FreeSwitch configuration information database is just something I have had kicking around for a while now...
The Asterisk configuration information database is just something I have had kicking around for a while now...
The CFCrm source bundle has been refreshed. There was no change to the manufactured code caused by the shift to the integrated CFIso/CFSecurity model (as expected.)
The CFInternet 2.0 source bundle has been refreshed. There was no change to the manufactured code caused by the shift to an integrated CFIso/CFSecurity model (as expected.)
The CFSecurity 2.0 source bundle has been refreshed.
The Apache 2.0-licensed source bundles for the CFIso, CFSecurity, CFInternet, and CFCrm projects have been released as 2.0.9676 source zips and posted to the MSS Code Factory "Files" section for download. They have not even been test-compiled yet; I suspect CFIso will have to be merged into CFSecurity.
I'll keep you posted on that.
This is the regression test suite for beta 22.
The CFDbTest 2.0 executables are now ready to begin the PostgreSQL regression test runs. Once those are passed, I'll issue a beta.
By all means, feel free to download it and give it a run while I do the same formally.
This is the first of many internal snapshots of the updated PostgreSQL functionality to implement dynamic schema naming/binding at runtime. I may need to rework the naming of some procedure invocations as well to strip off the explicit specification of a schema name.
The CFSme 2.0 template model is now deployed as Apache 2.0 licensed code, so that any project, whether proprietary or not, can extend and reference the shared code base.
The CF* 2.0 projects are currently in the process of being updated to reference their use of this Apache 2.0 licensed code, serving the notification of change requirement for extending Apache 2.0 code.
Note that CFSme 2.0 is only distributed in source form, not prebuilt packages.
CFSme 2.0 went on a code diet with the license change, and is down to 1,235,914 lines of source code, down from 1,391,274 lines of source, a reduction of 155,360 lines.
There is one test for DB/2 LUW 10.5 which fails at the command line -- Replace OptFullRange. However, the create() and update() methods that are used by this test have already *passed* their testing in earlier tests; the only thing this test does differently is a delete and re-insert, using the same insertion code that had already worked!
Furthermore, the bug does not occur under Eclipse.
So I just have to chalk it up to IBM being inconsistent in some form or fashion with their driver; there is nothing I can do to fix the problem.
All other tests run successfully for DB/2 LUW 10.5.
The delete permission problem turned out to be an issue with the scripts that drive the tests, at least for PostgreSQL. They should have been configuring the behaviour of the loader for OptMinValue objects, not OptFullRange objects. The test suite has been re-run for PostgreSQL and passes all tests with flying colours now.
Oracle 11gR2 passes all tests successfully now.
MySQL 5.5 still fails on some of the extremes of date-time ranges, but there is no fix for that -- it's a restriction of the database.
Sybase 15.7 and SQL Server 2012 still fail to reject on a permission denied for the delete tests. I'll have to investigate those two databases further. This may be tied to the failure of SQL Server to report an error when replacing complex objects; I'm sure some error is being produced by the stored procedure, or there wouldn't be data littering the system after running the sp_delete_schemadef().
DB/2 LUW won't be retested until some time in the future as I need to reinstall the database software on my rebuilt Debian box after I lost my Ubuntu installation to a failed upgrade of the system. I haven't used DB/2 LUW since that time, so I've some work to do. I'll probably upgrade to a newer release of DB/2 if available as well.
The CFDbTest 2.0 test suite has been refreshed and packaged for MSS Code Factory 1.11.9420.
This is the updated core/shared code snapshot for 1.11.9420, which is published under a BSD license. Because all application models derive from an SME model to start with, they include the objects it defines. Rather than restrict such code to a GPL license, it is released under BSD so all you have to do is credit the CFSme 2.0 project in your license header for your application, and you're free to use it in combination with any license you choose for your code.
That's over 1.4 million lines of free code for you to start your project with.
The updated snapshot of the core BSD code on which everything gets built and extended has been released.
The CFDbTest 2.0 regression test suite for MSS Code Factory 1.11.9359 Beta 19 is now available.
The complex object replacement is still failing; I suspect a problem with the delete stored procs. However, all the other tests are passing now.
The Sybase ASE 15.7 SAX Loader command line interface has been upgraded to support the extra security arguments needed for permissions testing later on in the test cycle. First off, it will be necessary to upgrade the arguments passed into the CLI to match those provided for the PostgreSQL runtime tests.
The JDBC enhancements for the Sybase ASE 15.7 binding layer have been coded and are ready to be integrated and tested. There are some changes to the SaxSybaseLoader implementation that need to be brought over from the PostgreSQL implementation of the same layer in order to drive the enhancements that will be made to the test invocation scripts for Sybase. The extra arguments are needed to allow for security constraint tests at runtime.
The Sybase ASE 15.7 implementation is ready for testing, save for auditing of tables incorporating BLOB data. Those get updated without auditing by the client side code, so they may error out in practice. Basically, BLOBs are NOT SUPPORTED by this build.
Some of the other databases were not properly invoking the specialized delete methods for subclass objects/tables, which could have been a problem for more complex data structures. This has been corrected, and the updated Oracle creation scripts are shipped with this installer. I believe DB/2 LUW, PostgreSQL and MySQL were ok, but you should download the 1.11.9215 installer and make sure you're up to date, because it was a rather serious bug.
The Sybase ASE 15.7 scripts install clean (save for warnings about undefined functions during the load/install process), but the JDBC layer has not been brought in sync with the new function signatures, so you can't run any of the Sybase tests at this time.
The changes to the Sybase database creation and stored procedure scripts are complete and are ready to begin test installations to a database server.
The BSD-licensed CFSme 2.0 code has been refreshed by Beta 18, adding roughly 75,000 lines of code.
The integration test suite for 1.11.8954 passes the tests for Oracle 11gR2, DB/2 LUW 10.1, MySQL 5.5, and PostgreSQL 9.1.
There were a number of errors being reported by the BLOB support for the DB/2 LUW tests. However, there is nothing in the requirements of testing the permissions that requires the use of a client-side BLOB object for the testing, so I switched it over to using a "regular" object without BLOB columns.
Now DB/2 LUW 10.1 passes all tests save for the ones involving BLOB columns, and even those tests run successfully under Eclipse.
The BSD-licensed SME code on which all projects are based now weighs in at 1,218,523 lines with the DB/2 LUW support added.
I'm down to two errors for the DB/2 LUW implementation of CFDbTest 2.0. Both are cases of messages like the following when running the test suite from the command line. However, when you run those loads under the Eclipse debugger, they load just fine, as they should. So I have no outstanding bugs that I can replicate under the debugger and tackle.
Therefore I'm going to have to give DB/2 LUW a "qualified pass" for Beta 17.
This beta includes support for PostgreSQL 9.1, MySQL 5.5, and DB/2 LUW 10.2 as tested under Ubuntu 13.04 64-bit.
While looking for something significant about "8680", I came across the Massey-Ferguson 8680 tractor. Not exactly relevant to programming.
PostgreSQL 9.1 and MySQL 5.5 work as well, MySQL with date limitation constraints, PostgreSQL with a full unqualified pass of all tests.
Four more bugs to squish...
PostgreSQL and MySQL pass; DB/2 LUW is passing about half its tests.
The CFSme 2.0 project is released under a BSD license, not GPLv3 like most of the factory itself. This is a freely available code base that you can pick and choose from when implementing an enterprise-focused system with auditing and history logging requirements -- with or without relying on MSS Code Factory to produce customized versions of the code (easiest). Or, if you want to go with the old-fashioned manual copy-paste-edit approach, grab the Files download net-sourceforge-MSSCodeFactory-CFSme-2.0.8625-BSD-src.zip
PostgreSQL and MySQL support work. DB/2 LUW is a work in progress.
It falls down for now.
Both the MySQL 5.5 and PostgreSQL 9.1 implementations for Ubuntu 13.04 have passed their regression tests with the code produced by MSS Code Factory 1.11.8594.
Both MySQL and PostgreSQL for Ubuntu 13.04 have been tested with support for the new audit attributes of the Buff objects. There are no unexpected issues with this release of code; everything "just works."
The CFDbTest 2.0 suite for MSS Code Factory 1.11.8226 passes all tests for MySQL 5.5 under Ubuntu Linux 13.04 save for 3-4 failures: well-reported date/timestamp exceptions caused by MySQL's limited date range support compared to that defined by Java itself.
The CFDbTest suite for MSS Code Factory 1.11.8123 passes all tests for MySQL 5.5 under Ubuntu Linux 13.04 save for a couple of date exceptions caused by MySQL's constraints on date ranges vs. the values considered valid by Java.
The CFDbTest suite passes regression tests for PostgreSQL under Ubuntu Linux 13.04.
The CFDbTest database creation scripts for MySQL 5.5 under Ubuntu Linux 13.04 install cleanly.
The CFDbTest suite for MySQL 5.5 does not currently run successfully because the stored procedures and client JDBC have not been enhanced to specify the required audit columns that have been added to the table creation scripts for the database.
This regression test suite for the latest PostgreSQL changes to support read and delete by optional indexes through stored procedures has been passed.
PostgreSQL passes all the regression tests in CFDbTest 2.0. The PostgreSQL feature set is now complete.
Test suite produced by MSS Code Factory 1.11.7764 passes PostgreSQL regression tests.
The PostgreSQL security enforcement has been implemented and tested using MSS Code Factory CFDbTest 2.0.7683.
Note that I've upgraded to the "Kepler" release of Oracle OEPE for doing builds, and now use Java JDK 7 for those builds. Previously I'd been using an Eclipse release that was built for JDK 6.
The test suite for 1.11.7658 only partially works -- you have to manually insert a bunch of the TSecGroupMember objects by hand using PgAdmin or the command line before the tests run properly. But the database IO code itself has been exercised with this bundle.
The test suite has passed its regression tests for 1.11.7632.
Test suite produced by MSS Code Factory 1.11.7616. Note that all of the regression tests are run as the system user, so verification of the user-based permission enforcement isn't actually complete yet.
Test suite produced by MSS Code Factory 1.11.7525.
The test suite for beta 11 weighs in at 40MB compressed, the result of compiling and packaging 2,734,800 lines of code.
This test build is the first full clean compile of the PostgreSQL audit column support. With any luck this will lead to Beta 11 later today.
Note that this is a much larger installer, as CFDbTest 2.0 now weighs in at over 2.5 million lines of code. Considering the size of the code base, a 40MB executable isn't too bad.
CFDbTest 2.0 now exercises the PostgreSQL audit histories for Beta 10.
This is a test case build for exercising the new PostgreSQL support for auditing through enhancement of the stored procedures. There are still a handful of edge cases that need to be addressed with client-side code when dealing with BLOB data attributes in the object hierarchy. Those objects don't currently get properly audited.
The PostgreSQL history table creation has been tested and corrected.
I'm ready to begin testing with this snapshot.
CFDbTest 2.0.6900 weighs in at 1,505,524 lines of Java and database creation code and takes over five hours to produce.
This is the regression test suite used to verify that the changes to the delete-by-index stored procedures for MySQL 5.5 haven't broken any previously-tested functionality.
This is the regression test suite used to verify that the changes to the delete-by-index stored procedures for PostgreSQL haven't broken any previously-tested functionality.
The goal is to create a complete model of the Asterisk 11 configuration files and component data structures and elements within those files, such that a CFCore engine can be used to author those objects from a database repository containing the configuration. Configurations will be stored on a per node basis, as you can only have one Asterisk installation running on a typical node; you don't even want your database engine running on the same node. IP telephony demands uninterrupted fast access to the CPU to function properly. Freeswitch has the same limitations on load distribution.
This build incorporates the tests that were used to exercise the MySQL stored procedure implementation. Due to limitations of the MySQL data types, date range exceptions were being thrown by a number of tests. However, the code does work, it's just limited in functionality.
I've modified most of the tests to allow for MySQL's range limitations, but there is no trivial way to correct the implementation of TZTime columns, which presume a fully-ranged datetime column that can deal with a Java starting date of 0001-01-01, which MySQL cannot do. Therefore if you are planning to support MySQL with your project, do not use TZTime columns.
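Since MySQL's DATETIME type only covers 1000-01-01 through 9999-12-31, a client-side guard along these lines (illustrative only, not CFDbTest code) shows the kind of check needed to reject Java's 0001-01-01 floor before it ever reaches the MySQL driver.

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

// Illustrative guard, not CFDbTest code: MySQL's DATETIME range runs from
// 1000-01-01 to 9999-12-31, so values at Java's 0001-01-01 floor have to be
// rejected (or avoided) before they reach the MySQL driver.
class MySqlDateRangeGuard {
    static final int MIN_YEAR = 1000;
    static final int MAX_YEAR = 9999;

    static boolean fitsMySqlDateTime(Calendar value) {
        int year = value.get(Calendar.YEAR);
        return year >= MIN_YEAR && year <= MAX_YEAR;
    }
}
```

TZTime columns can't be salvaged this way because their implementation presumes the full Java range, which is why the advice above is simply to avoid them on MySQL.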
Errors with the other database tests were also corrected since Beta 8.
Code clean compiled and ready to begin testing.
A TableAddendum is used by a SchemaDef to specify additional characteristics of a table. It will eventually replace the TableRelations functionality for CFBam 2.0 specifications. A SchemaDef may only have one TableAddendum for any given table. If more than one set of addendum text is found in a 2.0 specification, the additional specifications will be added to the existing addendum.
After a given SchemaDef is fully loaded, its addendums are applied to the Table being modified, so that the Table specification incorporates all of the addendums specified by this schema.
The initiating or root SchemaDef that is to be manufactured should specify all of the schemas it wishes to import rather than relying on the implicit import of schemas by referenced business application models. I'm not even sure implicit imports will resolve properly as I haven't written the code yet. But I don't want that behaviour to be presumed at this time.
TableAddendums can contain any of the objects normally specified as being Contained by a Table, including AddendumIndex, AddendumRelation, AddendumCol, and the various Addendum atom specifications.
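As a purely hypothetical illustration of the shape such a specification might take; the AddendumIndexCol element and all attribute names are guesses, and only TableAddendum, AddendumIndex, AddendumRelation, and AddendumCol are named above.

```xml
<!-- Hypothetical shape only: element nesting and attribute names are
     assumptions, not the actual 2.0 grammar. -->
<TableAddendum Table="SomeTable">
    <AddendumCol Name="ExtraCode" Type="Text" />
    <AddendumIndex Name="SomeTableExtraIdx">
        <AddendumIndexCol Name="ExtraCode" />
    </AddendumIndex>
</TableAddendum>
```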
During the load of a referenced SchemaDef, the load is done into the tenant's object space where it can be referenced by the tenant's business application model(s).
Thus we build up the onion layers of BAM models, one on top of the other until we've built a complete view incorporating all the referenced models as one monolithic enterprise model.
Each AnyObj object now specifies a Defining object relationship which specifies not where the object was originally defined, but where it was imported from by the current model. Thus a chain of definitions is possible as implicitly loaded models will appear to be defined by the model that specified their loading rather than the originating model as would be the case if you properly listed the loaded models in your main/root model.
The CFDbTest 2.0 packaging does not include any source code changes, just finalization of the MySQL 5.5 stored procedure creation scripts.
This release provides the test suite as used to validate the implementation of the table-class hierarchy creation and update code.
The executables for CFDbTest 2.0 have all been recompiled with OpenJDK 7 and incorporate the new PostgreSQL stored procedures and JDBC code for implementing delete-by-index as well as cascading deletes in the stored procedures themselves.
The code is now ready for testing.
The new JDBC implementation wiring in the new stored procedures for the implementation of the delete-by-index methods is completely coded, clean compiles, and is ready for testing.
Both the direct and dynamic SQL support for the delete-by-index stored procedures compiles and installs cleanly to a PostgreSQL 9.1 database.
The RAM implementation now incorporates cascading deletes and fleshed out implementations of the delete-by-suffix accessors, complete with analysis of ClassCode values to determine the correct sub-delete to invoke for a given instance.
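The ClassCode-driven dispatch can be sketched roughly as follows; the class codes and method names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the ClassCode analysis: a generic delete entry point
// inspects the instance's ClassCode to invoke the correct sub-delete, so a
// subclass row cascades into its base table row as well. The class codes and
// method names here are invented for illustration.
class RamDeleteDispatcher {
    final List<String> deleteLog = new ArrayList<String>();

    void deleteByClassCode(String classCode, long pkey) {
        if ("BTBL".equals(classCode)) {
            deleteBaseRow(pkey);
        } else if ("STBL".equals(classCode)) {
            // Subclass instances delete their extension row first,
            // then cascade to the base table row.
            deleteSubRow(pkey);
            deleteBaseRow(pkey);
        } else {
            throw new IllegalArgumentException("Unrecognized ClassCode " + classCode);
        }
    }

    void deleteBaseRow(long pkey) { deleteLog.add("base:" + pkey); }
    void deleteSubRow(long pkey) { deleteLog.add("sub:" + pkey); }
}
```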
All of the CFBam, CFDbTest, CFSme, and CFCore 2.0 projects clean compile as of build 6206.
This is the test suite that was used to validate the Beta 6 manufactured code for the supported databases.
The BL is done being manufactured for CFCore and CFBam 2.0. I don't expect to be adding many, if any new objects for 2.0. I have some ideas for 2.1, but I'm content with the 2.0 model.
In particular, for 2.1 I want to restructure the models with an InheritModel construct that lets you import and inherit a model schema as part of the schema you're defining.
So you'd do something like a series of dependencies:
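Purely for illustration, such a dependency chain might read as follows; the model names and Reference format here are assumptions, not a published specification.

```xml
<!-- Hypothetical only: each model inherits the one below it in the chain -->
<InheritSchema Name="CFCore" Reference="net-sourceforge-MSSCodeFactory-CFCore-2-0" />
<InheritSchema Name="CFSme" Reference="net-sourceforge-MSSCodeFactory-CFSme-2-0" />
```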
Then application models can just import/inherit CFSme.
I see no reason why you couldn't inherit multiple application model/modules, such as CFGeneralLedger, CFAccountsReceivable, et. al. and bundle them as an overall CFAccounting project.
After some consideration, I decided to implement a "DefinedBy" lookup relationship between the distinguished modelling objects within a cluster.
So if someone references a model outside the cluster, the system tries to look up a URI for the external reference to retrieve a copy of the referenced model. This may or may not include secure protocols and some sort of sign-in authorization to do so (best handled by a wallet system over SSH or SHTTP.)
The model is copied into the local cluster, and then the copy's ids are used to resolve all future URI requests for elements of that model.
Note that this means each model you import is literally copied into your model, with appropriate "DefinedBy" references set for the inherited models (plural: multiple inheritance is a given when defining a code fabric, even if the implementation uses single-inheritance code manufacturing), allowing a site to pick and choose the models they're going to incorporate in their business system(s) as defined by their BAM (Business Application Model).
This way anything that can be defined can be imported.
I'll need to add some new verbs/phrases to the XML parser for 2.0, including:
<InheritSchema Name="CFLookup" Reference="net-sourceforge-MSSCodeFactory-CFLookup-2-0" />
[Atom, TableCol, Index, and Relation definitions; atoms and table columns must be optional]
When a schema specifies a DefiningURI, it is defining its own URI in string form (e.g. net.sourceforge.MSSCodeFactory.CFBam[.majorver[.minorver]])
Any new object defined by the schema which is not the result of importing/inheriting another schema reference is, by default, defined by the same URI as its containing schema.
That way each object in the model can resolve the appropriate license for the specific attributes of the final business application model. So you get text like "Incorporating code licensed under the blah license[, contract #....] from author" with a quotation of the license text to be included when incorporating such code (yet another attribute I need to add is some sort of a separation of license text and licensing contract acknowledgement text.)
It also means that as long as you publish a model to an http server, you can make it available for incorporation in other models. Or you can squirrel it away behind a login-required SHTTP website, or even an SFTP site. As long as the URI resolves to a newline-delimited model file, MSS Code Factory will be happy.
Part of deploying 2.0 will be publishing a new directory tree at http://msscodefactory.sourceforge.net/model, with the shared BSD and LGPL and GPL licensed models published there for reference. I'll probably also host some standard license definition files that can be incorporated using shorthand references for shared text and translations.
I've decided schemas must always specify a publication URI that is unique to the tenant. They *should* be unique globally, but I can't think of any sane way to enforce that. This way you can at least avoid name collisions with your import space manually.
Pruning the unnecessary database support code has reduced CFBam 2.0 from 5,599,979 lines of code to 2,455,395 lines of code. That's still a big application, but much more manageable.
A quick (and I mean quick!) remanufacturing with the pruned rule set (skipping the database support) only took a little over an hour instead of 26 hours. That's much more reasonable for the work I intend to do.
It took 26 hours for MSS Code Factory 1.11.6008 to manufacture this code, a total of 5,599,979 lines of code. By far and away the biggest project I've ever worked on. Had this been a first-cut manufacturing, that would be 215,383 lines per hour, or 3,589 lines per minute. Not bad for a Core i7 box. :)
However, I don't actually need the database I/O code for CFBam 2.0, so I've taken a snapshot in the branches as of 2013-02-12, and will be pruning the excess code shortly. Then I can start working on MSS Code Factory 2.0 itself, which will incorporate the manufactured code of CFCore 2.0 and CFBam 2.0 and extend it with business logic and custom code.
The Beta 5 test suite incorporates functioning SQL Server 2012 support.
See the 1.11.6000 notes about the state of the test suite.
The Beta 4 test suite runs successfully for DB/2 LUW 10.1, MySQL 5.5, Oracle 11gR2, PostgreSQL 9.1, and now Sybase ASE 15.7.
The Beta 3 test suite runs successfully for DB/2 LUW 10.1, MySQL 5.5, Oracle 11gR2, and PostgreSQL 9.x.
This build of CFDbTest 2.0 passes 27 of the 33 Oracle regression tests required for the next beta release.
I'll also be switching back to PreparedStatement buffering before issuing the beta release.
12 of 33 tests still failing.
The .bat files have been corrected to use the semicolon DOS/.BAT path separator character instead of the Linux/Unix colon character.
Oracle testing is being done under Windows with Cygwin providing a bash environment for the database creation scripts (just make sure Oracle sqlplus.exe is on the path), and .BAT files for running the executables.
The Oracle stored procedures aren't wired to the JDBC layer yet, but you can install them to an Oracle database without errors. This release is primarily provided as a preview of that code, but it also incorporates model changes/fixes that were discovered during the debugging of the Oracle stored procedures. (The bugs should have been found earlier, but different databases respond in different ways to naming conflicts and name duplication.)
This release of the CFDbTest 2.0 test suite incorporates the latest DB/2 LUW code and demonstrates the correct functionality of that code.
The DB/2 LUW stored procs now clean compile, though the JDBC code hasn't been updated to use them yet.
All regression tests for the PostgreSQL stored procedures and tuning have passed. This test suite is the verification for beta 1 of MSS Code Factory 1.11.
The communication of floats and doubles has been made consistent and now works. The problem with the deletes has been corrected.
And an annoyingly large number of those are due to float/double errors.
Roughly half the PostgreSQL tests run successfully using the stored procs.
Half of the failing tests are caused by one error -- some sort of data conversion problem for floats and doubles that results in basic values like 100.0 being rejected. Most likely I need to tweak some formatting in either the database marshalling or unmarshalling code.
The PostgreSQL stored procedures load cleanly during the database creation. There are no errors creating the PostgreSQL database.
All of the CFBam, CFDbTest, and CFSme projects compile clean with the code produced by MSS Code Factory 1.11.5215.
The table-dispensed id generators now use their PostgreSQL stored procedure implementations instead of inline SQL, a 3:1 reduction in the number of database I/O requests to be performed. Even the remaining one will be eliminated for objects which don't specify BLOB attributes in the base class table, as they will invoke the id generator stored proc directly during the create_dbtablename() processing.
This is the first cut of the conversion to using the PostgreSQL stored procedures instead of just pre-compiled SQL statements. It also includes a fresh snapshot of the latest MySQL 5.5 and DB/2 LUW 10.1 support.
The "lock" procedures were missing.
The "lock" procedures were missing. They look like the "read" procedures, except that they specify the "for update" clause.
This was a beefy manufacturing of 349,541 lines of stored procedures spread across 3,715 files. So I effectively wrote 350,000 lines of code in a week, or 50,000 lines per day. Top that!
The complete set of stored procedures for PostgreSQL has been manufactured and is ready to be creation-tested against the database. BLOB arguments are no longer passed by any stored procedures, so they should create ok.
The CFDbTest 2.0 database integration tests have been passed by DB/2 LUW 10.1.
The CFDbTest 2.0 database integration tests have been passed by MySql 5.5.
There are some cases where exceptions are being thrown due to date/time/timestamp range validations failing. I suspect this may be due to data being read from the database incorrectly, resulting in invalid data while the SAX Loader is trying to read and update records in the database.
I'll figure it out in due time. But the core create/insert/update code is working fine, as are the id generators. MSS Code Factory 1.11.5035 is almost ready for a FreeCode release notice.
The database creation scripts were running cleanly last night.
The JDBC support from PostgreSQL has been refreshed and updated to use MySql syntax, and is ready for integration test as it now clean compiles.
The MySql support has been manufactured. It doesn't work, of course, because it's only the initial migration of the PostgreSQL code.
The initial set of stored procedures for the PostgreSQL performance tuning have been completed with the addition and enabling of the "delete" procedures.
The read procs have been coded.
The "read" procs have been coded. All 210,000 lines worth. :)
The create and update stored procedures for PostgreSQL look correct to me. I haven't tried loading them into the database yet. I'm in mid coding binge and will test later.
This code should bring the create and update efficiency down to O(n) database network IOs regardless of the depth of the class hierarchy, provided that there are no BLOBs in the hierarchy.
The PostgreSQL stored procedures for "create" and id generation have been manufactured.
The DB/2 LUW code and PostgreSQL database creation scripts were updated by remanufacturing CFBam. The PostgreSQL stored procedures look good.
The stored procedure for creating a table is actually a multi-table insert from the base table on through to the specified subclass table.
The procs aren't ready for testing yet. I still need to rework the final return-select of the inserted object to pass back any changes imposed by the firing of database triggers.
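As a rough illustration of that multi-table create, here is a minimal Java sketch (class and method names are invented for illustration; the real work happens inside the manufactured stored procedures, not application code): one INSERT per table in the class hierarchy, base table first, all rows sharing the same primary key so the subclass rows join back to their base row. The real procedures also finish with a return-select of the inserted object.

```java
import java.util.ArrayList;
import java.util.List;

class HierarchyInsertSketch {
    // Emit one INSERT per table, base class first, all sharing the same Id.
    // The "..." stands in for the per-table column lists, which the rule
    // base fills in during manufacturing.
    public static List<String> insertsFor(List<String> tables, long id) {
        List<String> stmts = new ArrayList<>();
        for (String table : tables) {
            stmts.add("INSERT INTO " + table + " ( Id, ... ) VALUES ( " + id + ", ... )");
        }
        return stmts;
    }
}
```

The key property is that the number of statements grows with the depth of the hierarchy, but they all execute inside one stored procedure call, so the network round trips stay constant.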
The first couple of tests now run, though they've highlighted problems with DB/2's approach to VARCHAR types (a cent sign doesn't fit a single-character VARCHAR.) I'm sure I'll find a fix for it.
There is a more pressing problem with the storage of the test data records themselves, which I need to look into further. It's a work in progress.
The code for CFCore 2.0, CFDbTest 2.0, CFSme 2.0, and CFBam 2.0 has been refreshed with the latest "first cut" DB/2 LUW 10.1 support and with the most recent changes to the SME template.
There are some problems with bad relationships being detected by DB/2, but the core of the database creation scripts work now.
Since I worked on the timezone support, the meaning of Min/Max values changed, so the values in the tests had to change to match.
The tests run successfully, but this will be the last time this test suite is updated, because CFDbTest incorporates the tests and more. CFTypes was just a "holding" set of tests until I had time to work on the timezone code.
Apparently I hadn't been properly checking in the CFBam 2.0 code, because when I tried to extract it under Ubuntu it wasn't in the repository.
So here's all 8,027 files comprising 2,296,581 lines of code.
The current version of CFTypes 2.0.4860 is also checked in with this release, as are the other 2.0 projects.
The Business Application Model is still manufacturing. I'm checking in the rest with the latest PostgreSQL and Oracle code for now.
And with that, I'm ready to begin my foray into taming the killer rabbit of Oracle 11gR2 support...
Support for the timezone-aware data types has passed the PostgreSQL regression tests.
There seems to be some spots where I've overlooked the switch to full date-time values for all date/time/timestamp variants with PostgreSQL. I'll have to look into that and get it fixed before I can move on to the next step. It's somewhere in the reader code that the problem lies.
See 1.11.4781 release notes for details.
TZ types still need to have timezone adjustments applied when loading, and the persistence needs to be reworked. See 1.11.4768 for details.
It's been a while since I remanufactured all the projects, so that's been done so they're all using the latest code.
The same test suite is now passed by both the Ram and PostgreSQL persistence implementations.
Next I need to address the issues with Blobs and TZ Date/Time/Timestamp data types so the CFDbTest suite can be passed instead of just the basic data types in CFTypes.
Most of the insert and update tests pass as they should, but there are some inconsistencies between the range checking done at the database and in the front end code that have to be resolved.
Also there seems to be an issue with binding 16 bit integer values, because they're reporting that the value is a smallint but the binding is a varchar. Probably a minor typo in the rules. There are a lot of rules that can have typos. :)
The TZ and Blob types have problems, but you can now use PostgreSQL to persist models that include Bool, [U]Int[16/32/64], IdGen[16/32/64], UuidGen, Float, Double, Number, String, Text, Token, NmToken, NmTokens, Date, Time, Timestamp, and Uuid types for their attributes.
The PostgreSQL SAX Loader does successfully insert data, but a later update fails. There are also problems with the date/time/timestamp functions, as they were designed for dynamic SQL rather than binding raw values (so they have quotes in the strings). This won't take much time or effort to fix. By the end of the week the basics of PostgreSQL persistence should be tested and ready to rock 'n roll.
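A minimal sketch of that quoting mismatch (hypothetical helper names, not the actual marshalling code): a value formatted as a dynamic-SQL literal carries its own quotes, which become stray characters in the stored data if the same string is later bound through a PreparedStatement, where the driver handles quoting.

```java
class SqlLiteralSketch {
    // For dynamic SQL the timestamp must be a quoted literal embedded in
    // the statement text.
    public static String dynamicLiteral(String isoTimestamp) {
        return "'" + isoTimestamp + "'";
    }

    // For a PreparedStatement the raw text is passed unquoted; the JDBC
    // driver takes care of escaping. Reusing dynamicLiteral() output here
    // is the bug described above.
    public static String boundValue(String isoTimestamp) {
        return isoTimestamp;
    }
}
```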
All of the test cases created to date execute with the expected successful status or with the expected exceptions, validating the initialization and constraint checking code for the system.
There are two tests complaining about a null Id assignment. I'll need to debug those two cases more thoroughly. But in the meantime, this is the broadest set of test conditions the code has ever been exercised for.
As it turns out, there had been a significant amount of work to complete before those test cases could be passed properly.
CFDbTest clean compiles and is ready for another round of testing. Most of the tests now run properly again, though there are 3-4 exceptions (maybe more) that shouldn't be getting thrown. Still, it's a vast improvement since yesterday's crash-happy versions.
The range checks for numeric underflows are exercised by the new test cases for required attributes. Unfortunately, the [TZ]Date/Time/Timestamp attribute range checks can't be implemented properly until I implement the Format verbs in the engine, which won't be happening in the near future, possibly not until 2.0 is running. We'll see.
All of the code has been remanufactured to incorporate the last month's changes.
The model has been updated to provide 10-100 ranges for numerics, and 0001-12-31 to 2999-12-31 date ranges, as well as 12:00:00 to 23:00:00 ranges for times. This gives me more flexibility to implement tests to verify that range checking on values is correctly implemented, a key feature of the fast-fail architecture.
Rather than risk incompatibilities with the RHEL code base, I've reverted to Xerces 2.9.0 as shipped with Oracle Linux.
The non-validating version of the SAX Loader runs clean as expected. Someday I'll work on the validating parsers again, but in the meantime the non-validating parsers run several times as fast as the validating version, so I'd best stick with them; I'm more interested in the performance boost for XML-RPC type functionality than in automated validation. I re-validate everything in code anyhow.
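For reference, switching a JAXP SAX parser to non-validating mode is a one-line factory setting. This sketch (helper name invented) just counts elements to show the parser running with validation disabled:

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

class NonValidatingParseSketch {
    // Parse a document string without DTD/schema validation and count the
    // elements seen. Returns -1 on any parse or configuration failure.
    public static int countElements(String xml) {
        final int[] count = {0};
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            factory.setValidating(false);   // skip validation for speed
            factory.setNamespaceAware(true);
            factory.newSAXParser().parse(
                new InputSource(new StringReader(xml)),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String local,
                                             String qName, Attributes attrs) {
                        count[0]++;
                    }
                });
        } catch (Exception e) {
            return -1;
        }
        return count[0];
    }
}
```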
The Tenant Container reference to a Cluster is supposed to be required, not optional.
The ISOTimezone object is now properly referenced by a Contact.
The XSD specification has been tweaked slightly in hopes of correcting the errors being reported at runtime. Clearly there is some niggling difference between the relationship of the XSD, Xerces, and the parser for the MSS Code Factory implementation itself and the one being manufactured for the SAX Loaders. I just need to comb through the code and figure out where the difference is. I decided to start with the XSDs.
Unfortunately the tweaks to the XSD specification have made no difference in the runtime reporting of errors by the parser.
I am debating modifying the CFLib code to artificially produce a stack trace when logging the errors, just so I can see what the code stack looks like, seeing as I can't do so using Eclipse.
The test data should reference the Tenant as its Container.
The deployment package now includes the dbcreate scripts in the extracted distribution.
There is still that annoying command line runtime Xerces bug that is reporting attributes as being illegal, then immediately after demanding that the attribute be present. Meanwhile, the attribute in question is specified by the XML document.
No such messages appear when running under Eclipse, so I can't debug the problem very easily. In fact, I can't imagine how I can debug it at all.
There were some indentation problems and TODOs in the rule base which have been cleaned up. The XML parsers should now be able to deal with Base64-encoded BLOBs.
The named lookup support has been repaired and tested for at least one case. Provided the definitions in the model are sane, it should work for all named lookups.
Currently the test is failing, but at least the named lookup code is now being exercised by the latest test, 0004-TestNamedLookup.xml.
There are still bizarre error messages from the Xerces parser when running under the command line, where the parser complains that an attribute is not allowed and then immediately complains that the same attribute is required. No such errors occur when running under Eclipse, and the Eclipse Indigo jars are now used throughout the runtime variants, so I'm completely baffled by this problem.
The command line version runs properly now. There was a bug in the .xsd, but for some reason that bug was throwing an exception under the command line but not in Eclipse. Go figure.
I hate hacks. So I implemented a proper interjection class to join between the document and the top level elements of the schema, and named it "SaxDoc", appropriately enough.
The parser still doesn't run in command-line mode, only under Eclipse. An initialization race condition, perhaps? Something that isn't going to occur in a reliable order from one JVM to the next?
Properly implemented, there should be another level of parser code generated, but for now I just recursively define the Schema node as being the root document itself. I don't like it, but I need to get on with testing the databases.
You can run CFDbTestRunAllTests under Eclipse now, but there seems to be some sort of packaging error or something with the command line runtime. I need to look into it further; the packages in the installer are all up to date so things *should* work.
The Small-Medium-Enterprise template has been updated to ensure that all the security information is owned/contained by the Cluster.
The latest code has been used to refresh the CFDbTest 2.0 installer.
All of the 2.0 projects (CFCore20, CFDbTest20, CFSme20, and CFBam20) have been remanufactured and checked in with the DB/2 UDB 10.1 prototype/skeleton code.
There were some files going to the wrong directories for the manufactured DB/2 schema scripts. This has been corrected.
An initial migration of the PostgreSQL JDBC implementation as the base of IBM DB/2 UDB 10.1 support has been completed. Once again I need to check and update the date/time/timestamp to/from string conversion code to map in UDB syntax instead of PostgreSQL syntax. Unlike Oracle, I'm quite certain there are differences.
In other words, the code will not work, though it does clean compile.
The Oracle framework with an implementation of an Oracle-specific CFDbTestSaxOracleLoader20 execution script is complete.
Oracle will allow for a full implementation of TZDate/TZTime/TZTimestamp, but that hasn't been done yet. Conveniently enough, PostgreSQL date-time-timestamp to/from string conversions use the same syntax as Oracle. Oracle just has an additional TO_TIMESTAMP_TZ function that will let me provide the extra timezone support.
The launcher scripts are now "clean."
Next up: I need to create an Oracle version of the loader with the manufactured code base that I added yesterday.
I've completed the initial deployment package skeleton for CFDbTest20. It installs in a similar fashion to MSSCodeFactory itself. Just extract the .zip and add the enclosed bin directory to your path on *nix. The rest should be automagic.
With the remanufacturing of CFBam20, the Oracle refresh is now complete.
The SME 2.0 implementation has been refreshed with the latest Oracle starting-point snapshot.
The Oracle implementation has been refreshed based on the latest PostgreSQL implementation, providing support for prepared statements wherever feasible.
I do not believe this code will actually work because Oracle uses a different syntax for doing date-time-timestamp to/from string conversions.
The exceptions thrown during the SAX Loader parser processing are now wrapped with location information and rethrown, so that the user has some idea where the source of the problem is located.
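The wrapping pattern looks roughly like this sketch (names invented; the manufactured parsers use their own exception types): take the document position from the SAX Locator and fold it into the rethrown message.

```java
import org.xml.sax.Locator;

class LocationWrapSketch {
    // Wrap a parse-time failure with the document position reported by the
    // SAX Locator, so the user can find the offending element in the file.
    public static RuntimeException wrap(Locator loc, RuntimeException cause) {
        String where = (loc == null)
            ? "(unknown location)"
            : "line " + loc.getLineNumber() + ", column " + loc.getColumnNumber();
        return new RuntimeException(
            "Error near " + where + ": " + cause.getMessage(), cause);
    }
}
```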
The changes have been applied to S1DbTest20, which clean compiles.
The CFCore support in cfbam20 has been checked in, as has the code for cfbampgsql20 and the SAX Loader implementations.
The CFCore20, CFDbTest20, and CFSme20 code has all been checked in.
This is a partial checkin of the CFBam20 code. It will take several more hours for the manufacturing run to complete, so I may as well get what's been finished into the repository.
The cfbam20, cfbamram20, and cfbamoracle20 sub-projects are checked in with this release.
Initial release of Singularity One Small-Medium-Enterprise application template under the BSD 3-Clause License so that it can be copied and extended by any other application by simply providing a copy of the specified license.
I'll need to add references to the owning Company as the license grantor for a BSD license, and specification of the current year as the license-granting date as well. I'll probably need to add a couple of GEL bindings to accomplish this.
I may need to add a Licensor object reference to the BAM Schema so that you can specify license references as well as specifications. On the other hand, the license headers are freeform text so it's merely a matter of including appropriate boilerplate when specifying a model. Perhaps the simplest approach is the best approach in this case.
That would mean adding a license recognition clause to the S1Core, S1DbTest20, and S1Bam20 models manually. It should be easy enough to do in practice, but I think I'll wait until I shift over to the new Oracle stack partition on my box.
The message log wrapper has been moved to the LGPL CFLib package so that it can be used by non-GPL SAX Loader parsers manufactured by 1.11. That doesn't really affect S1DbTest20 nor S1Bam20, but it had to be done for the sake of the general public.
The PostgreSQL implementation now uses PreparedStatements everywhere that it is feasible to do so. Only the queries whose keys include optional/nullable columns, and the cursor APIs, still use dynamic SQL.
Implementations of readDerivedBySuffix() now use PreparedStatements if the index is comprised of mandatory columns, otherwise they use dynamic SQL.
Added the production of the readClassCode PreparedStatements to the PostgreSQL Table IOs.
Implemented the PreparedStatements for readAllDerived() and readDerived().
The readBuff(), lockBuff(), create(), update(), and delete() methods all use PreparedStatements now. These accessors all are keyed by the primary index, which never has optional columns, so the statements can always be prepared and then have runtime values bound.
The update() implementation now uses PreparedStatements.
The lockBuff() and readBuff() implementations now use PreparedStatements. Any statement which is keyed by the primary key can use a prepared statement, because the attributes of a primary key cannot be nullable.
The PostgreSQL implementations of the create() methods now use PreparedStatements for S1DbTest20 and S1Bam20.
BigInteger has been switched over to BigDecimal for the sake of JDBC ease-of-use over the "natural" interpretation of a UInt64 as a BigInteger.
Added releasePreparedStatements() to PostgreSQL Table objects. The schema will invoke these methods when it releases a connection.
The PreparedStatement attributes of a table are now emitted for S1DbTest20 and S1Bam20. Queries which have nullable attributes cannot be done as prepared statements, as you need to be able to switch between "IS NULL" and "= value" syntax on a per-query basis.
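The reason such queries resist preparation is that the SQL text itself depends on the key values. A minimal sketch of the WHERE-clause construction (hypothetical helper, not the manufactured code):

```java
import java.util.Map;
import java.util.StringJoiner;

class NullableKeyWhereSketch {
    // Build the WHERE clause for an index whose columns may be null.
    // A null key value becomes "col IS NULL"; a non-null value becomes
    // "col = ?" with the value bound later. Because the statement text
    // changes with the key, it cannot be prepared once and reused.
    public static String whereClause(Map<String, Object> keyCols) {
        StringJoiner where = new StringJoiner(" AND ");
        for (Map.Entry<String, Object> e : keyCols.entrySet()) {
            where.add(e.getValue() == null
                ? e.getKey() + " IS NULL"
                : e.getKey() + " = ?");
        }
        return where.toString();
    }
}
```

An index made entirely of mandatory columns never takes the IS NULL branch, which is why those queries can be prepared up front.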
The loading of an enumerated type lookup requires that the id of the enum be specified in the loaded data. This is now supported, so the loader can be used to populate SME data like the Timezone enum table.
There were still some file headers that had the MSS Code Factory version number in them. These oversights have been corrected.
The XSD was using the $Name$ of the named lookup relations instead of the $Suffix$ as it should in order to match the parser code.
The CommitAll script has been updated to manage the new SAX RAM and PostgreSQL Loader CLI programs, and the new programs have been added to the repository.
The file headers no longer include the MSS Code Factory version number, so the subversion update churn should cease.
The CLI for the S1DbTest20 PostgreSQL SAX Loader has been coded and clean compiles, but has not been run yet.
The S1DbTest20 RAM Loader package has been renamed to SchemaSaxRamLoaderCLI. It, too, clean compiles but has not been run yet.
The loaders for S1Bam20 clean compile, but there are problems with the SAX parser itself that prevent it from compiling. At this point, the BAM is just too complex to completely automate. But it's a good start towards full automation.
The core code for evaluating the loader configuration options is in S1DbTestSaxLoaderCLI, and a sample per-database specialization is in S1DbTestSaxRamLoaderCLI.
All the pieces are in place to spin Pg8 and Oracle variations on the loader (SchemaSaxPgsqlLoader and SchemaSaxOracleLoader.)
Note that the RamLoader is in its own project so that Eclipse can partially seal the jar.
The CLI skeleton is complete and should properly instantiate a RAM schema to load the data into.
The RAM implementation can't be used to run tests that require the presence of data loaded by a previous test.
Next I need to split out the RAM details into a separate project so Eclipse can partially seal the jar containing the mainline. When I do that, the main() will move out of the CLI skeleton to the RAM main, which will then be copy-pasted for the Pg8, Oracle, etc. mains.
The CLI now includes the code for parsing the first argument to the program, the loader options.
The second argument is expected to be the file name, and any remaining arguments will be parsed by the database-specific command line interfaces as a means of specifying database connection arguments. There are, of course, no connection arguments for the default Ram database version used by the SchemaSaxLoaderCLI itself.
The CLIs for S1DbTest20 and S1Bam20 provide the command line instantiation of a parser with a backing store interface. The default CLI uses Ram storage, and then there will be CLI variants for each of the databases.
The goal is to have a common set of driver data that gets loaded by the CLIs for the different databases using a common scripting architecture. I believe it's possible for the high level script to use some sort of command line tag to switch between the different runtime CLIs on the path so that one set of driver scripts can exercise multiple databases consistently.
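The argument convention described above could be modelled by a small holder class along these lines (a sketch with invented names, not the actual CLI code):

```java
import java.util.Arrays;

class LoaderArgsSketch {
    // args[0] = loader options, args[1] = file to load, and anything left
    // over is handed to the database-specific CLI as connection arguments.
    final String options;
    final String fileName;
    final String[] dbArgs;

    LoaderArgsSketch(String[] args) {
        if (args.length < 2) {
            throw new IllegalArgumentException("usage: <options> <file> [db-args...]");
        }
        this.options = args[0];
        this.fileName = args[1];
        this.dbArgs = Arrays.copyOfRange(args, 2, args.length);
    }
}
```

Because only the tail of the argument list differs, one set of driver scripts can target the Ram, PostgreSQL, or Oracle CLI just by switching the executable name and appending the right connection arguments.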
Oops. Kind of useless without constructors, isn't it?
The structured XSDs for S1DbTest20 and S1Bam20 are produced to match the SAX Loader parser support. The one for S1Bam20 is invalid because its model is too complex for the current rule base and engine, but it still provides a valuable template code base from which to build a hand-tweaked parser that can deal with a BAM.
There are a couple of niggling naming conventions that I want to change, in particular switching over to using the Suffix of a relation instead of its Name for the lookup name attributes. Some of the names are uglier than their suffixes in the model.
The S1DbTest XSD now gets manufactured cleanly as I've added support for BLOB/base64Binary types as well as verified the Enums.
The attributes of the buffers need to be reworked to incorporate the named lookup attributes supported by the parser, and to remove the Id attributes of those named lookups, especially if they are required attributes.
The BuffType is going to get renamed ObjType as well, because we're dealing with structured objects in this XSD, not RPC buffers and index keys.
The AlternateIndex support for 1.11.4088 has been applied to the manufactured models, and S1DbTest 2.0 clean compiles.
The rules have been enhanced to produce AlternateIndex support when a LookupIndex isn't specified for a modelled object table.
Only object tables which specify neither a LookupIndex nor an AlternateIndex are blindly inserted by the SAX Loaders now. Although the loaders produced are not capable of dealing with something as complex as the BAM, they are suitable for the data persistence test framework implementation and for priming and updating databases that don't use self-referencing hierarchical data structures (dot-name trees.)
Next I need to actually write the main for the loader, then implement and exercise the database tests against PostgreSQL (which will take time, as I've no doubt there are errors in the code). I already know I need to alternate between fully dynamic and parameterized SQL statements for BLOBs and TEXT to allow their full data size to be handled, and I should use parameterized SQL statements whenever there is more than 8K of text data to be bound to the statement.
Remanufactured the models with 1.11.4064.
It's time for me to pause and read through the code being produced to see if it makes sense now that I've run out of little ideas that needed to be fleshed out for the code to be "complete" for the SAX Loader.
Note that the loaders produced can only deal with structured hierarchies of data, not recursively named objects or dot-naming hierarchies. Yet. But I do know that is doable, it's just not needed right now so it's being put on the back burner.
Corrected errors in determining whether a relation participates in a chain or not. The root cause was an error in the definition of the 1.11 BAM model which mistakenly specified that the Prev/Next relations of a chain narrowed the ScopeDef of the Chain.
There are now fewer than 70 errors remaining in S1Bam 2.0, and S1Core and S1DbTest both clean compile. That's not to say their code is correct, but at least they compile.
S1DbTest 2.0 clean compiles, so the model corrections for the SME template have been propagated to all the models and they have been regenerated with MSS Code Factory 1.11.4035.
The GEL verb SatisfyWidestLookupColumn is invoked while iterating the index columns of a targeted lookup. The verb does a hidden "poptop Table" to find the table to be probed for matching columns.
The shallowest definition of a required Owner, Container, Master, Parent, or Lookup relation whose DataDef matches the indexed column is probed first. If not found, then the probes are repeated for optional columns.
It is possible for the reference to fail to resolve if the model isn't "clean".
Currently it's failing for a number of the SME template tags and for the SecGroupForm of the SME template. Still, for a first cut at solving the problem it addresses a surprisingly wide set of cases.
Remanufactured all 2.0 models using MSS Code Factory 1.11.4000.
The UName index for the AnyObj and GuiObj specifications has been corrected to properly include the required TenantId, the optional ScopeTenantId, optional ScopeId, and required Name.
The Scope of an AnyObj is now treated as an XsdContainer relation.
The naming of the methods for the Owner and Container reference setters include the RelationType in the name.
The editBuff is an instance of the object for the table parser. It gets populated with the parsed attributes and resolved references in a long-winded route to being able to query the database by the named lookup index of the object during the load/merge processing performed by the loader.
In the event that the object doesn't exist, we have a ready-made instance for creating.
In the event that the loader is to replace an object, the existing instance can be deleted, then the editBuff instance is Created as its replacement.
If the loader is to update an existing object, the editBuff is applied to an edition of the existing object using the copy() method (which may need to be written -- I forget whether it exists or not.)
Only for insert-only objects is the construction of the editBuff a "waste of time."
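The load/merge decision tree above can be sketched against an in-memory store (all names invented; the real loader queries the database by the named lookup index of the object):

```java
import java.util.HashMap;
import java.util.Map;

class LoadMergeSketch {
    enum Mode { INSERT, UPDATE, REPLACE }

    // In-memory stand-in for the backing store, keyed by lookup name.
    private final Map<String, String> store = new HashMap<>();

    // editBuff carries the parsed attributes and resolved references; how
    // it lands depends on the loader mode and on whether the named lookup
    // already resolves to an existing object.
    void merge(Mode mode, String lookupName, String editBuff) {
        String existing = store.get(lookupName);
        if (existing == null) {
            store.put(lookupName, editBuff);   // not found: create it
        } else if (mode == Mode.REPLACE) {
            store.remove(lookupName);          // delete the existing object,
            store.put(lookupName, editBuff);   // then create the replacement
        } else if (mode == Mode.UPDATE) {
            store.put(lookupName, editBuff);   // apply buff to the edition
        }                                      // INSERT: leave existing as-is
    }

    String get(String lookupName) { return store.get(lookupName); }
}
```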
The Container and Owner relationships are inferred from the scoping object of the parse and applied to the editBuff.
There were some missing pieces to the implementation of the HasContainer, HasOwner, ContainerRelation, and OwnerRelation GEL verbs.
I also made use of those verbs in the SAX Loader Parser rules to sketch out the initial Container and Owner resolution.
Applied NullValue specifications of 0 for all ids and enums in the S1Bam, S1Core, and S1DbTest models including the SME template, and propagated the template changes to all the models.
The NullValues have not been specified for the core schemas of the S1DbTest and S1Core models, however.
The Chain specifications were using optional prev/next relation ids, but mandatory relationships. This inconsistency was uncovered while working with the Chain objects for 1.11.
S1Bam20 has been test compiled, and with the changes to the way singleton relationship candidates are selected, the code is now down to roughly 75 errors.
There are now under 200 errors for the S1Bam20 SAX Loader. However, most of the remaining errors won't go away until I implement chains properly.
The manufactured S1DbTest20 code clean compiles.
There are problems with the S1Bam20 code, however, due to failing to do a "popto TableDef" before emitting some variable references in the rules. That will be corrected later.
S1Core20 should be ok now, too.
S1Core20 and S1Bam20 have been remanufactured with 1.11.3914.
They have not been test-compiled.
The line counts are now:
S1Core 296,655 lines
S1DbTest 322,893 lines
S1Bam 1,992,702 lines
TOTAL 2,612,250 lines of manufactured code for 2.0
One error remains in the current outline of the XML SAX Loader implementation, and that seems to be due to a modelling error rather than a ruleset error.
The XML SAX Loader now converts the Date, Time, Timestamp, TZDate, TZTime, and TZTimestamp attributes using the CFLibXmlUtil services.
The InvalidArgument exception is used to wrap the runtime exceptions thrown by attempting to parse an invalid Date, Time, Timestamp, TZDate, TZTime, or TZTimestamp when converting an XML format string.
The InvalidArgument exception is normally used to wrap and rethrow a formatting or value checking exception thrown by a subfunction, adding in the detailed information about the parameter number, name, and value to the general exception that was caught.
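The wrap-and-rethrow pattern described above might look something like the following sketch. The class and method names here are assumptions for illustration; the real CFLib exception types and CFLibXmlUtil signatures may differ.

```java
import java.time.LocalDate;
import java.time.format.DateTimeParseException;

// Stand-in for the InvalidArgument exception described above.
class CFLibInvalidArgumentException extends RuntimeException {
    CFLibInvalidArgumentException(String msg, Throwable cause) { super(msg, cause); }
}

class XmlDateParser {
    /** Parses an XML-format date, wrapping any parse failure with
     *  the parameter number, name, and offending value. */
    static LocalDate parseDate(int argNo, String argName, String value) {
        try {
            return LocalDate.parse(value); // expects ISO-8601, e.g. 2013-01-31
        } catch (DateTimeParseException ex) {
            // Rethrow with the detailed argument information attached.
            throw new CFLibInvalidArgumentException(
                "Argument " + argNo + " (" + argName
                    + ") has invalid Date value \"" + value + "\"",
                ex);
        }
    }
}
```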
The attribute assignments no longer throw type mismatch exceptions on compile, but they still throw uninitialized variable errors. There also seems to be a problem with the definition of group includes in the SME portion of the model.
The SAX Loader Parser now declares a clean set of relation variables and applies them to the created or updated objects as it should. The data attributes which are not hidden by relationships are also applied.
I really should prune the hidden and primary key attributes from the XML object specifications and eliminate them from the parsed attributes.
The SAX Loader parser now applies the declared references to the edit objects.
Attribute optionality no longer evaluates primary index attributes of the base table, as those attributes are not normally present in a structured document.
The Oracle JDBC rule base has been restored and refreshed from the PostgreSQL version, and the entire code base has been remanufactured under Ubuntu 12.04.
All four projects (S1DbTest, S1Core, S1Bam, and S1Sme) have been remanufactured with 1.11.3858. S1DbTest clean compiles, and the changes to the SME template have been propagated from there to the other models, so they should all be in pretty good shape at this time.
I extended the test coverage of the S1DbTest model by using both schema-level and concatenated id forms of primary ids for the various tables of the test model. UuidGenDefs are only valid at the schema level, and never specify a Dispenser.
Note that the first cut of Owner relation support is implemented by this iteration of the model.
While working on the SAX Loader, I realized I need to distinguish between the Container relationship of an object and its Owner relationship, because the Owner (usually a Tenant) is not necessarily the same as the Container.
I've added support for Owner definitions to the system and remanufactured S1DbTest to verify that the change didn't break anything -- yet.
I'll be using the Owner relationships in the models shortly, then all hell will break loose with the rules because they aren't written to support the Owner relationships yet.
There was some obsolete code in the Subversion hierarchy that is no longer manufactured. The rules aren't lost, they're just disabled pending a rethink of exactly what I want to shift to stored procedures.
I also want to add a heuristic such that if the total value string length of an inserted or updated object is over 8000 bytes (arbitrarily), then the code should implement compiled statements with argument binding instead of encoding the operation as a dynamic SQL statement.
Furthermore, similar compiled statements can be used to implement any read whose key attributes are all required as compiled statements as well.
Any statements which pass optional attributes have to be able to dynamically build their where clause. If there prove to be actual limits on SQL command length, then those statements can be implemented as single-use precompiled statements that are immediately forgotten by the JDBC implementation layer.
My one concern is that there may be server-side resources associated with a compiled statement for some implementations.
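The proposed length heuristic is simple enough to sketch directly. The 8000-byte threshold is the arbitrary figure suggested above; the class and method names are hypothetical.

```java
// Minimal sketch of the proposed heuristic: if the total value string
// length of an inserted or updated object exceeds the (arbitrary) limit,
// use a compiled statement with bound arguments instead of dynamic SQL.
class StatementStrategy {
    static final int DYNAMIC_SQL_LIMIT = 8000;

    /** True when the row's values are big enough to warrant a PreparedStatement. */
    static boolean usePreparedStatement(String[] columnValues) {
        int total = 0;
        for (String v : columnValues) {
            if (v != null) {
                total += v.length();
            }
        }
        return total > DYNAMIC_SQL_LIMIT;
    }
}
```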
The Table relationships didn't incorporate the TenantId, which resulted in compile-time errors.
The Relation didn't properly reference the RelationType, which is a global and not owned by the Tenant.
One of the references to a TypeSpec neglected to incorporate the TenantId.
The S1Bam 2.0 Tenant now owns its component data properly, with concatenated-key identifiers that combine the TenantId with the object id allocated within the scope of the tenant. The object ids only have meaning within the scope of a tenant.
The extra optional TenantId columns have been added to implement the optional relationships, and the indexes have been updated to include the appropriate TenantId in their evaluation.
The model relationships have been updated to specify the additional TenantId composite key attributes that replaced the previous Id64Gen-based ids.
Model consistency errors are likely to crop up once I'm ready to start running and debugging the manufactured code.
The resulting code has not even been test compiled yet.
The S1Bam 2.0 model relationships have been updated to specify the composite keys that replaced the previous index definitions.
Model consistency errors are likely to crop up once I'm ready to start running and debugging the manufactured code.
I think I'm done adding the additional TenantId members for the optional relationships. I've gone through all the major object hierarchies.
Next up is the thorny and painstaking process of updating the relationship definitions.
The code has grown a bit as a result of the composite keys. It now weighs in for a grand total of 1,949,091 lines of code.
I'm hoping to break the 2 million mark by Monday. :D :D :D
There were some unused id generators defined. Those have been removed because I decided to stick with the AnyObj hierarchy instead. Why mess with something that works?
In order to check files in and out of the project now, you need to check the project out in pieces, as the total line count of the system blows the limits that SourceForge's Subversion servers can handle.
I've checked out the following as sub-extractions:
The S1Bam 2.0 model has been partially updated to incorporate concatenated keys where the TenantId always has to be carried down to the sub-objects. The Ids of the sub-objects are only valid within the scope of a Tenant, and are allocated at the Tenant level.
None of the other database naming conventions include version-specific naming, so I eliminated that oversight for PostgreSQL.
Remanufactured by 1.11.3748 to reflect updates to LookupIndex usage.
The 2.0 models have been updated to use the LookupIndex instead of the LookupColumn.
The Tenant needs to Contain the test data, not parent it. Container relationships are special -- they don't get emitted as attributes, and only one Container relationship for an object is allowed.
The attribute values of the element are now saved as named attributes of the current parse context so that they can be referenced by the endElement() implementation if necessary.
Code has been added to verify that required attributes have values, including required named singleton relations.
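The parse-context idea described above, startElement() stashing attribute values so endElement() can still see them, plus the required-attribute check, can be sketched like this. `ParseContext` and its methods are illustrative names, not the manufactured API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the SAX parse context described above.
class ParseContext {
    private final Map<String, String> attrs = new HashMap<>();

    /** Called from startElement() to stash each attribute value. */
    void saveAttribute(String name, String value) {
        attrs.put(name, value);
    }

    /** Available to endElement() later in the parse. */
    String getAttribute(String name) {
        return attrs.get(name);
    }

    /** Verify that every required attribute was supplied a non-empty value. */
    void checkRequired(String... names) {
        for (String name : names) {
            String v = attrs.get(name);
            if (v == null || v.isEmpty()) {
                throw new IllegalStateException("Required attribute missing: " + name);
            }
        }
    }
}
```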
S1Core and S1Bam have been remanufactured with 1.11.3707.
Refactored the S1BamXml package to S1BamSaxLoader. Just because the cartridge is named XML doesn't mean I need to include XML explicitly in the package name, and with the number of table objects being produced, each SAX parser should be in a separate package.
Moved the S1DbTestXml package to S1DbTestSaxLoader. Just because the cartridge is named XML doesn't mean I need to include XML explicitly in the package name, and with the number of table objects being produced, each SAX parser should be in a separate package.
The Oracle and BLOracle packages have been refactored to the s1dbtestoracle20 project. Once I have S1DbTest20 exercising the PostgreSQL database persistence via the SaxLoader (TBD, needs refactoring and a main executable), I'll overwrite the Oracle implementation with the PostgreSQL implementation again. Then I can start working on Oracle in parallel with PostgreSQL, testing and enhancing both until I reach a beta-ready state of testing.
There were a few modelling errors to be corrected (duplicate attributes named "Name" in the GuiObj, Method, and Statement hierarchies.) S1Bam 2.0 now produces clean-compiling code.
The S1Sme20 (Small-Medium-Enterprise) template model has been extracted from the S1Bam20 model and propagated to the S1DbTest20 and S1Core20 models.
All of the models now specify some reasonable first-cut values for LoaderBehaviour.
The binding IsColumnInContainerOrNamedLookupRelation evaluates the complex conditions used to determine if an attribute is "hidden" in a structured XML document. If the attribute participates in the from index of a relation which references a unique index and the to table of the relation has a LookupColumnName, then the relationship is a candidate to be evaluated. This implicitly includes container relationships, though the single explicit container relationship is always included even if it doesn't have a LookupColumnName.
Once the set of candidate relationships is built, their columns are checked to see if they reference the column being considered.
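The candidate-selection logic above can be roughed out as follows. The `Relation` record and its fields are toy stand-ins for the Factory's relation metadata, used purely to show the shape of the check.

```java
import java.util.List;

// Toy stand-in for the model's relation metadata, for illustration only.
record Relation(boolean toIndexIsUnique, String toLookupColumnName,
                boolean isContainer, List<String> fromColumns) {}

class HiddenColumnCheck {
    /**
     * A column is "hidden" in a structured document when some candidate
     * relation (the container relation, or a relation targeting a unique
     * index on a table with a LookupColumnName) includes the column among
     * its from-columns.
     */
    static boolean isHidden(String columnName, List<Relation> relations) {
        for (Relation r : relations) {
            boolean candidate = r.isContainer()
                || (r.toIndexIsUnique() && r.toLookupColumnName() != null);
            if (candidate && r.fromColumns().contains(columnName)) {
                return true;
            }
        }
        return false;
    }
}
```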
The manufactured code has been split up into the sub-projects s1bam20, s1bampg820, s1bamram20, and s1bamxml20. This makes it easy to prepare deliverable .jar files with Eclipse.
There are 9,467 source code files in S1Bam20. If you were to write an astronomical 20 files per day, that would still be 473 days worth of work. If you take a base billing rate of $5/hour (less than half the Saskatchewan minimum wage), or $40/day, that works out to a $18,920 manufacturing fee for the code, and a $1,892 annual maintenance fee to keep applying bug fixes and updates.
There are 1,910,160 lines of manufactured source code. If you were to presume someone could write 4,000 lines of code per day (double the known maximum of 2,000 lines per day for an average programmer), it would take 477 days to write the code, for a fee of $19,080 and a $1,908 annual maintenance fee.
The manufactured code has been split up into the sub-projects s1core20 and s1coreram20. This makes it easy to prepare deliverable .jar files with Eclipse.
The manufactured code has been split up into the sub-projects s1dbtest20, s1dbtestpg820, s1dbtestram20, and s1dbtestxml20. This makes it easy to prepare deliverable .jar files with Eclipse.
All sub-projects other than s1dbtest20 depend on the s1dbtest20 project. Only the s1dbtestpg820 project includes the references to the PostgreSQL JDBC implementation library.
Fleshed out the parsing/extraction of the element attributes in the table element parsers, and fleshed out the specification for the SaxRoot element parser also known as the document element.
The manufactured SAX parsers now clean-compile and show the structure of the targeted parser code, but they are far from complete.
The code for S1Core, S1DbTest, and S1Bam has been remanufactured by MSS Code Factory 1.11.3632 (Alpha 4). It now clean compiles except for the new SAX parser code that's being worked on.
The 1.10 debugging fixes have been brought forward, so this version should produce clean-compiling code that can be successfully persisted to a PostgreSQL storage server.
Remanufactured with 1.11.3623, correcting a major defect in the way singleton relationships were being edited. The 1.11 database persistence tests have been passed for create, read, update, and deletes.
Remanufactured with 1.11.3603 Alpha3.
All three of S1Core20, S1DbTest20, and S1Bam20 now clean compile except for the work-in-progress SAX parser code. The UUID generators have been implemented and wired.
There are 2,465,700 lines of code in those three projects, manufactured in under 7 minutes for an overall speed of 5,870 lines per second.
S1Core20, S1DbTest20, and S1Bam20 have been remanufactured with MSS Code Factory 1.11.3592.
The Xml parser support is a work in progress and does not even approach clean compiling at this point in time.
The UuidGenDef id generators need to be implemented for the Java PostgreSQL, Oracle, and Ram persistence layers. For now, this only affects S1Bam20. Once the generators have been implemented, the new SME template will be propagated from S1Bam20 to S1Core20 and S1DbTest20.
The S1Bam20 model did not properly specify a Tag relationship, using an old style key instead of the new composite keys.
Remanufactured by 1.11.3573.
Instead of returning the internal SortedMap that is maintained by the object cache, the APIs now return a copy of the SortedMap to prevent concurrent modification exceptions while iterating through the results to update the database.
Reworked Business Application Model template section significantly. This still won't quite work because all of the AnyObj derivatives still need to be modified to use a component key referencing the TenantId as well as their existing id components.
The Cluster and Tenant are now identified by UUIDs, and the sub-objects they own are identified by concatenated keys now. This should provide for pretty much unlimited data space within a cluster, subject to the technological limitations of the database engine rather than the design of the schema.
That's 109 new files today.
There are now 1,851,479 lines of manufactured code in S1Bam20. That's about 62 reams of paper to print it out, or almost three CASES of paper!
Refactored URLProtocolIdGen to URLProtocolEnum.
Shifted the services registration. Note that we are not necessarily talking about _network_ services.
Made the ClusterIdGen and TenantIdGen types UUID generators instead of 64-bit id generators.
All Distributed data has to be keyed either by an Enum or a UUID, otherwise you can't deal with resolving multiple update sources. Not that I'll be doing that any time soon, but it's a consideration for the future.
I've added some documentation comments to the Template section of the S1Bam20 model as well. This will eventually form the 2.0 SME model that every application is to use as its starting framework so that a common security and data distribution model can be implemented.
I also seem to have blown the SourceForge project size limits again... :D
ISOCurrencyIdGen is now ISOCurrencyEnum, and ISOLanguageIdGen is now ISOLanguageEnum. The tags for those two have only been sparsely populated for now as place holders. If there is no such definition, I'll use whatever configuration files I can find on a Debian-based Linux distro (i.e. Ubuntu) or on the Toronto Stock Exchange.
The ISOTimezoneEnum values are the ISO numbers, with compacted camel-case English names for the Enum tags.
Added some first-cut access and data scope specifications to the S1Bam20 model, and converted the ISOTimezoneIdGen to an ISOTimezoneEnum.
The 2.0 Business Application Model has been updated to add the GroupOperatorEnum, and RelGroupValue components of the Relation definitions. The code has all been remanufactured by 1.11.3527.
The GroupOperator specifies operations available over a group of objects, normally indicated by a relationship targeting a duplicate index.
The type of the referenced column affects which group operators can actually be used for the referenced columns, and straying from that supported set may result in code manufacturing errors.
The RelGroupValue is an optional detail of a relationship which targets a duplicate index. There are no RelGroupValue instances expected for relationships which target a unique index, though I suppose it would be possible to manufacture degenerate cases to allow RelGroupValues to be specified for uniquely indexed targets as well.
The UAltIdx specifies a unique multi-part key comprised of the RelationId, GroupOperatorId, and ToColId, effectively forming a sparse binary matrix of the operations to be supported and cached by the manufactured code (the presence of an instance implies a true state on the appropriate matrix cell entry.)
Wired in the Oracle 11gR2 schema creation script support.
The first cut of Java-Oracle integration code is just a quick copy-paste-edit of the PostgreSQL 8.4 code, substituting Oracle for Pg8. Still, it's a step. And a pretty big one at that, as S1Bam20 is now 1,810,349 lines of code. S1DbTest20 has grown as well.
Refreshed the manufacturing of S1Bam20, adding the new outline for an XML SAX Parser based on the manufactured XSDs. There will be a substantial amount of work before that's functional, but it's going to be needed to implement database initialization loaders.
All of the S1 code has been remanufactured -- S1Core20, S1Bam20, and S1DbTest20.
The S1DbTest20 PostgreSQL database now instantiates cleanly.
The S1DbTest20 model now includes both Req and Opt variants on all the combinations of full-range, min-constrained, max-constrained, and min-max constrained values. This should provide a complete exercising of all the combinations of attributes supported by the manufactured code.
Remanufactured with 1.11.3486.
Corrected some typos and errors in the 1.11 rules that were caught by the full data type exercising performed by S1DbTest20. The manufactured code now clean-compiles.
However, I've decided I'll also be adding an "Opt" variant of the current required columns, just to ensure that the full range of expected variants is exercised. I'm half way to complete rule/code coverage, so why stop part way to the finish line?
This is the initial checkin of the S1DbTest 2.0 code.
It has not even been test-compiled yet.
The S1Bam20 model has been enhanced to specify ManufactureToolSet children of the Project, MajorVersion, and MinorVersion objects.
There should be no significant changes, I just wanted to flag the fact that this was produced by a version that had passed the first of the database tests.
Both S1Core and S1Bam were manufactured using 1.11.3445.
MSSBam20 now clean compiles, thanks to the rule corrections incorporated in 1.10 and 1.11.
The S1Core20 code clean compiles, but there are still 12 remaining errors when compiling S1Bam20. However, I suspect it's a modelling problem rather than a manufacturing problem, as only one set of relationship attributes are exhibiting the problem (switch limbs.)
279,802 lines of code in S1Core20
1,305,895 lines of code in S1Bam20
1,585,697 lines of manufactured code in total
The 2.0 code manufactures as follows:
279,802 lines in S1Core20
1,306,657 lines in S1Bam20
1,586,459 lines total
1.11.3382 weighs in at 1,265,321 lines including the hand-written components. That's a 321,138 line difference.
The 1.11 model includes enhancements which have not been brought forward to 2.0 yet, so it will grow at some point in the future.
Remanufactured with the latest version of 1.9.
It's been a busy couple of days, but I've coded the GEL Compiler and runtime implementations. It clean compiles, but is completely untested and has not been wired to actually run yet.
Without the compiler in use, 1.9.2977 is already reasonably fast, and only takes:
The columns, indexes, and relationships for the DataScope, ViewSecurity, EditSecurity, ViewFrequency, and EditFrequency have been fleshed out for SchemaDef, Table, and Value.
The narrowest scope to define a non-null value for these attributes is the effective attribute value. The defaults will be as follows:
The Program has exceeded its Programmer in both speed and quality of code.
I spent 3 hours working on the S1Bam20 model. The manufactured code for that model now weighs in at 1,225,478 lines, an increase of 69,909 lines of code in about 3 hours. That's 23,303 lines of code per hour, 388 lines of code per minute, or 6 lines of code per second for my effort.
Now here's an interesting way to look at that six lines per second:
If I hold down the return key on my keyboard, it will generate new lines in a text editor at a BIOS-coded rate of about 5 per second.
So I can:
- Use vi to edit the XML business application model for S1Bam20
- Repeatedly run MSS Code Factory to validate the model and eventually manufacture the updated code
- Fire up an existing Eclipse project, do a refresh, a clean-all, and a build-all
- Check the clean-compiling code in to SubVersion with a 60Kbyte/sec upload
- And I can do all that work faster than my keyboard could produce blank lines in a text editor in the same amount of time!
My fingers hurt -- NOT!
The 1.9 rule base has been extended and fixed so that it properly manufactures the S1Core customization extensions. The code in its current state clean compiles, but it's not done yet by any means.
S1CFCore 2.0 weighs in at 306,017 lines
S1Bam 2.0 weighs in at 1,155,869 lines
S1Eng 2.0 weighs in at 234,892 lines (not bad for an outline)
Grand total for 2.0: 1,696,778 lines
The new Formatter family of atom executors will be used to support the new "format" directive I have planned for the GEL syntax. The new syntax will look something like:
$formatter ValueName FormatSpec$
FormatSpec is another expansion which provides a format specification similar to what Java Formatters use, except that because there is only one argument to a GenKbFormatter at runtime, the format strings should look like "%02d", not "%1$02d". Just skip the "1$" fragment that normally specifies the formatting argument number in Java syntax. If you use a format string that doesn't require spaces, you can effectively embed the format specifier as unquoted text in GEL, because GEL doesn't mandate Java/C/C++/C# style naming for its expansions.
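The single-argument convention can be checked directly against java.util.Formatter semantics via String.format. `GelFormatDemo` is just a test harness name; with one argument, the "%02d" and "%1$02d" forms produce the same result in Java, which is why GEL can safely drop the "1$" fragment.

```java
// Demonstrates the single-argument format convention described above.
// With exactly one runtime argument, the argument-index fragment "1$"
// is redundant, so GEL format strings omit it.
class GelFormatDemo {
    static String format(String spec, Object value) {
        return String.format(spec, value);
    }
}
```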
S1Core now weighs in at 298,843 lines of code, thanks to the addition of all the new Formatter objects. The implementation will actually be quite simple. For every column of a table that is currently mapped as a [Has]Binding, I'll map a Formatter that inherits from the appropriate S1Core type-specific formatter subclass.
At runtime, the general Formatter implementation will be invoked by the GEL interpreter with the format specification passed on down to the format( specifier ) method instead of the expand() method.
Being able to specify the format specifiers as either embedded values or named expansions is extremely powerful for doing NLS translations of a set of business rule expansions.
The formatter implementation locates its object the same as a binding, and invokes an appropriate Java Formatter with the format specifier and the bound column as arguments, passing back the resulting formatted string as the expansion of the item.
I've already stubbed out the base objects that implement the CFCore formatter objects as well.
I decided that although I'd like to do some intelligent parsing of the English language at some point, it does not make sense to merge that model with S1Bam. I need to stay focused on business application models and code with S1Bam, not get distracted. So I split out an S1Eng model.
Added some of the object names for parsing the English language. English has such a challenging structure of words, never mind meaning. But it's something I want to play with during the 2.x series, I have some ideas on how to make it parse and structure the sentences properly to provide hints for dictionary word meaning lookups. As it turns out, I remember very little of my Canadian High School English classes, even though they hammered grammar rules at us for three years running.
No modelling changes since last night, but a clean build has been confirmed.
AccessFrequency specifies when the user has to meet the AccessSecurity requirements.
Woohoo! There's nothing quite so satisfying as refactoring and enhancing a model to bring it up over the 1,000,000 line mark. There's just something magical about reaching a million lines of code... without including the web form prototypes that I've disabled for the CF manufacturings.
AccessSecurity specifies what level of access is required to view information. Both ViewSecurity and EditSecurity attributes are AccessSecurity values. EditSecurity must be at least as high as ViewSecurity, if specified. If EditSecurity is not specified, the ViewSecurity level is evaluated.
The default ViewSecurity will be Session.
The DataScope specifies where a particular piece of Data is to reside, ranging from globally Distributed data like DNS lookups to volatile object instance attributes that don't even survive JEE serialization. Remember the goal is to model distributed clusters, not just process or database information.
CacheScope values are DataScope values that specify what level of caching to use for the data items. The item will be cached in memory for any level more detailed than Host. Coarser caching levels will cache the data in a database instance running at the specified distribution level of the system.
The ParseObj hierarchy has been cleaned up so that it produces clean code now, and an abstract GUI object hierarchy has been sketched out as well. The hierarchy is not platform-specific at all; some of the features don't exist for some GUIs, and some GUIs have additional features that are not portable. I tried to strike a balance with the abstract GUI hierarchy.
The goal is to use XML modelling to define the layout of a GUI without getting down to nuts and bolts like physical screen coordinates. Instead, the widgets will have Left, Right, Top, Bottom, HCenter, and VCenter Anchor references and either the manufacturing process or the GUI's layout manager will take care of following that organization of widgets using the native toolkit's features.
The test build is a bust due to modelling errors with the new code base, such as duplicate Scope definitions. While the tools don't detect such erroneous conditions yet, they do cause problems in the manufactured code. Most of the issues look like they'll be easy to fix.
As part of the migration, I changed the CFCore references to S1Core references, supporting the 2.0 internal development code. This will allow me to test the 2.0 build process as a checkpoint item before releasing 1.9 as the current builds instead of 1.8. I think 1.8 is doing a stable job of providing the functionality planned for that release, so it's almost time to prepare a branch snapshot and announce the conclusion of 1.8 development.
The CFCore custom code from MssCF has been refactored to S1CF. I'm now ready to take a snapshot of the 1.8 rules as 1.9, and update the references to the new 2.0 S1Core instead of the 1.9 CFCore.
With 2.0, the CFCore is being renamed S1Core to make it easier to share a common code base between my upcoming commercial products and services and the existing MSS Code Factory open source core. This way all I'll need to do is change the names of the packages and hierarchy of directories to do mass copies of code from the S1Core commercial product to the GPL MSS Code Factory code base in the future, similar to the way Google does bulk releases of the Android code rather than making the development repositories publicly accessible.
MSSBam is being similarly renamed to S1Bam, and has had its package hierarchy tweaked a bit so that the S1Core and S1Bam can be built as jar libraries to be used by an encompassing code factory project that uses those libraries.
As to what S1 stands for, you'll just have to be patient. :)
I restructured the Bam object hierarchy to take advantage of the new CFCore capabilities to navigate generic object hierarchies without needing a fundamental AnyObj definition any more. AnyDef, ScopeDef, and a few others are gone. In their place are DbObj as the base of all schema objects, including Table Methods and the code components of a method.
I spent the weekend working on something a little different -- the internal object model parse structures for a generic language in the C/C++/Java/C# family.
The idea is to define the object structure for storing the results of parsing business logic code written in a generic language syntax, but to produce appropriate native code in C/C++/Java/C#, and possibly other languages if I can figure out how.
This is completely separate from GEL. GEL is used to produce source code or text files, and is powerful enough for its requirements as is.
The generic code came about from the desire to add Methods to Tables, bringing them that much closer to being Classes in the native languages. They are implemented as classes now, but I had to temporarily add a BLObj layer to capture custom business logic requirements. With Methods added, it will only be necessary to use the BL layer to implement custom SQL. Realistically, I haven't used the custom SQL capabilities of BL abstraction at all yet.
So far I've added the following language components: