When setting up new tables I am using DDL (SQL commands). My question is: what are folks doing to save these definitions? Granted, the definition is associated with the file, and you can go into iNav at any time and generate it, but I had a person delete one of the tables because they wanted to clear it, and then you lose your commands.
It would seem to me that the easiest thing would be to simply run iNav, generate the SQL for each table, and store that text somewhere.
But what do others do?
Keep them in a source file. Use RUNSQLSTM to execute them. RUNSQLSTM doesn't have a default for the file name, so I use QSQLSRC. Keeping them in source members also helps with version control.
If you're using TurnOver, see types PFTBL, LFIDX, and LFVIEW, and the related SQLSTM. They all have RUNSQLSTM as the create command. PFTBL, LFIDX, and LFVIEW also have the extra processing you get with the equivalent DDS files; e.g., PFTBL has CPYF options to preserve data.
I agree with Dale. I always use QSQLSRC to store the SQL definitions, and then use RUNSQLSTM to "compile" the source. Make sure you put a semicolon after each SQL statement.
The other thing is, if you do NOT qualify your file names in the SQL source, it will use your *CURLIB as the target for the creation... so change your *CURLIB before running the RUNSQLSTM command.
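For illustration, a member in QSQLSRC might look like the sketch below (the table, column, and index names are made up). Each statement ends with a semicolon, and because the names are unqualified, the objects land in *CURLIB (or wherever DFTRDBCOL points):

```sql
-- Hypothetical member CUSTOMER in QSQLSRC
CREATE TABLE CUSTOMER (
  CUSTNO   DECIMAL(7, 0) NOT NULL PRIMARY KEY,
  CUSTNAME CHAR(40)      NOT NULL,
  STATUS   CHAR(2)       NOT NULL DEFAULT 'A'
);

CREATE UNIQUE INDEX CUSTNAMIX
  ON CUSTOMER (CUSTNAME, CUSTNO);
```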
I created a PDM option named SQ that compiles SQL for me:
RUNSQLSTM SRCFILE(&L/&F) SRCMBR(&N) COMMIT(*NONE)
When you specify a library name on the Default Collection (DFTRDBCOL) parameter of the RUNSQLSTM command, then any unqualified files in create statements will be created in that library.
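For example, a call like this would create any unqualified objects from the member in PRODLIB (the library and member names here are hypothetical):

```
RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(CUSTOMER) +
          COMMIT(*NONE) DFTRDBCOL(PRODLIB)
```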
We are a complete DDL shop - no DDS files whatsoever.
We have a special source physical file, QDB2TBL, to store the DDL for every table in production. Each source member starts with a DROP TABLE statement to drop the table (which also drops all its data and its DB2 dependencies), followed by a CREATE TABLE statement for all columns, including constraints (primary key, foreign key, CASCADEs if any, etc.). Our standards then call for a CREATE UNIQUE INDEX statement for the keys, to start off the QRYMGR in the right direction. We include a few other statements to help the table show up correctly in some of our Windows and/or green-screen tools.
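A QDB2TBL member following those standards might look roughly like this (all table, column, and constraint names below are invented for the example):

```sql
-- Hypothetical member ORDERS in QDB2TBL
DROP TABLE ORDERS;   -- also drops the data and DB2 dependencies

CREATE TABLE ORDERS (
  ORDNO    DECIMAL(9, 0) NOT NULL,
  CUSTNO   DECIMAL(7, 0) NOT NULL,
  ORDDATE  DATE          NOT NULL,
  CONSTRAINT ORDERS_PK PRIMARY KEY (ORDNO),
  CONSTRAINT ORDERS_CUST_FK FOREIGN KEY (CUSTNO)
    REFERENCES CUSTOMER (CUSTNO) ON DELETE CASCADE
);

CREATE UNIQUE INDEX ORDERS_IX1 ON ORDERS (ORDNO);
```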
We have also created three PDM shortcuts, RD, RT, and RP, to execute the DDL in the DEV, TEST, and PROD environments as RUNSQLSTMs using the DFTRDBCOL parm.
You won't always want to DROP and re-CREATE a table just to change columns or attributes, so we also have a QDB2ALT source physical file that holds ALTER TABLE statements to add columns and/or attributes without killing the table or its data. These are executed with the RD, RT, and RP commands to make the changes in all three environments as well.
In our shop, making changes to tables without maintaining these source members is a mortal sin.
Places I've worked used QDDLSRC for the table/index/view creation sources; QSQLSRC was used for runtime SQL functions, i.e., initial table load, adding rows to a table, updating file data, etc. We'd use Turnover (or similar) to rename/move the old table, run RUNSQLSTM over the new source, then copy the data back into the new table format (new fields get their default values, so there's no problem there).
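Under Turnover that sequence is automated, but the manual equivalent is roughly the CL below (all object and library names are placeholders for this sketch):

```
/* Rename the old table out of the way */
RNMOBJ     OBJ(PRODLIB/ORDERS) OBJTYPE(*FILE) NEWOBJ(ORDERSOLD)
/* Re-create the table from the DDL source */
RUNSQLSTM  SRCFILE(MYLIB/QDDLSRC) SRCMBR(ORDERS) +
           COMMIT(*NONE) DFTRDBCOL(PRODLIB)
/* Copy the data back; *MAP fills new fields with their defaults,
   *DROP discards fields that no longer exist in the new format */
CPYF       FROMFILE(PRODLIB/ORDERSOLD) TOFILE(PRODLIB/ORDERS) +
           MBROPT(*ADD) FMTOPT(*MAP *DROP)
```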
I admit I do like the idea of separating the CREATE TABLE and ALTER TABLE sources, though it does leave me wondering how you handle successive ALTER TABLEs. Or do you both generate an ALTER TABLE when you need to add a few fields (or whatever other ALTERing you need) AND edit the CREATE TABLE to include the new changes, so that the CREATE TABLE will always generate the current table version, without having to run successive ALTER TABLE members when you need to create it fresh somewhere?
In reference to additional ALTERs: would it not be simpler to go into iNav and generate a new definition, so that you have a perfectly accurate representation that has it all?
The QDB2ALT member ends up looking like this over time:
ALTER TABLE MyTable
-- ADD CD_STATUS_SCAN CHAR (2) -- STEVECCNJ 2010-07-05
-- ADD CD_STATUS_PART CHAR (2) -- GEORGE 2011-08-05
ADD CD_SKU_DESC CHAR (2) -- STEVECCNJ 2011-12-12
;
...and we also maintain the QDB2TBL source to show the new column, even though we don't run it.
Since we maintain the DDL for the tables in the source members, it is a NO-NO to use iNav to go "back door" to make changes. You can use it to research whatever you want, you just can't use it to Generate SQL and then RUN it.
OK, that makes sense. I remember at my last place they ALTERed a table using iNav but never modified the base file source. So when I needed a copy of the file in my library, I tried to create it from source and copy the data, and got into all sorts of trouble due to the missing fields in my copy. Even if you never use it again, it's always good to keep a copy of source that will generate the file - just in case.
I am not saying that you would use iNav to make the changes; I am saying you would use iNav to generate an SQL definition for the file that would replace the existing one.
Maybe I am all wet, but if you do an initial create of the table, then a few ALTERs, and then a few more ALTERs, etc., before you know it your "source definition" is really a mess.
So what I am saying is: you generate a new "definition" out of iNav, so that what you have is a nice, clean, complete CREATE without all the ALTERs, etc.
I use a source file named QFILSRC for all my DDS and DDL that relates to PF, LF, Table (PF), Index (LF), and View (LF) sources. I create the table from the source I save there using RUNSQLSTM, and if I use something else to create/alter the table, I always retrieve the SQL back into that source file.
I keep the DSPF and PRTF stuff in QDDSSRC.
QSQLSRC is used for function and stored procedure sources.
We break our source out into separate files for each type:
SQLTBLSRC for table source,
SQLIDXSRC for index source,
SQLVIEWSRC for view source, etc.
Source members are modified and tagged like any other source as changes are required in production. Changes are managed by Turnover. How the source is created is up to the programmer, but most of us just do it in SEU or RDP.