Frequently asked questions about SURE
This list was last updated on November 19, 2003.
Where can I find the installation instructions?
We installed SURE on a PC, but we don't get a connection with the host.
How do I uninstall the SURE applications from the client if I need to re-install them?
Is SURE compatible with our MCP release?
We want to upgrade to another MCP release. What is the best procedure?
How can I connect to another host?
Can SURE be used by programmers who work at home?
How must I add a new license key?
Is it wise to load files of multiple application systems in one repository?
Where is a source placed on disk when I do a check-out of that source?
Is it possible that multiple users log on with the same usercode?
How can I assign a task that crosses more than one system?
Can we set options 'independenttrans' and 'reapplycomplete' for INFDB?
What is the best method to back up database INFDB?
How must I define a 'historical environment'?
What is the purpose of options 'Phase 1, 2, 3 development'?
How can I define a separate test environment in SURE?
Why does a baseline task not overlap with other tasks?
How does a file get an object location, and who can change it?
We get many syntax errors because SURE compiles copylibs. Why?
Do you have templates or examples of common employee functions?
How does SURE handle a mass compilation?
Who starts the evening batch? Can I change the execution time?
How do we recompile all programs that use a changed include file?
Our object names are not of the form OBJECT/<source>. Can we change that?
Is it possible to compile a dasdl and reorganize a database via SURE?
How can we ensure that all production objects are deployed by SURE?
The SURE evening batch goes into wait status for a tape OVZDISK. Why?
How can we set the priority of the batch compilations?
Which version of a copy-file is used for compilations in the evening batch?
What is the purpose of COMPILE-STATUS ABORT-START-JOB?
Is it possible to load files from another usercode into SURE?
How can we copy all files of a system out of SURE to my usercode?
Is it possible to use NX/Edit in 'local mode' on the PC?
Where can I find a task after I transferred it?
I transferred a task and now I don't see it anymore. Where can I find it?
How can I find the programs that use an include file?
I loaded some sources, but I don't see the include files of these sources.
How can I roll back a source to a specific version in SURE?
Which printer backup files appear in folder 'Printer Output'?
What happens with the sources in acceptance when I reactivate the task?
Should we load the develop sources or the production sources in SURE?
Is the action 'multi-compile' logged anywhere in SURE?
How can I undo the function 'make this my current task'?
A compilation started by a developer goes waiting for a copy-file. Why?
My source and object are removed from disk after a transfer. Why?
What is the best way to load a printer-backup file in Word?
How can we keep the compile listings on disk for later usage?
The following documents are present in the root directory of the CD-ROM:

Document Installation of SURE.doc gives all details about the installation procedure.

Document Configuration of SURE.doc gives a global overview of the main entities in SURE: environments, systems, projects, task-types, tasks, employee-functions, users, file-types and files.

Document Tutorial Configure.doc offers a quick introduction to SURE. When you work through this tutorial (4-8 hours), you will learn how to define extra environments, systems, task-types, tasks, employee-functions, users and file-types, and how to load files. The 'demo and quick-introduction' part will be very helpful for you, especially in combination with the document Configuration of SURE.doc.

Document Tutorial Lifecycle.doc repeats many things from the other two documents. This document describes a SURE test environment that was installed on our machine in Holland and that could be accessed via the Internet. We have disabled this test environment for the moment, but this manual still gives valuable information.

Notice also the Read Me button on the installation dialogue (shown when you start the installation). It gives information about how the software will be installed (shortcuts, directories, etc.).
The CD contains:
- The SURE software.
- A SURE manual (Acrobat Reader format).
- Various tutorials (Word documents).
- Presentations and examples (PowerPoint).

The SURE manual and software are always installed. We advise you to install the tutorials and presentations too. To do so, select the components 'Tutorial' and 'Presentation' on the install dialogue.

The PowerPoint presentations give detailed examples of how to work with SURE.
[A hardcopy with the active mix-entries on the host was attached]

I see that you are doing the installation with port number 999. The problem is that port numbers lower than 1024 can no longer be used since MCP release 47.1. If you use a port number lower than 1024, nothing will happen.

The solution is simple: change the port number to a value greater than 1024. This has to be done on the mainframe and on the PC.
On the mainframe (see the session sketch below):
- Log on to Cande with the usercode where you installed SURE (I presume that usercode is SURE).
- Disable the SURE application-program-interface (API) in Coms.
- Open file RISAPPLICATION/TITLES/WINDOWS.
- Change the port number on the 17th record from 999 to your number (place an @ sign directly after the port number, without intermediate spaces).
- Save the file.
- Enable the SURE API in Coms.

On the PC:
- Close SURE on the PC.
- Browse to the folder where you installed SURE and open file AW_OBJ.INI.
- Search for 999 and change it to your number (this has to be done twice!).
- Save AW_OBJ.INI.
- Log on to SURE and now it will work.
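As an illustration, the mainframe part could look like the Cande session below. This is only a sketch: the COMS program name (SURE_API), the new port number (1025) and the sequence number of the 17th record (1700) are assumptions that depend on your installation, and your site may prefer another edit command than FIX. Check afterwards that the @ sign still directly follows the new port number.

RUN OBJECT/RESPECT/TOOLS ("MIXCMD *SYSTEM/COMS, SM DISABLE PROGRAM SURE_API")
GET RISAPPLICATION/TITLES/WINDOWS
FIX 1700 /999/1025/
SAVE
RUN OBJECT/RESPECT/TOOLS ("MIXCMD *SYSTEM/COMS, SM ENABLE PROGRAM SURE_API")

(The RESPECT/TOOLS MIXCMD function used here to disable and enable the API from a command line is described in the answer about backing up database INFDB, further down in this list.)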
Start the install wizard again and you will see the following options:
- Install new application: use this function to install SURE for the first time, or to install an extra copy of the SURE application.
- Update existing application: use this function to install a new release of SURE. The current settings of the initialization file AW_OBJ.INI are kept, including any manual modifications in this file.
- Un-install existing application: all files that were placed on your PC when SURE was installed are removed. Other files are not removed, even when they are in the same directory on your PC.
- Re-install existing application: the SURE application is first un-installed (see above) and then installed again (as if it were a new installation). The installation is done with the same parameters (directory, IP-address, etc.) as before. The initialization file AW_OBJ.INI is re-initialized with the installation parameters; all manual modifications in the AW_OBJ.INI file are gone after the installation.

In this case you have two options:
1. Function Update, which keeps the current AW_OBJ.INI file intact.
2. Function Re-install, which refreshes the AW_OBJ.INI file.
We are going to upgrade the MCP level of
our mainframe. What problems can we expect?
Release SURE 5.0.1:
- MCP 4.6.x: Fully compatible.
- MCP 4.7.x: Problems with SYSTEM/BINDER if OBJECT/RIS/API/DRIVER is bound. This problem only applies to you if you received the sources of SURE. The problem is solved via a specific patch version of SYSTEM/BINDER (bind problems with global strings).
- MCP 4.8.x: Some SURE sources suddenly get syntax errors when they are compiled with the 4.8.1 DMALGOL compiler, while the same sources compiled without errors with the 4.7.1 compiler. This problem only applies to you if you received the sources of SURE. Patches are available.
- MCP 4.9.x: The following COMS problem is encountered with this SURE release: Unisys COMS warning 161, Deimplementation Warning for level 1 CDs (PRI number 10269521). These COMS warnings will be replaced by fatal errors in the first release of COMS after May 2004. As a result, all SURE releases older than 5.1.2 cannot run with a version of COMS released after May 2004. A patch is available.

Release SURE 5.0.2:
- MCP 4.6.x: Fully compatible.
- MCP 4.7.x: The SYSTEM/BINDER problem described above under SURE 5.0.1.
- MCP 4.8.x: Fully compatible.
- MCP 4.9.x: The COMS warning 161 problem described above under SURE 5.0.1. A patch is available.

Release SURE 5.1.1:
- MCP 4.6.x: Fully compatible.
- MCP 4.7.x: The SYSTEM/BINDER problem described above under SURE 5.0.1.
- MCP 4.8.x: Fully compatible.
- MCP 4.9.x: The COMS warning 161 problem described above under SURE 5.0.1. A patch is available.

Release SURE 5.1.2:
- MCP 4.6.x: Fully compatible.
- MCP 4.7.x: The SYSTEM/BINDER problem described above under SURE 5.0.1.
- MCP 4.8.x: Fully compatible.
- MCP 4.9.x: Fully compatible.
We have the following remark:
- Port numbers lower than 1024 no longer work since MCP release 47.1. Choose a port number greater than 1024, both on the mainframe and on the PC. You must also do this for your other client-server applications.
The SURE objects work with a DMSII database (INFDB). The Unisys DMSII software has a strong relationship with the objects that use a DMSII database: at compilation time, the update timestamps of the database structures are compiled into the object. This mechanism is very secure, but rather static when such object files need to be delivered to a customer.

If a customer installs new SURE objects, then these new objects must be able to communicate with any existing INFDB of that customer. The objects must not get a DMOPEN error. The SURE sources are NOT delivered to the customer, so it is NOT possible to recompile the SURE objects at the customer's site.

A SURE object is compiled at a central place and then delivered to many customers. Each customer works with its own MCP level and DMS level. A customer may have upgraded some physical dataset attributes (such as areasize or population) to avoid limit errors. Most customers have SURE installed under a deviating usercode and/or pack.

Summarized: a SURE object must be able to communicate with all INFDBs, on any DMS level, installed at any customer, with any usercode/pack.

This is possible under the following conditions:
- All these INFDBs are 'direct children' of the original INFDB, and the SURE objects are compiled against the description file of the original INFDB.
- The customer is NOT allowed to do database reorganizations with 'Record Format Conversion'. Only 'Garbage Collection' and 'File Format Conversion' are allowed (see chapter 7.2 of the DMUtilities Manual). This allows a user to update AREASIZES, TABLESIZES, PACKNAMES and USERCODES, and to do DMS upgrades.
- The customer must start the DMS upgrade with an original precompiled description file that is supplied by us.
- The customer must save the correct dasdl (with his last site-specific updates) that belongs to the INFDB that is going to be upgraded.

Read chapter 9.2.3 for more information about this subject.
Upgrade steps.

The CD contains a precompiled description file for each DMS level:

(INFRA)RELEASE/DESCRIPTION/INFDB/INFRA/DMS46
(INFRA)RELEASE/DESCRIPTION/INFDB/INFRA/DMS47
(INFRA)RELEASE/DESCRIPTION/INFDB/INFRA/DMS48
(INFRA)RELEASE/DESCRIPTION/INFDB/INFRA/DMS49

Copy the description file that matches your new DMS level, for example:

COPY (INFRA)RELEASE/DESCRIPTION/INFDB/INFRA/DMS49 AS DESCRIPTION/INFDB

Recompile the dasdl against this description file:

$ SET ZIP
$ RESET DMCONTROL
UPDATE
COMPILE DASDL/INFDB AS $INFDB
FILE INFDB ON <auditfile-pack>.

Update the control file:

RUN $SYSTEM/DMCONTROL("DB=INFDB RECOVER UPDATE");
DMCONTROL asks if a tape directory is required. Reply <mix>AX NO (see the example below).
DMCONTROL asks for the last audit file number. Reply <mix>AX <nr>

Make the database public:

SECURITY DMCONTROL/INFDB PUBLIC IO
SECURITY DMSUPPORT/INFDB PUBLIC IO
SECURITY RECONSTRUCT/INFDB PUBLIC IO
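The AX replies are ordinary accept-replies to the running DMCONTROL task, entered with the mix number of that task. As an illustration only, assuming DMCONTROL runs as mix number 4711 and the last audit file is number 117 (both values are examples):

4711AX NO
4711AX 117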
Explanation.
- At steps 5 and 6 you check that your current dasdl source belongs to your current database, so that all the definitions in your dasdl source are really in your database.
- At steps 7 and 8 you overwrite your current description file with an 'original' description file from the CD. This original description file is already compiled for the correct DMS level, so you won't have to do the DMS-level upgrade anymore. The original description file does not contain your site-specific declarations (usercode, packname, etc.).
- At steps 9, 10 and 11 you upgrade the original description file with your site-specific declarations, which are in the dasdl source. The result is a description file with our original dataset timestamps and with your physical attributes (usercode, pack, areasizes).
- At step 12 you ignore the message 'reorganization required', because your database already has the layout that is described in your dasdl. You checked that already at steps 5 and 6.
- At steps 13 and 14 you update the control file. The dataset timestamps are copied from the description file to the control file, and that makes the database available for our objects.
- At step 15 you make the database public, so that any user can start it up.

This procedure does not touch the database files themselves; only the control file, description file, DMSUPPORT library and RECONSTRUCT program are involved.
Do I only have to modify the AW_OBJ configuration file if we need to connect to another host?

Basically you are right. If you change the IP-address and port number in the AW_OBJ.INI file, then you will connect to another host. And then you will also be connected to another repository (because the INFDB repository cannot be placed on two different hosts). Notice that you must change the IP-address twice in the AW_OBJ.INI file.

There are two (minor) problems. First, some temporary settings of your SURE session are also kept in the AW_OBJ.INI (for example the contents of folder 'last-edited-files'). These settings are repository-dependent. Usually it won't give any problems if some unknown stuff is placed in the AW_OBJ.INI.

The second problem has to do with the repository on the mainframe. A copy of the mainframe repository is placed on the PC (in directory C:\SURE\RIS\DATA\INFDB\*.*). If you connect to another host, then you will also connect to another repository, and then the contents of your local repository won't match the contents of the mainframe repository anymore. SURE gives you a warning about this at log-on time, and the local repository on your PC is then refreshed with a newly created version. So this second problem is handled automatically by SURE, but you will get a message at log-on time.
Here is the scenario:
- A programmer is at home when he gets a call to fix a program at work.
- He wants to fix the program from home.
- He does not have a TD830 terminal emulator and does not have forms-mode capability.
- He does not have the SURE client installed.
- He has access to Cande but not to Marc (because of the forms mode).
- He can start jobs, run programs, and compile.
- He needs to be able to check files out and in and make quick fixes.
- This also requires creating a task and solving / promoting the task.

How can these requirements be met?
Make it possible for the user to install the SURE client software directly from the Internet. That should be a special thin-client installation (no help files, no presentations, etc.). This installation takes about 15 minutes: it downloads the client software, installs it on the PC in a fixed directory, and links the SURE client automatically to your host.

All the programmer has to do is go on the Internet to the address where you placed the link to the thin-client installation and start the installation.

Notice that the programmer only has to do this once; the second time, the client software is already there. There is also a check in SURE that the client software is compatible with the mainframe software: a warning is given at log-on time if the mainframe software was changed (because of a new release) but the client software was not updated.

We can help you to set up such an installation.
I received a license key for SURE, but how must I add it?

Use function Menu/Options/License../Button:New/'for CBS order number' to add a new license key to SURE. Enter the following fields:
- CBS order number: the order number that you received from Unisys. If the order number consists of 2 parts, then enter 0 in the third field.
- Date license issued: the key date that you received from Unisys.
- Key: the key that you received from Unisys.
- Number of clients / Site license: enter / choose the correct option.
We have several separate application systems. It is almost as if they are on separate machines: very few files are shared between the systems, and those can be reconciled manually if necessary. Each system has a team of people working on it. It is essential that people cannot access the source code of another team. The object code for each system must go to separate usercodes on separate packs. Do we need separate repositories?

It is possible to install multiple SURE repositories on the same machine (for example: if two teams are working on completely separated products, these products do not use any resources of each other, the release planning of each product is completely unique, etc. Summarized: if the two products could have been developed on totally different machines). However, most of our customers have chosen to store the sources of all their application systems in one and the same SURE repository.
There are two basic rules:
- Try to load all application systems in the same repository.
- Load an application system in a separate repository if its release mechanism differs from that of the other application systems. The release mechanism means: the SURE environments that are applicable for that application system and, more importantly, the moment that new software is released for production.
Considerations for one repository versus multiple repositories are:

1. SURE creates a data dictionary on file level: which copy-files, datasets, databases, libraries, data-files, etc. are in use by which programs. You write that in your case various application systems access other application systems (for inquiry purposes). This will be detected by SURE and you can make queries on that, but this is only possible if all sources of all application systems are loaded in the same repository.

2. Integrated views of the status of all projects are possible when all sources are loaded in the same repository.

3. One repository means that all sources are available in one place: SURE. You never have to ask yourself the question "where can I find the correct version of that source", because the source is in SURE.

4. Each repository triggers a number of batch processes:
- A backup procedure for the repository database itself (a database dump via dmutility).
- An evening batch procedure (compilations, object transfer, repository cleanup) for each environment of that repository.
SURE has an evening batch (per environment) where changed files are compiled and the objects released to their final object locations. These evening batches are usually monitored by the operating staff: when the SURE evening batch is completed and all necessary application objects are compiled and installed then the operator can start the evening batch of the application system. “Multiple repositories” means more complications for the operating staff.
Suppose
that you have 20 separate application systems, and each system requires 3
environments (develop, test and production), and you load each application
system in a separate repository. In this case, you will have 20 database backup
procedures, and 60 evening batches. One of the main reasons to install SURE is
to improve the procedures that have to do with source maintenance and
application deployment. But in this example everything is complex and the
operating staff will make mistakes easily. If all 20 application systems are
loaded in the same repository, then only one repository has to be backed-up, and
the number of SURE evening batches is also limited to 1 per environment. That
is a simple procedure that can easily be understood by everybody in the
building.
5. Security aspects:

It may be a requirement that developers of system AAA cannot access files of system BBB. It is possible to set up the SURE authorizations in such a way that developers of application system AAA cannot change files that belong to application system BBB. It is also possible to hide files that don't belong to the system of the developer.

The authorization mechanism of SURE is very flexible. It is possible to define authorizations per environment and per application system. So it is possible to set authorizations such as: allow person XX to access files (check-out, check-in, compile, find, replace, list, etc.) of system AAA but not of any other system; allow person YY to access files of system BBB but not of any other system; etcetera.

It is possible to link a person to one or more application systems. This limits the various task lists (ToDo, TaskBusy, TaskReady, etc.) of that user to tasks of those systems.

All this means that authorization aspects are not very relevant in this discussion.
6. Timing
aspects:
If all application systems
are stored in one repository then you have only one evening batch for each
environment of that repository. All objects that were created during that
evening batch are released at the same moment.
Suppose
that you have 1 program of system AAA in the compile queue and 100 programs of
system BBB. Compiling the single AAA-program takes 1 minute, but compiling the
100 BBB-sources takes 1 hour. If the AAA program is important for today's
evening batch of system AAA, then that evening batch has to wait an extra hour
until the compilations for the application system (BBB) are completed.
A compile-fast procedure is available to
avoid this problem.
7. Location aspects:
- The development packs of two systems may differ.
- The location of the object files of two systems may differ.
- The location of the copy-files of two systems may differ.

All these options can be addressed by SURE.
8. Environment aspects:

The repository is sub-divided into environments. Some environments are meaningless for some application systems. It is possible to exclude environments for specific systems.
Basically,
there are two methods to bring new software to production: the ongoing method,
and the frozen-release method.
Suppliers
of software packages mostly use the frozen-release method: once a year a new
version of the package is made. This release contains new features and fixes
for all reported problems. Urgent errors in a release are fixed via patches. A
release is a 'frozen' state of the total application system.
Sites that develop their own applications mostly use the ongoing method. Each day some new features are transferred to production. Errors are fixed immediately. The total application system is never frozen but always in progress.
Application
systems that are developed via the frozen-release method are usually loaded in
a separate repository. The benefit is then that each frozen (and historical)
release can be kept in a separate SURE-environment of that repository. The
release interval is usually unique for each application system and that
requires a separate repository.
If all
application systems are developed via the ongoing method then there is no
reason to load those systems in separate repositories. The set of required
SURE-environments for each system is not really relevant. Suppose that you have
two systems. System AAA needs a DEVELOP, a TEST and a PRODUCTION environment.
System BBB needs only a DEVELOP and a PRODUCTION environment. In that case you
have two options: you can exclude environment TEST for system BBB, or you can
remove the default object-locations of system BBB for environment TEST (the
exclude method is very static, the object-locations method is flexible). In
both cases nothing of system BBB will be compiled for the TEST environment.
Developers of the BBB-team can hide environment TEST in the SURE browser.
9. Ownership
aspects:
If
multiple application systems are loaded in the same repository, and a separate
team develops each of those systems, then there must be a SURE coordinator who
stands above those teams. This coordinator is responsible for the
authorizations, for repository configuration aspects, etcetera. If each
application system is loaded in a separate repository, then the team leader can
do this job.
10. Etcetera.
Conclusion:
The fact that two
separate application systems don't overlap is not very relevant in this
discussion.
Use a separate
repository for application systems that are developed via the frozen release
method.
Load all application
systems that are developed via the ongoing method in the same repository.
The file is placed in the work environment of the
person who did the check-out.
The work environment is the place in Cande where the developer does his work. An important part of the programmer's job is to compile the source he is working on; this compilation requires that the correct versions of copy-files and description files are visible from his work location. Another important part of the programmer's job is to test the newly compiled object; this test requires that a correct set of other application objects is visible from his work location.
The non-SURE situation: a developer who is not working with SURE arrives in his work environment when he logs on to Cande. He edits his files in this place, does his compilations, test work, etc. The developer knows which application system he is working on, so he logs on with a usercode that 'belongs' to that application system. In that case the user decides his work environment.
The SURE situation: in SURE it is possible to link a work environment to an application system. For example: the work environment of system AA is usercode (DEVAA) ON DEVPACK, and the work environment of system BB is usercode (DEVBB) ON DEVPACK. Different systems may have the same work environment.

The developer always logs on to SURE with his personal usercode (for example SIMON). When he is going to work on a file that belongs to system AA, his work environment on the mainframe must be (DEVAA) ON DEVPACK; when he is going to work on a file that belongs to system BB, his work environment must be (DEVBB) ON DEVPACK. Summarized: SURE must link a user to the work environment that belongs to the system of the file on which that developer is working.
If the file of system AA is checked-out from SURE then
that file is placed on the mainframe in the work-environment (DEVAA) ON
DEVPACK. When the user is going to compile that program, then the compilation
is started in the same work-environment.
In this case the work-environment of a user is
determined by SURE (and thus by the system administrator who defined the
work-environments in SURE).
A special work-environment is the global directory ‘*’
(for example: * ON DEVPACK). In that
case the work-environment consists of all usercodes on DEVPACK.
The programmer still logs on with his personal usercode SIMON. If a file is checked out from SURE, then the source is placed on the mainframe under usercode (SIMON) ON DEVPACK. If the source is compiled, then the compilation is started under usercode (SIMON) ON DEVPACK, but the resulting object is copied to the global directory * ON DEVPACK, where it is visible for all other developers. (Notice that this is similar to the situation where the work environment is DEVAA: if a program is compiled, then the object remains under usercode DEVAA, where it is visible for all other programmers that work on system AA, because all these programmers work in the same work environment.)
If the developer is changing a copy-file then the
temporary (work) version of that copy-file is on the mainframe under his
personal usercode (in this example SIMON). When the final version of the
copy-file is checked-in to SURE, then that version is also copied to the defined
work-environment (* ON DEVPACK) where it becomes available for the other
developers of his team.
Summarized: if the work-usercode is not * but (for example) DEVAA, then all checked-out files are available under that usercode. If the work-usercode is *, then the checked-out files are placed on disk under the developer's personal usercode.
The defined work-environment is also the location where the copy-files are placed (after check-in) and where the test objects are placed (after test compilation).
Default work environment
It is not mandatory to define a work-environment for
an application system. If no work-environment is defined for a system, or if an
invalid (unknown) usercode is used in the work-environment, then the default
work-environment is used. The default work-environment can be defined per
Sure-environment.
Roles are used to customize the appearance of the SURE browser to a user's needs. Menu choices and browser folders are hidden or made visible according to the role of the user. If the user did not choose a specific role, then all menu choices and folders are visible.

The following list explains the purpose and some characteristics of roles:
- The purpose of a role is to make the SURE browser interface easier to understand for the user. Folders that are not necessary for a user's role are hidden.
- It is not possible to customize the set of folders that are activated for a specific role; the links between folders and roles are hard-coded in the software. A user's authorization scheme is not used to hide/unhide folders, with one exception: folder Configuration is hidden if the user does not have authorization 'Global/Options'.
- The authorization mechanism remains active. If a function is linked to a role, but the user is not allowed to perform that function, then he will get an authorization error.
- The user can still hide specific folders.
- If a user always has the same role (e.g. always 'developer'), then that role can be linked to his usercode. In that case, the user won't have the possibility to choose a role on the logon screen.
- If a user can work with multiple roles (e.g. sometimes 'project leader' and sometimes 'developer'), then these roles should not be linked to his usercode. The user may choose his appropriate role on the logon screen. If he chooses role 'none', then all folders are available for him.
Employee-functions are used for two purposes. Note that employee-functions are NOT the same as roles.
The SURE repository can be sub-divided into multiple environments (up to 8). The site can choose the number of environments and the environment names. It is possible to add or delete an environment later on. The set of environments in SURE should reflect the various stages in the software development process at your site. For example: if your site has a development team, then you should have a develop environment. If the software has to be tested by a test team, then you will need a testing environment. If the software has to be checked via an integration test with other software modules, then you will need an acceptance or integration environment. Finally you will have a production environment that contains the production software. If your site does not have a testing or integration team, then it won't be necessary to define these environments.
The word 'project' can have two meanings:
- A sub-system.
- An amount of work to be done.

In the SURE context, the word 'project' means 'sub-system'. 'An amount of work to be done' is in the SURE context 'a task'.

A file belongs to a project, and a project belongs to a system. A system is always a project of itself.

Example: SYS-1 is a system and PROJ-1 is a project of that system.
- Project SYS-1 is (by definition) a project of system SYS-1.
- Project PROJ-1 is (manually defined as) a project of system SYS-1.
- File FILE-1 is (manually) linked to project PROJ-1 and thus automatically linked to system SYS-1.
- File FILE-2 is (manually) linked to project SYS-1 and thus automatically linked to system SYS-1.
The usercode that is entered at log-on time is primarily used for authorization, identification and logging purposes. This usercode is not necessarily equal to the workspace in Cande where the developer does his work.

It is possible for multiple users to be logged on with the same usercode, but we certainly don't recommend it, because those users will hinder each other during their daily routines.

An example: a programmer can only change a file when that programmer is linked to a current task. When two persons are logged on with the same usercode, they can change each other's current task; that won't happen if each person is logged on with his own personal usercode. Therefore, it is highly recommended that each person has his own unique personal usercode.
Sometimes a request crosses more than one system. For example, for a feature that must be implemented in system 'trust card', we also must do something in system 'deposit'. This request must be translated into a task, so this task must cross the deposit system and the trust card system. But when I add a new task, there is just one field for the project name, and a project name belongs to only one specific system; it cannot cross two systems. So I can't create a task that belongs to both systems.
When you are going to maintain a source (check-out, edit, check-in), you need a workspace in Cande (a usercode on a pack) to do so. This workspace (or work-environment) is inherited from the system definition in SURE. If the work-environment of the system is (SYS1) ON WORKPACK, then SURE places the checked-out file on disk under that usercode on that pack, and you must log on to Cande or NX/Edit with that usercode.
but a task is also linked to a project (and inherits a system).
By default a user can work on files and tasks of all
projects, but if a user is linked to list of projects/systems then he can only
work on tasks and files of those projects and systems and the tasks of
other won't appear in this task-Life-Cycle folders (My-task, my-team-tasks,
etc).
The global options dialogue contains an option: 'User
can work on files from projects outside his defined project list' (Right click
on 'Environment'/'Global Options'/Tab sheet 'Security'). If a user has a
project-list and this option is set, then that user still can't work on tasks
of other projects, but he will be able to work on files of other
projects.
Now there are two possibilities: the work-environment can be inherited from the system of the file (the default) or from the system of the task. The global options dialogue contains the option 'Use task system work-environment (instead of file system)'. If this option is set, then the work-environment that is defined for the system of the task is used (right click on 'Environment'/'Global options'/Tab SURE).
Notice also that it is possible to define a system
that is only used for tasks and not for files. You can define a specific
workspace for that system, and if you are working on a task of that system then
you are routed to the workspace in Cande that is defined for that task-system.
So if a user is not linked to a project-list,
or if option 'user can work on files of other projects' is set, then the
developer can access all files of all projects.
If option 'use task-system workspace' is set, then he
will work in the work-environment that is defined for the task. All checked-out
files will then be placed in the same workspace in Cande (usercode/family) no
matter what their systems are.
Notice that it can make sense to link a user to a project-list and then set the option 'user is allowed to work on files of other projects'.

Regarding the options 'independenttrans' and 'reapplycomplete' for INFDB: for technical reasons these options must remain reset. The design of the INFDB repository is in conflict with the usage of these options. If these options are set, then many deadlocks will occur in the tables of the sets of dataset DREL.
Database INFDB is kept open by library RESPECT/LIBRARY and by the online interface. The online interface goes automatically to end-of-task after one hour of inactivity. Library RESPECT/LIBRARY thaws automatically after not being accessed for 5 hours. Therefore, in normal circumstances the database will be out of the mix each morning.
Is there a preferred mechanism for backing up the INFRA DB? It would seem most simple to do an offline backup nightly to tape, and then just remove the audit files. To do this I would need a way to programmatically close the DB. Then I could perform the offline dump, remove the audit files from disk, and then programmatically tell SURE it is okay to open the DB.

There is no preferred method. Some sites do an online database dump, followed by a forced close of the audit file. Other sites do an offline database dump.

The frequency of the database dump also varies per customer, depending on the number of transactions that are done. Sites with a small development team dump the database once a week. Sites with a large development team dump the database each day.
There are four types of programs:
- Programs that do not access the database.
- Library RESPECT/LIBRARY.
- The SureWindows interface on the mainframe (OBJECT/RIS/API/=).
- Other programs that access the database and that can be started via a RUN in Cande.

The following technical aspects are important in this issue:
- All programs that access the database do that in UPDATE mode; there are no inquiry-only programs.
- Almost all programs do a call to library RESPECT/LIBRARY. This library also opens the database in update mode, and it is a permanently frozen library.
- Some programs of the SureWindows interface access RESPECT/LIBRARY and/or the database.
Most sites want to
have the repository available for the development team as much as possible.
They also want to have the repository available during the night, because
sometimes a developer has to make a quick-fix to a source.
The benefit of an
online database dump is that the database remains available for the developers
while the database dump is busy. Notice that all programs open the database for
update, and these programs can’t run during an offline dump.
The SureWindows
interface is usually always enabled (in Coms), unless there are technical
reasons to disable it (in case of a software upgrade, or in case of a
calamity).
All these
considerations and technical aspects lead almost automatically to ‘online
database dump’, but it is still very well possible to use the offline method
because:
- The SureWindows interface on the mainframe is always enabled in Coms, but the programs that access the database and/or RESPECT/LIBRARY go automatically to end-of-task after one hour of inactivity.
- Library RESPECT/LIBRARY thaws automatically after not being accessed for 5 hours.

As a result, in normal circumstances the database goes to end-of-task automatically each night.

If the database has to be dumped via the offline method, then it is necessary to bring down the database programmatically, because you want to control the moment of dumping. The database always goes immediately to end-of-task when the SureWindows interface in Coms is disabled and RESPECT/LIBRARY is thawed. This can be done manually, but both actions can also be done programmatically with function MIXCMD in program RESPECT/TOOLS. This batch function must be used with the name of a mix-entry and a command for that mix-entry. This makes it possible to thaw a library and disable a program in Coms via a batch job.
Example:

RUN OBJECT/RESPECT/TOOLS
    ("MIXCMD [LIB](SCRATCH)RESPECT/LIBRARY/SCRATCH/INFDB ON IDRD, THAW");
RUN OBJECT/RESPECT/TOOLS
    ("MIXCMD *SYSTEM/COMS, SM DISABLE PROGRAM SCRATCH_API")

This example THAWs library (SCRATCH)RESPECT/LIBRARY/SCRATCH/INFDB ON IDRD (the instance of RESPECT/LIBRARY that works with a database INFDB that is installed under usercode SCRATCH), and DISABLEs program SCRATCH_API in Coms (the SureWindows interface for that repository).

The name of the program in the parameter must be equal to the name of the mix-entry (with usercode and packname). A comma is used to separate the command from the name.
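As an illustration, both actions and the offline dump itself could be combined in one WFL job. The sketch below is only an example under stated assumptions, not a delivered SURE procedure: the library name and pack (here the standard usercode SURE on pack DISK), the COMS program name (SURE_API) and the DMUTILITY dump syntax must all be verified against your own installation and the DMUtilities manual for your DMSII release.

?BEGIN JOB SURE/OFFLINE/DUMP;
    % Sketch only. Thaw the SURE library so the database can close;
    % the library name assumes SURE is installed under usercode SURE on pack DISK.
RUN OBJECT/RESPECT/TOOLS
    ("MIXCMD [LIB](SURE)RESPECT/LIBRARY/SURE/INFDB ON DISK, THAW");
    % Disable the SureWindows interface in Coms (assumed program name SURE_API).
RUN OBJECT/RESPECT/TOOLS
    ("MIXCMD *SYSTEM/COMS, SM DISABLE PROGRAM SURE_API");
    % In practice, wait here until the database is really out of the mix.
    % Then dump it offline (verify this DMUTILITY syntax for your release).
RUN $SYSTEM/DMUTILITY ("DB=INFDB DUMP = TO INFDB/DUMP");
    % Re-enable the SureWindows interface afterwards.
RUN OBJECT/RESPECT/TOOLS
    ("MIXCMD *SYSTEM/COMS, SM ENABLE PROGRAM SURE_API");
?END JOB.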
How could I define the "Historical Environment" above Production so that the normal transfer process will stop at Production?

The new environment above environment PRODUCTION must be defined as follows:
- Right click on environment PRODUCTION and choose NEW to go to the dialogue where you can create a new environment. The settings of environment PRODUCTION are pre-filled, and you must change some of them according to your own needs (you should change at least the default-workfile-location and the SURE-batch-location).
- Enter the name of the new environment in field 'Environment'.
- Enter PRODUCTION in field 'Environment below newly created environment' and click on 'Copy upward'. This copies environment PRODUCTION upwards to the new environment.

Your second question: how to stop the transfer at environment PRODUCTION?

Go to the properties of environment PRODUCTION (right click on PRODUCTION and choose Properties) and set option 'Task arriving in this environment is indicated as SOLVED'.

If this option is set for an environment, and a task is transferred to that environment, then that task gets status SOLVED. It is not possible to transfer a task that has status SOLVED. Therefore, it is not possible to transfer tasks from PRODUCTION to your new environment.
I read the Assistant ppt file, and it says that these fields help to separate the total develop process into three environments: Design, Global-Function, Develop. It also mentions that the task of Reorganization could be used here in order to increase the productivity of the developer. I am more confused after reading the explanation. How would these fields impact the other environments?

These fields are useful if you have a layered application design.
Suppose that you designed the application software in such a way that it has three layers:
- The lowest layer contains the database plus all basic modules that actually access the database (such as 'add customer', 'add account' or 'modify customer').
- The middle layer contains global business functions (such as 'calculate interest', 'open new account' or 'expire credit-card').
- The third layer contains all software modules that deal with the user interface: online programs, batch programs, etc.
The programs of the third layer call modules from the
second layer, so the second layer functionality must exist before it is
possible to start with the development of third layer functionality.
The modules from the second layer call modules from
the first layer (for example: module 'open new account' calls ' add/modify
customer' and 'add account'), so the first layer functionality must exist
before it is possible to start with the development of second layer
functionality.
Suppose that you have the following environments:
- PRODUCTION
- TEST
- USERINTERFACE, with option 'Phase 3 development' SET
- GLOBAL-FUNCTIONS, with option 'Phase 2 development' SET
- DB-DESIGN, with option 'Phase 1 development' SET
Environment DB-DESIGN is used to develop and test the first layer functionality, such as:
- Add a new dataset to the database and create basic modules to add/inquire/modify/delete/browse records of that dataset.
- Add a new item to an existing dataset, and update the basic module that performs the actions on that dataset so that the new item is also processed.
- Delete an existing item from an existing dataset, and update the basic modules.
- Etcetera.
Each modification of the dasdl requires updates in one
or more basic software modules. This is all done in environment DB-DESIGN. The
changed dasdl and the changed software modules are all linked to the same task.
The task remains in DB-DESIGN until all changes are well tested, and then the
task is transferred to environment GLOBAL-FUNCTIONS.
Environment GLOBAL-FUNCTIONS is used to develop and test the second layer functionality (global business functions):
- It is possible that a global business function must be adapted because of a change in the database (for example: an existing dataset was deleted from the dasdl, and the basic modules for that dataset are also deleted, so the business functions in the second layer that call those deleted modules have to be adapted).
- It is possible that a global business function must be changed for a reason that is database-independent (for example: the formula to calculate the interest must be changed in business function 'calculate interest').
The business function is developed and tested in environment GLOBAL-FUNCTIONS. The changed software modules are linked to a task, which means that these functions are error-free and well tested when the task is transferred to environment USERINTERFACE.
Environment USERINTERFACE is used to develop and test
the third layer functionality. All software modules of the first and second
layer are stable in this environment, because these modules were already tested
in a lower environment and that increases the productivity of the developers.
Environments PRODUCTION, TEST and USERINTERFACE
contain ALL application sources.
Environment GLOBAL-FUNCTIONS contains the sources of
the first and second layer functionality.
Environment DB-DESIGN only contains the sources of the
first layer functionality.
The three bits make it possible to mirror the application layers in the development process, and that improves the stability of the application.
If the application software does not have a layered
design, then you probably will have only one develop environment with all three
bits set.
It is also possible that the application software has two layers: one develop environment with two bits set, and another develop environment with one bit set.
The SURE environments that have one or more of these bits set together form the 'development environment' (for example: DB-DESIGN, GLOBAL-FUNCTIONS and USERINTERFACE together form the 'development environment').

Some specific actions are performed by SURE for the 'development environment':
- A task gets a QUICK-FIX indication if it is added or reactivated in a non-development environment (in the example: TEST or PRODUCTION).
- RESPECT/SURE/COMPILE will never use the Cande version of a checked-out source or copy-file when it is running in a non-development environment.

Each SURE environment that is part of the total development environment (DB-DESIGN, GLOBAL-FUNCTIONS, USERINTERFACE) is treated equally by SURE.
The content of each of these SURE-environments is
determined by the site.
My two colleagues are testing the SURE functions in environments DEVELOP, ACCEPTANCE, and PRODUCTION. Is it possible for me to create my own separate environments, like JACOB_DEVELOP, JACOB_ACCEPTANCE and JACOB_PRODUCTION, so that I can do my own tests without interfering with theirs?

This is a normal situation, because usually all files of all application systems are loaded in the same repository, and developers of system A don't want to be hindered by stuff of system B (and vice versa). The defined SURE environments apply by default to all loaded files of all systems, so defining extra environments won't be the solution for this.
There are several methods:
First of all: each of you should work with a unique
personal usercode.
The usercode is used for all kinds of identification purposes. One of the usercode attributes is the current work task: a person can only modify files when he is linked to a current work task. A usercode cannot be linked to multiple work tasks simultaneously. So if you are all working with the same usercode, and somebody changes his current work task, then the current task is changed for all of you. That is very inconvenient, and it won't happen if each team member has his own personal usercode. (Notice that the personal usercode is not by definition equal to the usercode of the Cande work environment.)
The first solution:
You can define a new SYSTEM, and link your usercode to
the new system-name and the other usercodes to the other system-names. Now you
will only see files and tasks of your own system in the Life-Cycle folders.
That creates a good separation on system level.
A very different method is:
You can install an extra Sure repository to do your
own tests. Notice that it is only necessary to create extra repositories if you
want to do some tests with a different set of environments.
Creating an extra repository can be done via the installation program on the mainframe: OBJECT/RIS/INSTALL. Your current repository is installed under usercode (SURE). You can install a second SURE system under any other usercode, for example usercode (JACOB). You have to copy the SURE objects on the mainframe (OBJECT/RIS/= and OBJECT/RESPECT/=), and it requires an extra port number (higher than 1024) and an extra program definition in Coms. Then you have to do an extra installation of the client software on your PC, in a separate folder (for example C:\JacobSure\*.*), and connect that client to that same port number.
The problem with an extra repository is that you won't
learn how to work with different projects and systems in one repository.
Normally all application projects and systems are loaded in the same repository.
When I studied your PPT explaining the Baseline
concept, I found an interesting statement:
"Modifications for a specific
baseline-task can be transferred to production separately from modifications
for other tasks."
I can't see how it can be transferred to production separately, if the patch becomes visible to other baselines after it is transferred away from the Develop environment. It seems unavoidable that baselines interfere with each other once the patches are visible to each other.

I will explain this via an example with the following environments: DEVELOP, MERGE, INTEGRATION and PROD.
1. Consider the following file:
PROG/XX in all
environments:
00000000% VERSION 2.1
00000100 BEGIN
00000200 DISPLAY(“line 1”);
00000300 END.
The status of this file is stable: the same
version is resident in each environment.
2. The file is checked-out in environment
DEVELOP for baseline BB and the following modification is made (the
BB-modification):
00000000% VERSION 2.1
00000100 BEGIN
00000200 DISPLAY(“line 1”);
00000250 DISPLAY(“LINE BB”);
00000300 END.
This results in the following patch-file for baseline BB:
PATCH/PROG/XX/001 in DEVELOP:
00000250 DISPLAY(“LINE BB”);
Program PROG/XX itself is not changed because
the modification is made via a patch-file.
3. The BB-modification is transferred to
environment MERGE. Baselines are not activated in this environment. The
BB-modification is still in a separate patch-file, but ALL available
patch-files are always merged (temporarily) into the source when that source
has to be compiled:
The merged PROG/XX in
environment MERGE:
00000000% VERSION 3.1
00000100 BEGIN
00000200 DISPLAY(“line 1”);
00000250 DISPLAY(“LINE BB”);
00000300 END.
In MERGE, the source itself still has the same content as in point 1, and the BB patch-file still has the same content as in point 2.
4. The file is checked-out in environment
DEVELOP for baseline AA and the following modification is made (the
AA-modification):
00000000% VERSION 3.1
00000100 BEGIN
00000150 DISPLAY(“LINE AA”);
00000200 DISPLAY(“line 1”);
00000300 END.
This results in the following patch-file for
baseline AA:
PATCH/PROG/XX/002 in
DEVELOP:
00000150 DISPLAY(“LINE AA”);
Program PROG/XX itself is not changed because
the modification is made via a patch-file.
5. The AA-modification is transferred to
environment MERGE. Baselines are not activated in this environment. The
AA-modification is still in a separate patch-file, but ALL available
patch-files are always merged (temporarily) into the source when that source
has to be compiled:
The merged PROG/XX in
environment MERGE
00000000% VERSION 3.1
00000100 BEGIN
00000150 DISPLAY(“LINE AA”);
00000200 DISPLAY(“line 1”);
00000250 DISPLAY(“LINE BB”);
00000300 END.
In MERGE, the source itself still has the same content as in point 1. The BB patch-file still has the same content as in point 2, and the AA patch-file still has the same content as in point 4.
Both patches are now for the first time in the
same merged source. This is the place where the merge process can be checked
for physical and logical errors. The check on logical errors is a manual check.
SURE gives a warning if two patch-files have records with the same sequence
number (physical overlap). The compiler gives syntax-errors if the merge result
is invalid.
6. The AA-modification is transferred to
environments INTEGRATION and PROD. The AA-modification is still in a separate
patch-file, but baselines are not activated in these environments and that
causes that ALL available patch-files are always merged (temporarily) into the
source when that source has to be compiled:
The merged PROG/XX in
environment INTEGRATION and PROD
00000000% VERSION 3.1
00000100 BEGIN
00000150 DISPLAY(“LINE AA”);
00000200 DISPLAY(“line 1”);
00000300 END.
In INTEGRATION and PROD, the source itself still has the same content as in point 1. The BB patch-file is not yet available in these environments. The AA patch-file still has the same content as in point 4.
Summarized:
- A patch for baseline BB was transferred to MERGE.
- A patch for baseline AA was transferred to MERGE, INTEGRATION and PROD.
- The source is NOT polluted by the BB-modification in INTEGRATION and PROD, and that proves that the modifications for a specific baseline-task (the AA task) are transferred to production separately from modifications for other tasks (the BB task).
It is possible to define three default object-locations per system and per environment: the object-location, the bindobject-location and the alternate-location. (These names came about for historical reasons; technically there is no difference between the three locations. They could also have been named object-location-1, object-location-2 and object-location-3.)

The three different object-locations are per environment. If you need more than three different locations, then you must do that via an extra system.
The 'object-inherit option' of the file-type identifies which of the three object-locations must be inherited by a file with that file-type. (This means that an object-location is physically linked to a file: the object-location of a file is shown on the object-tab of the file-properties.)
Example:

Consider system AAA with the following object-locations:
- Object-location = PRODUSCD on CODEPK
- Bind object location = SURE on SUREPK
- Alternate location = * on CODEPK

And the following file-types:
- File-type HAS-OBJECT inherits the object-location.
- File-type WFL inherits the alternate location.
- File-type BINDMODULE inherits the bindobject location.
- File-type NO-OBJECT inherits no object location.

As a result:
- All files with file-type HAS-OBJECT now have object-usercode/pack PRODUSCD on CODEPK.
- All files with file-type WFL now have object-usercode * and object-pack CODEPK.
- All files with file-type BINDMODULE now have object location SURE on SUREPK.
- All files with file-type NO-OBJECT don't have an object-usercode/pack and won't be compiled.
- A newly added file with file-type WFL automatically gets object-location * ON CODEPK.
- Etcetera.

End example.
If one of the
default object-locations of a system is changed, then the inherited
object-locations of the files of that system are changed too.
If the
inherit-option of a file-type is changed, then the object-locations of the
files of that file-type are changed too.
If the system or
file-type of a file is changed, then the object-location is changed too.
These three rules
enforce that an inherited object-location is always equal to the default
object-location of the system.
It is possible to
overrule this inheritance, by defining a deviation object-location for a file.
If that is done, then the object-location of that file will NOT be updated
automatically if the system-properties or file-type properties are changed.
Security.
Notice the two authorization bits 'Update' and 'Update object fields' (on the security map: button SURE authorizations):
- A user is allowed to update the file-properties if he has security-bit 'Update' set.
- A user is allowed to update the object-related file-properties (on the object-tab) if he has both security bits ('Update' and 'Update object fields') set.
The correct
object-location is always automatically set when a file is newly added,
independent of the two security bits.
The
combination of this all leads to a secured object-environment, and at the same
time flexibility when a new file is added.
Give a programmer
the authorizations to add a file (security:'Enter') and to modify the
file-attributes (security 'Update') but NOT to modify the object-related fields
(security 'Update object fields'). The programmer can now add a new file, and
the correct object-location (depending on the file-type/system) is
automatically linked to the file. The programmer cannot change the
object-location, so the object-environment remains unharmed.
Transfer option per environment.
The environment-transfer option (Environment/PRODUCTION/properties/"Sure Transfer") is used to identify how the library-maintenance copy to a specific pack/host must be done: a copy via tape or a pack-to-pack copy (via BNA).
Example:
Consider files
PROG/1 and PROG/2, both with object-location (OBJ1) ON SYS1PACK ON HOST1.
If one or both of these files are compiled during the batch, then the new objects have to be copied to the correct location: (OBJ1)<object-name> ON SYS1PACK ON HOST1.
This copy goes by default via a tape, and the tape name will be OVZ<packname>'_'<hostname>. In this example the tape name will be: OVZSYS1PACK_HOST1.
If combination
SYS1PACK/HOST1 is entered in the environment-transfer options then the copy
will be done directly from pack to pack via BNA.
Summarized:
The default
object-locations of a system are used to setup object-locations. A newly
entered file inherits such a default object-location.
The
environment-transfer options are used to identify the type of
library-maintenance copy.
How come SURE compiles copylib's and wfllib's and gives them syntax errors? Do we have something configured wrong? Also, we are getting errors (abort) on DFDS modules.
If a file
has an object-location then it will be compiled. The object-location is the
trigger for RESPECT/SURE/COMPILE to compile that source.
There are a few exceptions to this rule, based on the filekind of a source:
- If a file is a DATA, SEQDATA or TEXTDATA file, and it has an object-location, then it won't be compiled, but copied from the repository to that object-location.
- If a file is a JOBSYMBOL and in use as a copy-file, and it has an object-location, then it won't be compiled but copied to that location.
- If a file is a JOBSYMBOL and NOT in use as a copy-file, and it has an object-location, then it will be compiled. (In use means: another source references this source as a copy-file.)
- For all other files: if the file has an object-location, then SURE will try to compile that source, even if it is a copy-file.
The solution:
give the copy-files a file-type with option 'never use object-location'.
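For example (the file-type name COPYLIB is hypothetical): define a file-type COPYLIB with option 'never use object-location' set, and link the copylib's and wfllib's to that file-type. Those files then never get an object-location, so RESPECT/SURE/COMPILE no longer tries to compile them, while the sources that do inherit an object-location are still compiled.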
A new repository is installed with the following employee functions and authorization schemes:

Employee function SURE-ADMINISTRATOR
- A SURE administrator is allowed to perform all SURE functions, except to change the authorization maps.
Authorizations:
- All authorization bits are set, except bit Global: security.

Employee function SECURITY-ADMINISTR
- A security administrator is allowed to change the authorization maps.
Authorizations:
- Global: security.

Employee function OPERATOR
- An operator starts the evening batch.
Authorizations:
- Global: monitor, status
- Sure: operator

Employee function DEVELOPER
- Developers are allowed to checkout, modify and check-in files, etc.
Authorizations:
- Global: compile, quick-fix, status, transfer-from
- Sure: copy, examine, find, get, list, request, resequence, reset, save, update, write
- Task: reactivate, update field solution

Employee function SENIOR-DEVELOPER
- A senior developer is responsible for the total set of sources of the application, and the links to drivers and start-jobs.
Authorizations:
- Global: options, purge, recover
- Sure: activate-queue, compile-prod, deactivate-queue, delete, driver, enter, guard, load, multi-compile, object-security, patch, purge-queue, rename, replace, startjob, updateOK, update-object-fields

Employee function ANALYST
- An analyst must be able to look into files and to add and approve tasks.
Authorizations:
- Sure: copy, find, list, view
- Task: add, approve/deny, change-to-solved, delete, reactivate, task-dependencies, update solution

Employee function PROJECT-LEADER
- A project leader is responsible for the status of the tasks.
Authorizations:
- Global: create-overlap, link-team, organization, transfer-block, -from, -to, -delink, -move, options
- Sure: assign, request
- Task: variable-authorizations, power-attribute, task-groups
Please
notice that this is only an example. You can change the authorization schemes
to your own needs, and add or delete employee-functions.
Notice that it is possible (and often necessary) to link multiple employee functions to a usercode. In the example above:
- A senior developer must also have employee function DEVELOPER, otherwise he cannot checkout and check-in any files.
- A project-leader may also have employee-function ANALYST.
In many
organizations, a strict separation of responsibilities is a requirement. SURE
meets this requirement because it is possible to define authorizations per
employee function, as shown in the example. On the other hand: If all above
mentioned employee-functions are linked to one usercode, then that person is
allowed to perform all functions in SURE.
By the way, I found nothing happened
while I tried to compile via function <Right click on file>/Compile/SURE.
There are two ways
of compiling:
Answer
to question ‘by the way…’:
What you did (function <right click on file>/Compile/SURE) adds the file manually to the compile-queue (I explained this above under point 2). You can see the contents of the compile-queue via Tools/Compile_Interface.
I understood that SURE could start a
batch compilation for a task, which involved some program files and copy files
and wfl-files. What must be the contents of such a batch job and how must I
start it? Can the batch job be scheduled?
Each SURE environment has its own job for the SURE evening batch. Parameters for the SURE evening batch must be entered on tab-sheet 'SURE batch' of the environment-properties dialogue (right click on the environment-name and choose 'Properties'). The SURE-batch-directory must be entered on that screen, plus the time that the evening batch has to start. The evening job must be generated via button 'Generate WFL'. The job is called WFL/<environment>/SURE and is placed in the SURE-batch-directory.
The SURE batch can
be started in two ways:
Chapter ‘Compilation
and Object files’ of the manual gives a detailed explanation about the
compilation process of the SURE evening batch.
SURE automatically re-compiles all sources that use
the include file.
A changed or transferred source is automatically
placed into the compile-queue, and you can check that queue via function
Tools/'Compile Interface'/Queue/Detail/ToCompile.
By default, the object-name of a source is
'OBJECT/'<source-name>. If no specific object-name is defined for
a source, then the object will get the default name.
If a specific object-name is defined for the source
then the compiled object will get that name, but still with the prefix
'OBJECT/'.
If a specific object-name is defined for a source and
that object-name starts with '$', then the compiled object will get that name,
but without character '$'.
Example with file PROG/ABC:
The name of the compiled object if no specific object-name is defined:   OBJECT/PROG/ABC
The name of the compiled object if the defined object-name = 'ABC':      OBJECT/ABC
The name of the compiled object if the defined object-name = '$OBJ/ABC': OBJ/ABC
The default name-standard for object-names is
OBJECT/<source-name>, but many customers have another, unique, name
standard for their objects. It is possible to define a site-specific
library-procedure that calculates the object-name from a source-name, and (vice
versa) the source-name from an object-name. The default name-standard for
object-names (which is OBJECT/<source-name>) is then replaced by a
site-specific default name-standard.
With this method, it is not necessary to define a
deviating object-name for each specific source, because the object-names are
determined by the site name-standard that is written in the library. However,
it is still possible to overrule the site name-standard for an individual file
by defining a deviating object-name for that file.
If you want to define a site-specific default
name-standard for object-files then the name of that library must be entered in
the global-options dialogue:
<Right click on Environment>/'Global options'/Tab:Sure/Field:UserLibrary.
Notice that the full name of the library must be entered.
In the PowerPoint presentation called ‘Configuration’,
you will find details how to write (and to declare) a library that determines
object-names.
Binding is done via DRIVER and START-JOB relations.
A source is linked to a driver via a DRIVER relation. The driver is linked to a job via a START-JOB relation. The customer determines the content of the start-job, but the following is mandatory: the start-job requires two parameters, and at the end of the start-job, program RESPECT/SURE/FINISH must run. This program reports the result of the start-job back to SURE.
If a program is compiled via RESPECT/SURE/COMPILE during the SURE evening batch, and that program has a DRIVER relation, then the start-job of that driver will be started at the end of all compilations, and RESPECT/SURE/COMPILE will wait until the result of the start-job is reported back.
A program can be linked to a driver via
File-Properties/Configuration/Driver
A program can be linked to a start-job via
File-Properties/Configuration/Start-Job.
Please refer to chapter ‘Binding of Programs’ in the
SURE manual for detailed information.
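As an illustration, a minimal start-job skeleton in WFL could look as follows. This is only a sketch: the job-name and parameter names are hypothetical, the binding statements are site-specific, and the exact parameters that SURE passes to the job (and that RESPECT/SURE/FINISH expects) are described in chapter 'Binding of Programs'.

BEGIN JOB START/MYDRIVER (STRING PARAM1, STRING PARAM2);
    % Site-specific bind/deploy statements go here.
    % Mandatory last step: report the result of this start-job back to SURE.
    RUN OBJECT/RESPECT/SURE/FINISH;
END JOB.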
As you know, a database compilation could possibly trigger a reorganization, and also impact other programs. So will SURE handle these procedures automatically if I just check-in the DB-dasdl for compilation? Are the following functions done by SURE: compile the DASDL, compile the BUILDREORG, compile the REORG program, execute the REORG program, and compile the programs that are impacted by the DB?
This time I must disappoint you because SURE does not do anything
concerning database reorganization.
The dasdl-compilation, buildreorg, reorg-compilation
and execute-reorg are all manual actions for the following reasons:
- Sometimes it is not possible to perform the reorganization in one step. In that case, an intermediate compilation/reorganization has to be done.
- Many sites have specific names for the database software (the dasdl-compiler, etc.) with a DMS-version number in the name.
- The application database can be placed anywhere on the system, perhaps even on another host.
- The exact moment that the reorganization program must be started is often very critical.
- Etcetera.
Almost each site has its own procedures when database
reorganization is involved.
Compiling the programs that were impacted by the
database reorganization is done by SURE as follows:
1. Put the
new description file at the right place where it is visible for the SURE
evening batch.
2. Put files
that use the changed database and/or changed dataset manually into the
compile-queue with function 'Multi-compile'. The multi-compile dialogue has a
button 'Query' on it. This button must be used to select a group of files via a
query (for example the query DATASET(DS1) OR DATASET(DS2) selects all files that
use datasets DS1 or DS2). All files that are selected via the query are then
added to the compile queue.
3. Start the
evening batch.
After reading chapter 'Sumlog Information' in the user guide, I have a feeling that the main function of RESPECT/SURE/LOG is to help monitor violations.
What I am thinking is that when we begin
to implement SURE in a customer site, we will convert the existing procedures
gradually. For instance, we would like to implement SURE with a small
application system, so we load its entire source into INFDB. And then, we teach
the team member from now on to follow the methodology of SURE and use the tools
of SURE, from the beginning of a task to the end. But if a certain member wants to take his chance to sneak a program into the production environment, can we detect this behavior afterwards, or can we prevent it proactively?
The current situation is that program RESPECT/SURE/LOG
reports which objects in the production environment are not placed there
via SURE. In this case, the invalid object is already in production. We have a
library-function that checks the version of an object at the moment that the
object is started. This library function has to be called by a program, and
unfortunately, we don't have such a calling program right now. This program can
take the appropriate actions if an invalid object is started.
I don't think that it is very difficult to write such
a program. The functionality must be:
- Wait until a new object is started (perhaps there is an MCP function for this).
- Check the timestamp of that object in SURE.
- Take appropriate action in the case of an invalid object.
You are right: an important function of
RESPECT/SURE/LOG is reporting invalid objects.
Another important function is sampling the run-time statistics of the objects that ran. If an object always runs for 2 minutes, and today it runs for more than an hour, then there might be something wrong. The sequence-number of the sumlog is kept together with the statistics of each object session. This is handy for the developer, because he does not have to search anymore for the correct sumlog in the case that he wants to have details from the sumlog about a program session.
The run-statistics can also be used for planning purposes: if an object runs each month 5 minutes longer, then you can calculate when you must do something about that.
We also have a batch-program that reports how often objects have run during the last 15 months (using the run-statistics that are loaded in SURE).
I loaded all files of a local system
into SURE. The production versions of
the OBJECT files should be *OBJECT/= ON DISK.
The evening batch of the DEVELOPMENT environment compiled the files and
waited for a tape called OVZDISK.
Where did it get the name OVZDISK?
Why is
it trying to copy to a tape?
Program
RESPECT/SURE/COMPILE compiles the sources that are loaded and places them into
the transfer-queue.
Program
RESPECT/SURE/TRANSFER uses this transfer-queue: All objects mentioned in the
queue are transferred to the correct object-location (= <object-usercode>
on <object-pack> at <object-host>)
RESPECT/SURE/TRANSFER
generates a job called SURE/TRANSFER/<object-pack>. All objects that must
be transferred to that object-pack are handled by that job. A transfer job will
be generated for each object-pack (if objects have to be transferred to that
pack).
The actual transfer
of the objects to the object-location goes via library-maintenance (a COPY
statement in the job).
The original (and
still default) method to transfer the objects to the object-location is via a
transfer-tape. The objects are then copied from the SURE-pack to a transfer-tape,
and the operator has to copy the objects from the transfer-tape to the
object-pack (via COPY = FROM <transfer-tape> TO <object-pack>). The
name of the transfer-tape is OVZ<object-pack>. OVZ is an abbreviation of
the Dutch word ‘overzet’ that means ‘transfer’.
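For example (using the pack name from the earlier example; the exact statement may differ per site), the operator copy from the transfer-tape to the object-pack could look like:
COPY = FROM OVZSYS1PACK (KIND=TAPE) TO SYS1PACK (PACK)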
It is possible to
indicate per object-pack (and per environment) that the library maintenance
copy has to go directly from the SURE-pack to the object-pack and not via an
OVZ-tape.
This option must be
set as follows:
- Right click on the name of the applicable SURE environment, and choose 'Properties' from the sub-menu.
- Click on tab 'SURE transfer'.
- Enter the name of the object-pack in the column 'Disk family name'. If you leave the corresponding field 'BNA hostname' (on the same line as the disk-family-name) empty, then SURE assumes that the <object-pack> is one of the packs of the current host; otherwise a BNA-copy is started for each family/host combination.
Examples:

Disk Family Name = PACK1, BNA Hostname = <empty>:
All files that have object-pack = PACK1 and object-host = empty are copied directly from the SURE batch pack to PACK1(PACK). In this case it is assumed that PACK1 belongs to the current host. N.B. This line does not select files with an object-host.

Disk Family Name = PACK1, BNA Hostname = HOSTA:
All files that have object-pack = PACK1 and (object-host = HOSTA or object-host = empty) are copied directly (via BNA) to PACK1(PACK,HOSTNAME=HOSTA).

Two lines: Disk Family Name = PACK1 with BNA Hostname = HOSTA, and Disk Family Name = PACK1 with BNA Hostname = HOSTB:
All files that have object-pack = PACK1 and object-host = empty are copied directly (via BNA) to PACK1(PACK,HOSTNAME=HOSTA) and to PACK1(PACK,HOSTNAME=HOSTB). Files that have object-pack = PACK1 and object-host = HOSTA are only copied to PACK1(PACK,HOSTNAME=HOSTA). Files that have object-pack = PACK1 and object-host = HOSTB are only copied to PACK1(PACK,HOSTNAME=HOSTB).

Disk Family Name = <empty>, BNA Hostname = HOSTB:
All files that have object-host = HOSTB are copied directly (via BNA) to <object-pack>(PACK,HOSTNAME=HOSTB).
Why is the disk-to-tape-to-disk
method still the default?
If somebody enters
by accident a wrong object-pack-name then that object won't be copied
automatically to the wrong place, but the transfer starts waiting for an
OVZ-tape. In that case you still have the possibility to fix the problem and to
make sure that the object is copied to the correct location.
Chapter 22.12 of the
manual gives extra details about the object-transfer process.
SURE starts all the compiles at priority 49. Our standard is 40. How can I configure SURE to start the compiles at priority = 40?
I have WFL MODIFY-ed (SURE)OBJECT/RESPECT/SURE/COMPILE; PRIORITY = 40;
You can define a system queue that must be used for the SURE batch job. The programs in the batch job will inherit the priority from the system queue. The actual compilation will have a priority 1 less than the priority of RESPECT/SURE/COMPILE.
You can define a system queue for the batch job as
follows:
Right-click on the environment-name / Choose
properties / go to tab: SureBatch / field JobQueue (at the bottom of the
screen)
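A short worked example (queue name and numbers hypothetical): if the job queue entered in field JobQueue runs its jobs at priority 41, then the SURE batch job and RESPECT/SURE/COMPILE inherit priority 41, and the actual compilations run at priority 40, which matches the standard of 40 mentioned in the question.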
Modifying an object is a possibility, but then you have to modify the object each time you receive a new version, and that is unhandy.
SURE offers
two ways of compiling in order to get the object code for test: one is
Compile/Local; the other is Compile/Schedule SURE Compile.
If we check out a copy-file, modify the content, and then choose "Schedule SURE Compile" for the main symbol file, the compilation will be handled by the SURE evening batch, which retrieves the original copy-file from the repository in the process of compilation. The outcome will not be what the programmer expects.
Please clarify whether it is intended to work this way or not.
It is intended.
The following example describes
the exact procedure:
Version 3.1 of a copy-file
is stored in Sure and the developer does a checkout of this version. Now there
are two possibilities:
A. The checkout was done with option 'Local' to download the source to the PC. Version 3.1 is file-transferred to the PC and a copy of version 3.1 remains on the mainframe in Cande under the programmer's work-usercode. The programmer modifies the work-version of the copy-file on the PC with a local PC editor (like Notepad), and he can use function UPLOAD to copy his patches from his work-file on the PC to the copy version in Cande.
Therefore, after using function
upload, the Cande-version and the local-PC-version of the copy-file are equal.
After
using function check-in, the modified copy-file is stored into Sure as version
4.1. The copy-file is copied to the work-directory.
B. The checkout was done without option 'Local'. The developer is going to modify the copy-file on the mainframe via Cande, NX/Edit or System/Editor (U ED). The programmer modifies the work-version of the copy-file and he can use function SAVE to save his work-file and to replace his Cande source with his modifications. (Notice that in this case NX/Edit is not categorized as a local editor, but as a mainframe-editor with a Windows look-and-feel.)
Summarized: The
modifications are put in the Cande source after an UPLOAD when you use a local
editor or after a SAVE when you don't use a local editor.
Program
RESPECT/SURE/COMPILE handles the scheduled compilations. It retrieves the original
copy-files from SURE, but if RESPECT/SURE/COMPILE runs for the develop
environment, then it first checks if the copy-file is checked-out by a
programmer. It uses the checked-out copy-files from the programmer's Cande
work-locations, and the copy-files that are not checked-out are retrieved from
SURE. So all the programmer must do is SAVE or UPLOAD his work-file to ensure
that the Cande-source contains his last modifications.
Notice also option 'Allow use copy-files from directory *'. If this option is set, then RESPECT/SURE/COMPILE first checks if the copy-file is checked-out by a programmer. If it is not checked-out, then it checks if the copy-file is resident in the global directory *. If it is not resident under *, then it retrieves the copy-file from SURE. This option can be set per environment via <environment-properties>/Tab:"SURE batch". Usually this option is set for the develop environment and reset for the other environments.
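Summarized as a lookup order (usercode names hypothetical), when RESPECT/SURE/COMPILE needs copy-file COPY/XX in the develop environment:
1. The checked-out work-version, for example (DEV1)COPY/XX, if a programmer has the copy-file checked out.
2. *COPY/XX from the global directory, if option 'Allow use copy-files from directory *' is set and the file is resident there.
3. Otherwise, the version that is stored in SURE.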
The ABORT-START-JOB is an extra relation and it means that the start-job did not come to a correct end. The COMPILE-STATUS(SYNTAX) or COMPILE-STATUS(ABORT) is then added for a more detailed indication.
The reason for ABORT-START-JOB is to notify the user that the problem has something to do with the start-job. Notice that it is possible that you have a source that is normally compiled, and at the end of the compilation something extra is done in a start-job. Suppose that the start-job fails with a COMPILE-STATUS ABORT or a COMPILE-STATUS SYNTAX. In that case, it is handy to know that the cause of the problem is in the start-job, and therefore we add an extra COMPILE-STATUS(ABORT-START-JOB). This relation is always removed when a new compilation is started, so if this relation is present, then a problem occurred at the last compilation.
We used the example in the tutorial to load and
compile files from usercode SURE_DEV into SURE. But now we want to load files
from another usercode. Is that possible?
Loading a directory
of files in SURE is usually only done during the installation period of SURE.
In that case the customer already has many application programs, and those programs have to be loaded in SURE via a batch method. When all those files are loaded in SURE, the batch-load-process won't be used anymore (usually), because new files are then entered in SURE one by one via the 'Add New File' procedure at the moment that a developer has to create a new file.
We think that the
function 'load files in SURE' is more or less a conversion function: the
customer converts from his old style of software-configuration-management to
the SURE-style.
The way we use this
function is as follows:
- Copy a directory of files (that must be loaded in SURE) to your source-usercode (mentioned in the SURE browser).
- Load those files in SURE.
The source-usercode
that is given in the SURE browser (in your case (SURE_DEV) ON PK1) depends on
the default Cande-work-environment that is linked to a SURE-environment. This
default Cande-work-environment can be defined via the
Environment-properties-dialogue on the first tab-sheet.
If the default
work-environment is a usercode on pack (e.g.
(AAA) ON PK1) then that usercode/pack will always be mentioned as
source-usercode in the SURE-browser (no matter with which usercode you logged
on to SURE).
If the default
work-environment is the global directory on a pack (e.g. * ON PK1), then your log-on usercode will be
mentioned as source-usercode in the SURE-browser. In that case: if each
developer has his own personal usercode to log-on then that usercode will also
be his Cande work environment.
Is there a command that says "COPY
THE LATEST VERSION OF ALL FILES IN SYSTEM SYS1 OF ENVIRONMENT ACCEPTANCE AS
(ACC)SYS1/= TO MYPACK".
There are several thousands of files so
this would have to be a batch operation. We would not be able to do them
individually.
There is a batch program RESPECT/SURE/DUMP that can be
used for this purpose.
The program requires two parameters that are used to
select the group of files that must be dumped from the repository to disk.
Examples:
WFL RUN OBJECT/RESPECT/SURE/DUMP("COPY-FILE","<name
of a copy-file>")
WFL RUN OBJECT/RESPECT/SURE/DUMP("LIBRARY","<name of a library>")
WFL RUN
OBJECT/RESPECT/SURE/DUMP("SYSTEM","<name of a
system>")
Etc.
You must select the
SURE environment via the TASKSTRING attribute:
WFL RUN
OBJECT/RESPECT/SURE/DUMP("<class>","<asset>");TASKSTRING="<environment>"
In your case the
full statement will be:
WFL RUN
OBJECT/RESPECT/SURE/DUMP("SYSTEM","SYS1");TASKSTRING="ACCEPTANCE"
Please notice that the dumped files will not be prefixed (with SYS1 or something like that). The dumped files are placed on disk under the usercode where you ran the dump-program. I advise you to clean up that usercode as much as possible before you start the dumping action.
Is it
possible to use NX/Edit in local mode on the PC? So the file(s) could be edited
on the PC just like the other editors? Most of the time we will use NX/Edit in
client/server mode, but sometimes it can be handy to edit a source in local
mode.
In the current release of SURE it is only possible to
use NX/Edit in client/server mode for the following reasons:
1. Sources on the Unisys platform contain a sequence number. This is no problem if you are editing your files on the mainframe via Cande or via the Unisys SYSTEM/EDITOR (U ED), because those editors are well designed for those sequence numbers. The sequence number becomes a problem if you are editing a file locally on your PC via a PC editor, especially when the sequence number is placed at the end of the physical record (such as Algol: positions 73 - 80). Editing on the PC is then unhandy, because the sequence number at the end of a line will be shifted when you are inserting or deleting characters on that line.
We solved
that problem as follows: When a file is checked-out and copied to the PC, then
the sequence numbers are converted to the beginning of each record. When a file
is checked-in and file-transferred from the PC to the mainframe, then the
sequence numbers are placed back from the beginning of the record to the
original place. This means: The position of the sequence number in the source
on the PC can differ from the position of the sequence number in the source on
the mainframe (in algol, job, seqdata, etc files; not in cobol files because
the sequence number in cobol is at the beginning of the record).
Now a new
problem occurs with the NX/Edit editor, because that editor expects the
sequence numbers at the original place and not always at the beginning
of the records (just like Cande and the SYSTEM/EDITOR). Our solution for that
problem was: leave the source on the mainframe when the local editor is NX/Edit
(and skip the sequence number conversion). The source will then be
file-transferred to the PC by NX/Edit itself in the correct way that NX/Edit
expects. This requires that NX/Edit starts in client/server mode.
2. More or less the same story applies for the mark-id in the source. The mark-id is placed at the end of each record and that is very unhandy during the editing phase. Therefore we discard the mark-id when we file-transfer the source from the mainframe to the PC.
3. When the source is checked-out, the original version of that source is copied to the PC in the user's source-directory (= source-file) and to the user's work-directory (= work-file). That same original version is also placed under Cande on the mainframe. The changes to the source are made in the version in the work-directory (= the work-file). When the file is checked-in, the work-file is compared with the source-file, and the differences are file-transferred to the mainframe and merged into the original version that is available under Cande. This is done for performance reasons: most of the time it is much faster to transfer only the differences instead of the total changed source. Notice that it is also possible to send the complete source when the file is checked-in, but that is not the default mode.
The
algorithm to create the difference-file expects that the sequence numbers are
at the beginning of the record, and this algorithm does not expect a mark-id in
the source. But that is in conflict with the NX/Edit mode of working (see
earlier).
The following
problems also occur (but have nothing to do with local mode or
client/server-mode; they are just NX/Edit problems):
I transferred a task from environment DEVELOP to ACCEPTANCE, but I can't
find the transferred task in environment ACCEPTANCE. How can I select a task
after transferring it successfully? Or do I need to specify the task name by
opening the Select Task folder?
The task must be
available in folder 'Life Cycle'\'Task Busy' on the ACCEPTANCE environment.
Obviously you can also select it via the 'Select\Task' method, but then you
must remember the task-name.
All this depends on
the option 'Task gets status solved'. This option can be set per environment
via folder <environment name>\Properties\<First tab>.
- If this option is set for an environment, and a task is transferred to that environment, then the task gets status SOLVED, and then the task won't appear in the life-cycle\task-busy list. A task with status SOLVED cannot be transferred anymore to a further environment.
- If the option is not set for an environment, and the task is transferred to that environment, then the task gets status <environment-name> (in your case status ACCEPTANCE), and the task will appear in the life-cycle\task-busy list of that environment.
Another aspect is the project of the task. A task is only visible for a user if that user has the task-project in his project-list, or if the user does not have any project in his project-list.
Example: If user
SIMON is linked to one or more projects, but not to project PRJ1 then he won't
see tasks of project PRJ1 (he will only see tasks of his own projects). The
project list is used as a filter.
You can see the
project-list of a user if you click on the + symbol that is placed in front of
the user-name and then on 'project', or if you open the user properties.
The flow is as follows:
2. If a task has status ENTERED (immediately after it is added in SURE), it appears in folder 'Tasks to assign' of the task-environment.
3. If a task is assigned (by making it current, or by assigning it to any person), the status is changed to the <task-environment>. It disappears from folder 'Tasks to assign' and it appears in folder 'Tasks busy' of the environment that is indicated by the task-status.
4. If a task is made ready, it gets sub-status READY (the status does not change). It disappears from folder 'Tasks busy' and it appears in folder 'Tasks ready' of the environment that is indicated by the task-status.
5. If a task is transferred to the next environment, then the task-status is changed to that next environment, and sub-status READY is removed. The task now appears in folder 'Tasks busy' of the environment that is indicated by the task-status (= the transfer destination environment).
6. If a task is transferred to the last environment, then the task-status is changed to SOLVED. The task does not appear anymore in any folder, but you can still select it via the 'select task' folder, or via a macro.
N.B.: Step 4 is optional. You can also transfer a 'busy task' if all files are checked-in.
Example.
Suppose that you have a repository with three environments: DEVELOP, PRE-PROD and PRODUCTION.
ad 2: A new task is added for environment DEVELOP. The task-status = ENTERED, the task-environment = DEVELOP. The task is in folder 'Tasks to assign' of environment DEVELOP because that folder contains all tasks with task-status = ENTERED and task-environment = DEVELOP.
ad 3: The task is assigned to user STEVE (because he made the task current). The task-status = DEVELOP, the task-environment = DEVELOP. The task is in folder 'Tasks Busy' of environment DEVELOP, because that folder contains all tasks with task-status = DEVELOP and NOT sub-status = READY.
ad 4: The task is made ready. The task-status = DEVELOP, the task-environment = DEVELOP, the sub-status = READY. The task is in folder 'Tasks Ready' of environment DEVELOP, because that folder contains all tasks with task-status = DEVELOP and sub-status = READY.
ad 5: The task is transferred to PRE-PROD. The task status = PRE-PROD, the task-environment = DEVELOP, no sub-status. The task is in folder 'Tasks Busy' of environment PRE-PROD, because that folder contains all tasks with task-status = PRE-PROD and NOT sub-status READY.
ad 4: The task is made ready. The task-status = PRE-PROD, the task-environment = DEVELOP, the sub-status = READY. The task is in folder 'Tasks Ready' of environment PRE-PROD, because that folder contains all tasks with task-status = PRE-PROD and sub-status = READY.
ad 6: The task is transferred to PRODUCTION. The task status = SOLVED, the task-environment = DEVELOP. The task does not appear in any folder.
ad 1: Notice that the task-environment is not changed. It remains all the time DEVELOP. This is important because it is only possible to work on a task if the task-status is equal to the task-environment and when you are logged-on to that environment (in this example DEVELOP). If the task is not yet assigned, or if the task is transferred, then it can't be used anymore for file-modifications.
It is possible to re-activate a task. This sets a task back to a previous environment.
For example: A reactivation after step 5 sets the task-status back to DEVELOP, and the task appears again in the 'Task Busy' list of DEVELOP and disappears from the 'Task Busy' list in PRE-PROD.
Sometimes
it is necessary to change an include file and some of the programs using that
include file. What is the best way to find these programs?
There are several methods:
- Make the include file current in the SURE browser and click on the + symbol to open that folder. Then open the folder 'used by' and you will see the programs that use this include-file. The attributes that are listed under the 'used by' folder are variable and can be defined by the customer via 'Configuration/References'.
- Execute a query with the following (advanced) query expression: COPY-FILE(<name of include file>). This will place all the programs that use this include file in the query-folder.
The PowerPoint presentation about queries and macros
contains a list with all technical names of the SURE-file-attributes and
SURE-task-attributes and a short description of the purpose of each attribute.
The usage of these attributes in the queries/macro's is also explained.
Please notice that SURE automatically recompiles the
programs that use a copy-file when that copy-file is checked in. This is done
by program RESPECT/SURE/COMPILE during the SURE evening batch.
Notice also that the query function can be used to
select files that have to be scanned by the SURE-find function. The scanned
files that contain the find-target are marked and can be selected easily via a
query expression: FOUND(<my-userid>).
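For example (include-file name and user-id hypothetical):
COPY-FILE(COPY/PAYROLL) places all programs that use include file COPY/PAYROLL in the query-folder.
FOUND(JOHN) selects the scanned files where the find-target was found for user JOHN.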
Then I do a checkout of one of these
files, modify something and do check-in, and now the include-files are suddenly
there. How is that possible?
When a file is
loaded in SURE, it is placed in the examine queue. When a file is checked-in,
it is placed in the examine-queue, but it is also quickly scanned for copy-file
statements. SURE examines a file automatically after it is checked-in, because
perhaps a new copy-statement is written in the source.
I think that I understand what happened:
1. You loaded the sources, and the load placed them in the examine-queue.
2. The examine program did not run yet, so the copy-file references were not yet known.
3. You did a checkout and a check-in, and that triggered an automatic examine.
A source is placed in the examine-queue when it is changed in an environment:
- After a check-in: placed in the examine-queue + quick scan for copy statements.
- After a load: placed in the examine-queue.
- After a transfer: placed in the examine-queue + quick scan for copy statements.
Program
RESPECT/SURE/EXAMINE reads the examine-queue and examines all those files. Each
examined file is removed from the examine-queue, so at the end of the run the
examine-queue is empty. The program runs in the standard SURE batch, but you
can run the program manually under Cande as follows:
RUN OBJECT/RESPECT/SURE/EXAMINE("");TASKSTRING="<environment>".
You can
start the examine process for a specific program via
<File Properties>/Configuration/Examine.
Notice that
it is important that changed files are examined. If you are not starting
the SURE batch frequently, then you should run the examine program manually
every now and then.
Make the file current in the SURE browser and click on
the + symbol to open that file-folder. Open then folder 'Delta Files'. Right
click on the delta-file to which you want to roll back and choose 'Copy' from
the menu. This copies a rolled-back version of that file to disk under your
usercode. Notice that the file is not rolled-back in SURE. To do so you must
check-out the source in the regular way and overwrite the checked-out version
with the rolled-back version.
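For example (version numbers hypothetical): to roll PROG/A back from version 4.1 to version 3.1, copy the 3.1 delta-version to disk via 'Copy', check out PROG/A in the regular way, overwrite the checked-out work-version with the copied 3.1 version, and check the file in again. SURE then stores the rolled-back content as the next version of the file.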
Are the BD-files shown under "Printer Output" only those that are created by SURE? If not, how do we transfer other BD-files to that folder?
The list of
BD files that appear in folder 'Printer Output' depends on the contents of the
Printing System. First the print-requests of the user are listed via command
'PS SHOW U <usercode>' and then the backup-files of a print-request are
listed via command 'PS BDIR REQUEST <print request number>'. If a
backup-file is not known in the Printing System then it won't appear in folder
'Printer Output'.
A
print-request appears in the Unisys Printing System at print disposition time.
Usually that is at the moment that a session is closed (SPLIT in Cande or EOJ
in a job). You can copy a print-request via the backup processor.
The
usercode that is used in the 'PS SHOW U <usercode>' command is equal to
the log-on usercode. But if you click on the + sign before a usercode-name (for
example via folder 'Organization/User/<usercode>' or via a team list)
then there will also be a sub-folder 'Printer-Output' for that user, and if you
open that printer-output then the BD-files of that user are given.
If you transfer a task to acceptance, then you
also transfer all the sources that are linked to that task. After the transfer
the task is not available in develop anymore.
It is possible to reactivate the task so that the task
becomes again available in develop. But if you reactivate the task, the
changed files are not automatically rolled-back in acceptance. So
the changed files remain in acceptance when the task is reset to develop.
We have the following remarks about this:
I give an example
with file PROG/A.
Situation 1: PROG/A
has the following versions:
In DEVELOP:    Version 3.1, linked to task T1
In ACCEPTANCE: Version 2.1
In PRODUCTION: Version 2.1
Situation 2: After
task T1 is transferred to acceptance:
In DEVELOP:    Version 3.1
In ACCEPTANCE: Version 3.1, linked to task T1
In PRODUCTION: Version 2.1
Version
3.1 of PROG/A is now being tested in the acceptance environment.
Situation 3: An
error was found during the test in acceptance and task T1 is
re-activated:
In DEVELOP:    Version 3.1, linked to task T1
In ACCEPTANCE: Version 3.1, linked to task T1
In PRODUCTION: Version 2.1
The
reactivate was done with option 'request connected files for the user that
modified the files'. This option is on the reactivate-dialogue and re-links
PROG/A automatically to task T1 in develop.
The
error-version 3.1 is still in acceptance, but nothing else happens with
that file in that environment. Notice that version 3.1 will probably still be a
'better' version than the old version 2.1, especially when other files were
also adapted because of this task. Also notice that acceptance is still a test-environment. It is not a
disaster if you find errors in this environment.
Task T1
is now active in develop so it is not possible to transfer the task to production.
Version 3.1 of file PROG/A is still active in acceptance, but it is not
possible to transfer this file-version to production, because the task
T1 is active in develop.
Situation 4: The
error is fixed in develop (PROG/A is checked-out, fixed, and checked-in):
In DEVELOP:    Version 4.1, linked to task T1
In ACCEPTANCE: Version 3.1, linked to task T1
In PRODUCTION: Version 2.1
The
error-version 3.1 is still in acceptance, but nothing else happens with
that file in that environment. It is not possible to transfer this file-version
to production, because the task T1 is active in develop and task
T1 first has to be transferred from develop to acceptance.
Situation 5: Task T1
is transferred to acceptance for the second time:
In DEVELOP:    Version 4.1
In ACCEPTANCE: Version 4.1, linked to task T1
In PRODUCTION: Version 2.1
Version
3.1 in acceptance is now overwritten with the repaired version 4.1
Situation 6: Task T1
is transferred to production:
In DEVELOP:    Version 4.1
In ACCEPTANCE: Version 4.1
In PRODUCTION: Version 4.1, and task T1 is moved to the history of PROG/A
Our
sources in develop contain ‘develop-names’ and our production-sources contain
‘production-names’. We use translate filters to change a develop-source to
production and vice-versa. Which version of a source should be loaded in SURE:
the one with the develop-names or the one with the production-names?
Many sites have all kinds of distinctions between 'develop', 'test' and 'production' in a source. For example:
- Dollar options: $ SET OMIT = NOT TEST and $ SET OMIT = NOT PRODUCTION
- Usercodes: $ INCLUDE (DEV)COPY/XX or COPY "(PROD)COPY/YY ON PRODPK"
- Database names: PAYROLLDBTEST and PAYROLLDB
We have the following requirements:
1. The 'develop' version of a source must be stored in the repository.
2. The sources stored in SURE should not contain any usercode or pack name.
Ad 1. When a source is stable (the production version of the source is equal to the develop-version), then there is only one physical version of that file available in SURE. Each SURE-environment has a reference to that same physical file.
It is very
unhandy for a programmer if the source that is stored in SURE does not contain
‘develop’ references: When he checks the source out of SURE, then he wants to
start working on it immediately and he does not want to change all kinds of
production-names to develop-names.
When the source is checked-in, then he does not want to establish the 'production' references again (if he does not forget to do that at all).
If the SURE
source always contains ‘develop’ references, then the programmer can start
working on it immediately, and he cannot make mistakes between ‘develop’ and
‘production’ references. (Please remember that the developer himself does the
check-in and check-out of the source, not a project-leader or change-control
person who could filter the source from ‘production’ to ‘develop’ and back).
There are three reasons why a source must be compiled:
a. The file is in maintenance by a programmer and the programmer starts a test compilation (similar to the Cande COMPILE command).
b. A new version of a source arrives in a SURE-environment (for example: after a check-in in the develop environment or after a transfer to another environment) and that new version of the source must be compiled via the SURE evening batch.
c. A file that is referenced by the program is changed (a database description file or a copy-file) and therefore the program has to be recompiled.
Compilations of type b and c are always done in the SURE evening batch by program RESPECT/SURE/COMPILE. This program performs the following actions:
- Copy the files that are in the SURE-compile-queue from the repository to disk.
- Copy all the copy-files that are used by these programs also from the repository to disk.
- Compile the programs.
- Remove the sources that were copied from the repository to disk at the end of the compilations.
As stated earlier in this section: it is important that the 'develop' version of a source is stored in SURE. Therefore it is possible to define compilation-translate-tables. Each non-develop environment can have its own set of compilation-translate-tables (one per application system). If a compilation-translate-table is available, then the source will be filtered according to that table just before the compilation starts. For performance reasons we recommend keeping these tables small.
Ad 2. An important feature of SURE is the integrity mechanism. This mechanism enforces that all objects of an environment are compiled with the same set of copy-files. The integrity-mechanism is optional. For the development environment this option is usually reset, because the objects in development are never consistent anyway. For the other environments (test, acceptance, integration, production, etc.) this option is usually set. If a copy-file is changed in a non-development environment, then all programs using that copy-file are automatically recompiled by SURE. If one of the compilations fails, then none of the objects are released to the object-location. The result is that all objects in the object-location are compiled with the same set of copy-files.
The integrity
mechanism can only work correctly if all copy-files that are necessary for the
compilations are retrieved from SURE. That is why these copy-files are copied
from the repository to disk just before the compilation starts.
Suppose
that the copy-statement in the source contains a usercode. The correct
version of that copy-file is copied from the repository to disk where it is
available for the compiler, but the compiler will not use that correct
version, but some unknown version from the usercode that was used in the
copy-statement. Notice that that will probably be a develop version of the
copy-file because the source only contains ‘develop’ references. This is why
the source should not contain any usercodes or pack-names.
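A small illustration with the earlier (hypothetical) names: a statement like COPY "(PROD)COPY/YY ON PRODPK" forces the compiler to read the file under usercode PROD on pack PRODPK, bypassing the version that RESPECT/SURE/COMPILE just copied from the repository to the compile work-location. A plain COPY "COPY/YY" lets the compiler use the repository version, so the integrity mechanism keeps working.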
I changed something in a DASDL-source and that required a recompilation of all related sources. Here are my procedures:
1. Go to Compile Interface.
2. Choose Multi Compile.
3. Use the Query function to select all related sources via: DATABASE(CSFCDB) OR DATABASE(CSFDDB) OR DATABASE(CSFHDB) OR DATABASE(CSFRDB).
4. Start Compile Job CP/NX.
My main
question is: I don’t see any record in the log of my current task regarding the
compiling action. Where is the log to prove that I have initiated the compiling
job?
The compile command and the
multi-compile command are not placed in the log of the task. The task-log
contains only actions that affect the status of the task, and the assignment
action:
- New task: the task is added in the repository and the task-status becomes ENTERED.
- Assign (or current): the task is assigned to somebody and the task-status becomes <environment>.
- Close or deny: the task is closed and the task-status becomes SOLVED.
- Activate: the task is activated again and the task-status becomes <environment>.
- Transfer: the task is transferred and the task-status becomes <destination-environment>.
The (multi) compile command
affects the file that was placed into the compile queue, and therefore that
action is placed into the log of that file.
Please notice that not all commands are logged. A global overview of the commands that are logged:
- The task-commands that change the task-status: New, Assign, Current, Close, Deny, Activate, Transfer, Reprocess-quick-fix.
- The task-commands that change the task-assignment: Assign, Current, Update (to handle by).
- The file-commands that change the assignment of the file: request, assign, check-out, undo check-out, delink, move.
- The file-commands that change the contents of the file in an environment: check-in, re-sequence, purge, remove, recover, transfer, load.
- The file-commands that change the contents of the compiled object: compile + compilation-result.
- The rename of the file (rename).
I
accidentally hit the 'Current' menu item when looking at a task. How do I undo
'Current'?
Right click on the
task that you made current and choose ‘Not current task’. That removes the
current-state from that task. The result is that no task is current.
Sometimes, when we try to do a Local Compile of a CP/NX Cobol source, the job gets into a wait situation due to a "no file" condition. The requested file is a copy-file referenced in the Cobol source.
We believe that the Local Compile process must copy the "copy files" to the work directory at CP/NX before the actual compilation, to avoid the "no file" condition. Do you agree?
It works as follows:
There is a
system-option: 'put copy-files in the work-directory after save or load'. That
option should be enabled for all your systems in the development environment,
so that the work-environment contains a complete set of copy-files. You can run
program RIS/COMPLETE/WORKENVIRONMENT("") to make the work-directory
complete.
If a source is checked-out, then SURE checks all the copy-files that are linked to that source. For each of those linked copy-files the following checks are done (AAA):
- Is the copy-file resident on disk and visible for the developer? If not, make it resident on disk in the work-directory of the developer.
- Is the correct version of the copy-file resident on disk? If not, give a warning to the developer.
If a copy-file is checked-out, modified and checked-in, then the new version of that copy-file is placed in the work-directory, where it is visible for the developers (if option 'put-in-work-environment' is enabled).
Therefore, this problem should not occur, because all copy-files of a source are checked when the source is checked-out, so that the developer can compile his source without problems.
I assume that the
copy-file of the wait-situation is new, because otherwise it would already be
on disk for a long time because of references by other programs.
Now I discuss some possible problems:
- SURE scans all sources for references to copy-files. The copy-files of a source are linked to the source via a COPY-FILE relation (see folder <source>/References with reason COPY-FILE). If for some reason a copy-file is not linked to a source, then this wait situation may occur, because then the above-mentioned check (AAA) will not be done for that copy-file.
- The copy-files are checked when a source is checked-out and NOT at the moment of the compile. Therefore, if you create a new copy-file and start using that new copy-file in your source, and then do the local compile, then you get the wait situation. Notice that SURE gives a screen where you can define that you want to upload the copy-files that are in use. Only the files that are known by SURE as a copy-file are mentioned on that screen.
Summarized:
if your copy-file does not have file-type 'is-a-copy-file' and the copy-file is
new, then the wait-condition may occur.
The solution:
- Enable system-option 'put copy-files in the work-directory after save or load' for all systems in the development environment.
- Run program RIS/COMPLETE/WORKENVIRONMENT("") to make the work-directory complete.
- When you create a new copy-file, give it a 'copy-file class' file-type (like COPY-FILE or INCLUDE or any file-type with option 'use as copy-file').
- When you do a local compile with a reference to a new copy-file, make sure that that new copy-file is uploaded with the compile.
I noticed that sources and objects disappeared from the host when the task is transferred to another environment. Is that the right procedure?
About the source:
A source is not
removed from disk at transfer-time but when it is checked-in.
If a file is checked-out, then it is placed on disk in the programmer's work-location, where he can work on it.
If a file is checked-in, then the new version of that file is loaded in SURE and removed from disk. Usually it is not necessary to keep a version of the source on disk, because there it can only confuse people (they will ask themselves: 'what kind of file is this: the correct version or an old version?').
A source is removed from disk after the check-in for multiple reasons:
- Less confusion for the developer (is it the last version or not).
- Keep the work-pack clean (space).
- Keep the work-location of the developer clean and organized.
There is an option (Environment / global-options / second tab / field 'save source directory') that defines a directory that keeps a copy of all checked-in files. When the new version of a source is checked-in, it is moved to the save-source-directory instead of being removed from disk.
About the object:
By default, the
compiled object is not removed from disk when the source is checked-in.
There is an option
(environment, global options, second tab) that says: 'the test-object will be
removed after saving the source in SURE'. The purpose of this option is to keep
the work-environment clean. However, setting this option depends on the way
that SURE is configured:
- If each developer has his own private workspace, then this option should be set. A test compilation started by the developer puts the test-object in his own private workspace where he can do his unit test. The test-object is in the private workspace of the developer, so it won't jeopardize the tests of other developers. When the modifications to the source are done and the source is checked-in, then the test-object can be removed from the programmer's private workspace. SURE recompiles the checked-in source as soon as possible and puts the new object in a global directory where it is visible for all developers.
- If multiple developers work in a shared workspace, and all objects are also available in the same shared workspace, then this option should be reset.
With
SURE it is very easy to download a CP/NX backup file from the mainframe to the
PC and to view that backup file with Microsoft Word.
The
problem is then that the layout of the backup file is not perfectly presented
by Word:
- A proportional font is used instead of a non-proportional font
- The font-size is too large
- Wrong margin settings
- Unexpected page-skips
- Etcetera
It is very easy to download a CP/NX printer backup file from the mainframe
to your PC:
- Open folder 'Unisys CP/NX' and then folder 'Printer Output': all waiting print requests of your usercode are listed here (the list is similar to the output of Cande command 'PS SHOW USER <usercode>').
- Click on the [+] in front of a print request name to see all the backup files of that print request.
- Right-click on the backup file name and choose command 'Download' from the sub-menu to download that backup file from the mainframe to the PC. The file is placed on the PC in the 'download directory' that can be specified via menu bar: Options/<Sure options>/<local options>.
After the
backup file is copied to the PC, it is loaded in Word because in most cases the
user wants to see the contents of that backup file. We have chosen to load the backup-file
in Word, because Word correctly handles the page-skips in the backup-file.
The problem that now arises is that the default font and font size of Word are not appropriate for the downloaded backup file: the backup file is created with a non-proportional font and (often) up to 132 characters on a line, and Word's default font and font size are mostly not set like that. This problem is fixed with a button and macro: BackupFile.
Button 'BackupFile' (in Word) sets the font to 'Courier New' (not proportional), and changes the font-size to ensure that each line can have 132 characters and each page 132 lines. A print of the file now gives correct output: no lines are wrapped or truncated, and no unexpected page-skips. The layout of the report can be checked online via the 'Print Preview' option.
The button
and macro are available via Word template BackupFile.dot.
Install button and macro 'BackupFile' in Word as follows:
- Click here to save template file BackupFile.dot on your disk.
- Click (in Word) on Tools/Macro/Macros and choose Organizer.
- Click on button 'Open File' on the left-hand side of the Organizer and browse to the saved BackupFile.dot template.
- Click on line 'BackupFile' in the text box on the left-hand side to make that line current, and then click on button COPY to copy the macro to your default template.
- Click on tab-sheet 'Toolbars' and copy the Toolbar 'BackupFile' from BackupFile.dot to your default template.
The
BackupFile button is now always available in Word. When you click on it the
font and font-size of the current document are changed.
I
notice that when you generate a compile deck for a CANDE compile you set
USERBACKUPNAME=TRUE and specify the name of the source as the last node of the
listing name. That is very nice.
But
when you do a batch compile you set USERBACKUPNAME=TRUE and specify a name like
COMP_3. Why not use the name of the source for this, too? That would be much
better for electronic filing of the listings.
Is
there a way to do that in Sure?
Please read the
attached document for a possible solution. Click here