CA Agile Central Database Connector (Postgres)
The CA Agile Central Database Connector retrieves Agile Central (AC) entity information and writes it to a CSV file or a Postgres database table. The connector is classified as a one-way, one-time mechanism: information in Agile Central is never altered; information is only written to CSV files, which are used to populate tables in your target database. The connector runs on Windows, Linux, or Mac. It consists of software that you run on your platform according to your desired schedule, with its configuration expressed in a YAML text file. A configuration file can list one or more Agile Central entities as long as each entity specified is a leaf entity in the Agile Central data model. Common entities are story, defect, task, release, and user. See the AC Help documentation, in particular the sub-document concerning the WSAPI, for more information on AC entities and the data model.
Database Requirements and Considerations
- You must already have a database installed, set up, and operational; this connector does not create a database. The connector does not need to be installed or run on the same host as your database, but the database host must be reachable over the network from the host on which the connector runs. The user you specify in the DB section of the configuration must have permission to create database tables.
- The connector emits CSV files for the purpose of bulk loading tables in your target database and saves them to the CSV/<database_name> directory. The database name is set in the Name property of the DB section of the configuration file. Do not name your database default, since the CSV/default directory is reserved for CSV files created during CSV-only mode.
- If you want the data retrieved and written to the CSV files to be loaded into your target database, the configuration file must contain a DB section with all required entries (and any optional entries you need). By default, in this mode the CSV files are removed once the database tables are loaded with the information in the CSV files. You can alter this behavior with a setting in the DB section of the configuration file.
- By default, when the connector is run, after the CSV file for an entity is written and before the database table is populated with that data, the connector will drop and recreate the target table. You can alter this behavior in the DB section of the configuration file. Be aware that this has implications for the suitability and coherence of data in the database tables.
- Most database implementations allow table aliasing. This aliasing capability can be used in a scenario where you indicate that you want the database table names to be timestamped, so that each time you run the connector an entirely new set of tables is created and populated with the AC data retrieved by the connector. You employ the alias (typically the name of the AC entity, for example, Task) to refer to the database table Task_<timestamp_value> with the most recent <timestamp_value>. This would allow you to have queries or reports using just the entity names.
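One way to implement such aliasing in Postgres is a view named after the entity that points at the newest timestamped table. This is a sketch, not the connector's own mechanism: the table name Task_20180125 is an assumed example, and the connector's actual timestamp format may differ.

```sql
-- Sketch: re-point an alias named "Task" at the most recent timestamped table.
-- "Task_20180125" is an assumed example name; substitute your newest table.
CREATE OR REPLACE VIEW "Task" AS
    SELECT * FROM "Task_20180125";
```

After each connector run, re-issue the statement against the newest table so that queries and reports can keep referring simply to "Task".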
- The connector requires Bulk Insert permissions for the database. Consult your database administrator for assistance with this.
This guide includes:
- Software Requirements
- Connector Download
- CSV-Only Mode
- Agile Central Considerations
- Appendix A
- Appendix B
- Appendix C
- Revision history
Software Requirements
- Installed Postgres database.
- Python 3.5 or 3.6 (the 64-bit version is recommended for Windows). Download the installer from www.python.org.
Connector Download
Download and unzip the connector package. The unzipped installation directory contains:
LICENSE                  # CA Copyright file
README-Postgres.txt      # this file
ac2db                    # connector initiation script, this is what you will run
configs                  # holds any configuration files used with this connector
    sample_postgres.yml  # a sample configuration to use as a base reference
csv                      # holds CSV files produced by running the connector
dbeif                    # dbeif module root directory
log                      # holds log files produced by running the connector
To begin, keep your initial configuration minimal:
- Specify one or two entities to process.
- Specify a minimal number of fields of interest.
- Use the Query configuration item to limit the number of Agile Central entity records to retrieve or write.
- Use the Preview: True setting in the Service section to confirm successful operation and realistic record counts.
Decide on the database table naming policy:
- It is simplest to use the AC entity names as the names of the database tables.
- You can specify that the table names will contain a common timestamp from each run, but this means you need to ensure adequate capacity in your database. Unless you cull older tables, your database will add n new tables each run, where n is the number of different AC entities you specify in the configuration file.
- Once you have gotten a simple configuration to work, you can incrementally add entities, fields, selectors, and more as needed to accomplish your intended objectives.
Review the log files while you are in the initial stages to see if there are any errors or warnings. Once you have confirmed proper operation, you may want to set up automated monitoring of the log files.
- Locate the configs subdirectory.
- Copy the sample_postgres.yml file to a file named suitably for your environment, for example: cp sample_postgres.yml product_x.yml (or some other suitably named file that has a .yml suffix).
- Edit your product_x.yml file. Change the sample values for credentials, Workspace, Project, Server, Name to values that are relevant and valid for your environment.
See Appendix A on configuration file syntax.
Using a terminal window or console:
- cd to the installation root directory e.g. /opt/local/sw/dbeif-1.1.0
- python3.6 ac2db acdata.yml
This software requires that the configuration file reside in the configs subdirectory. You specify the base name of the file on the command line. Do not specify the subdirectory in the command line argument.
Use cron, launchctl, or Windows Task Scheduler. Make sure the environment in effect when this software runs is set so that you can run: python3.6 $DBEIF/ac2db your_config_file_name. Here $DBEIF is an environment variable containing the fully qualified path to the directory where the software is installed. For example, if you unzipped the package in /opt/local/sw, DBEIF would be set like this: export DBEIF=/opt/local/sw/dbeif-1.1.0.
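For example, a crontab entry for a nightly run at 02:00 might look like the following. This is a sketch: the installation path, configuration file name, and log path are assumed values for illustration.

```
# Run the connector nightly at 02:00.
# DBEIF, product_x.yml, and the log path are assumed example values.
DBEIF=/opt/local/sw/dbeif-1.1.0
0 2 * * * cd $DBEIF && python3.6 ac2db product_x.yml >> /tmp/ac2db_cron.log 2>&1
```

The cd keeps the connector running from its installation root, matching the manual invocation described above.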
The connector always writes a log file whose name is based on the configuration file name. The log file is written into the log subdirectory under the base installation directory. Within the configuration file, you can set the LogLevel, which determines the amount of logging information written to the log file. When you set LogLevel to DEBUG, you get the full range of logging messages, which can be helpful in pinpointing where a problem occurred and what information was being processed at the time of the anomaly.
Run the connector in Preview mode when setting things up for the first time. This allows you to get the connections to Agile Central and your database to initialize and validate correctly without creating and populating database tables. This mode can show you counts of records on a per Agile Central entity basis that would be written to the destination CSV file or the destination database tables.
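Based on the Preview setting described above, a minimal Service section for a first run might look like this (a sketch; any other Service properties keep their default values):

```yaml
Service:
    Preview : True
```

Once the record counts look right, set Preview to False (or remove the line) to let the connector actually create and populate tables.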
The following error was seen when the State field was listed twice in the Fetch list:
DEBUG: AgileCentralConnection.createFieldCategories - ACConn.createFieldCategories has pi_state_fields:  and field_categories['pi_state'] of: 
ERROR: DBConnector.run - RallyResponseError in DBConnector.run processing entity: Story -- Unable to retrieve 10 chunks of data
CSV-Only Mode
If you do not intend to use a database, you can export data to CSV files only. In this mode you do not have to install a database, and you omit the DB section of the configuration file. The connector writes a CSV file (whose name contains a timestamp) for each entity processed into the CSV/default folder. When run in this manner, the connector never removes the CSV files from the CSV subdirectory.
For example, in the configuration below the DB section is omitted since we are not interested in populating a database. The Service section is optional; in the absence of a Service section, its properties are set to default values.
AgileCentral:
    Server    : rally1.rallydev.com
    APIKey    : _abc123x
    Workspace : Jazz
    Project   : Deep Cats
Agile Central Considerations
For most tables, the ObjectID is used as the primary key for the table and is included whether or not you specified that field in the Fetch list. Two exceptions to this general rule are ProjectPermission and WorkspacePermission objects. Tables created for ProjectPermission and WorkspacePermission records use the CustomObjectID field (which is a standard field despite its name). ProjectPermission and WorkspacePermission objects in the Agile Central WebServices API inherit the ObjectID attribute from the root PersistableObject, but its value is null.
- The connector uses ObjectIDs to represent values of fields known as reference fields in Agile Central's WebServices API.
Examples of reference fields:
- Owner on a Defect (a reference to a User object)
- Release on HierarchicalRequirement (a reference to a Release object)
- Feature on HierarchicalRequirement (a reference to parent PortfolioItem of HierarchicalRequirement)
- Project or Workspace on any work item type (respective references to Project and Workspace objects)
Setting the ResolveUser property to True in the configuration file's AgileCentral section instructs the connector to represent users by their _refObjectName, which in AgileCentral defaults to DisplayName as long as DisplayName is not empty. If ResolveUser is set to True, the Owner column of a work item table is populated with display names, such as Wallace, Gromit. If ResolveUser is set to False, the Owner column is populated with ObjectIDs. Note that the DisplayName field on the User object in AgileCentral does not have a unique constraint: an AgileCentral subscription may have two different users with the same DisplayName, Wallace.
Instead of resolving a user to a DisplayName, you may use this approach:
- Set ResolveUser to False
- Export AgileCentral users to Users table in your database
- Export AgileCentral work items to respective tables in the same database, such as Task, Story tables
- Consider using a Join between two tables, such as Task and Users after the connector run completes
Here is an example of a JOIN on the Task and Users tables that combines columns from both:
objectid | formattedid | owner | owner username | owner officelocation
86253669480 | TA93985 | 32429145056 | firstname.lastname@example.org | 62 West Wallaby St
86253670264 | TA93986 | 32429145056 | email@example.com | 62 West Wallaby St
86253670816 | TA93987 | 32429145000 | firstname.lastname@example.org | 62 West Wallaby St
86253670827 | TA93999 | 32429147147 | email@example.com | Mossy Bottom Farm
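A query along the following lines could produce the result above. This is a sketch: the table and column names (task, users, owner, username, officelocation) are assumptions based on the sample output, and your schema may differ.

```sql
-- Sketch: join work items to their owners via the ObjectID stored in owner.
-- Table and column names are assumed; adjust to your schema.
SELECT t.objectid,
       t.formattedid,
       t.owner,
       u.username       AS "owner username",
       u.officelocation AS "owner officelocation"
  FROM task  AS t
  JOIN users AS u ON u.objectid = t.owner;
```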
In the Agile Central WebServices API, a query on the User entity returns only the requesting user's own record when no query criterion is supplied. To retrieve more than the single user, the query (expressed either as a BaseQuery or a Selector for the User entity) must not be empty.
Example: BaseQuery: ((Disabled = True) OR (Disabled != True))
While it is possible to use a single configuration file, in some scenarios it is more convenient to use multiple configuration files for the same database. For example, suppose you want to create Task, Story, Feature, and Users tables in a single database. You may use one configuration file, but combining dissimilar objects in a single configuration file makes it harder to come up with an Agile Central BaseQuery (which serves as a lowest common denominator) and may also make the Fetch list unwieldy. Alternatively, you may create two configuration files: one configuration file's Entities property will list Task, Story, and Feature, while the other configuration file's Entities property will list only User. Note that since user is a reserved word in the Postgres database, the connector creates a Users table to avoid a conflict. This happens automatically: the Entities property of the configuration file must still use the valid AgileCentral entity name, User.
Entities: Feature, Strategy
The connector does not support the ambiguous case in which a custom portfolio item type's name matches a standard entity name. For example: a regular entity in Agile Central → Task, and a similarly named portfolio item → PortfolioItem/Task.
This constraint applies even if you only reference an ambiguously named item. If a custom portfolio item type has the same name as an existing entity in the Agile Central system, you cannot use this connector.
Appendix A: Configuration file editing
The Agile Central Database Connector uses a text file in the YAML format. For brevity, this document mentions several of the most relevant syntax items and covers the three sections of a valid YAML configuration that can be used with the connector.
Refer to the Download Samples section at the bottom of this page for a sample configuration file with explanatory notes.
- Use a text editor (not MS Word or Google Doc) to edit the file
- Never use a tab character in the file; YAML does not allow or recognize tab characters
- Save the file in UTF-8 format
- Use a monospace font
- Be consistent with the number of spaces you use to indent (four spaces is a common indent level in YAML docs)
- On a line, the first occurrence of a non-quoted # character indicates a comment, the # character and all characters to the right are ignored in processing
- Keep the sections in the same order as is present in the sample.yml file
- Be aware that the colon character (:) is significant, it separates a key from the value
- Be aware that the dash character (-) is significant, it denotes the start of a list which may have one or more key value pairs that constitute the list item
- You usually do not have to quote values if they have spaces in them; you will have to quote the value if it contains an embedded # character
For complete information on YAML syntax, consult the web page at www.yaml.org/start.html. Here is a skeleton of the template_config.yml file:

AgileCentral:
    ... # several key value pairs are relevant for this section
DB:
    ... # several key value pairs are relevant for this section
Service:
    ... # a few key value pairs relevant for the overall operation of the connector appear in this section
The AgileCentral section specifies values to use to obtain a connection with Agile Central. The DB section specifies values to use to obtain a connection with the target destination database and specifies the policies governing time stamping of tables and whether dropping existing tables is allowed. The Service section controls some aspects of the connector behavior on an overall basis.
Appendix B: Agile Central Project specification
In AgileCentral, project names do not have to be unique. The connector provides a mechanism for distinguishing projects with the same name.
For example, suppose a fragment of a project tree looks like this:

    Biospheric
        Salamandra
        Corral
            Salamandra
Use a // sequence to separate the path components.
To specify the first occurrence of the Salamandra project you would have:
Biospheric // Salamandra
To specify the second occurrence of the Salamandra project you would have:
Biospheric // Corral // Salamandra
By default, the connector does not scope to a project's parent or children. A /* appended to a project name in the AgileCentral section will scope down to the project's child projects.
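For illustration, an AgileCentral section that scopes down into a project's children might look like this (a sketch; the Server, APIKey, Workspace, and Project values are assumed placeholders):

```yaml
AgileCentral:
    Server    : rally1.rallydev.com
    APIKey    : _abc123x
    Workspace : Jazz
    Project   : Biospheric/*
```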
Some scenarios require only workspace scope, and the Project specification can be omitted entirely from the AgileCentral section. Here is a configuration file example for exporting AgileCentral user data to a rallyusers database:
AgileCentral:
    Server    : rally1.rallydev.com
    APIKey    : _abc123x
    Workspace : N
    Entities  : User
    BaseQuery : (LastLoginDate > 2017-05-01)
DB:
    Name      : rallyusers
    User      : postgres
    Password  : postgres
Appendix C: Postgres database user privileges
If the Superuser property of the DB section is set to True, the connector executes an SQL COPY command to bulk copy CSV rows into the database. Set the Superuser property to False if you receive the following error: "must be superuser to COPY to or from a file. Hint: Anyone can COPY to stdout or from stdin. Psql's \copy command also works for anyone." With Superuser set to False, the connector attempts to bulk copy using psql's \copy command instead.
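The client-side fallback is equivalent to running a psql meta-command like the following (a sketch; the table name and CSV path are assumed example values):

```sql
-- \copy runs on the psql client and does not require superuser rights,
-- unlike server-side COPY FROM a file. "Task" and the path are assumed values.
\copy "Task" FROM 'csv/mydb/Task.csv' WITH (FORMAT csv, HEADER true)
```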
Revision history
- 1.1.0-master --- 25-Jan-2018
- Supports MS SQL Server.
- 1.0.1-master --- 2-Dec-2017
- Accounted for inconsistencies of spacing in the schema.
- 1.0.0-master --- 22-June-2017
- Initial Release - works with Postgres DB.