The Load Data to Wdata template brings data into a Wdata table by either replacing an existing dataset or adding a new one. This template is most often used as part of an existing chain, but it can be added to any workflow that requires a dataset to be uploaded to a table.
Requirements
- The target Wdata table must be created before executing this chain.
- This template consists of three chains. Each chain must be published separately to your workspace.
- The file name must include a .csv or .tsv extension, for example “File_Name_Example.csv” (a quick check is sketched after this list).
- The maximum recommended file size for a dataset is 300 MB. Learn more about file size recommendations.
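If you stage files with a script before running the chain, the extension requirement is easy to verify up front. The snippet below is a generic Python sketch, not part of the template; the file names are placeholders.

```python
from pathlib import Path

def has_valid_extension(file_name: str) -> bool:
    """Return True when the file name ends in .csv or .tsv (case-insensitive)."""
    return Path(file_name).suffix.lower() in {".csv", ".tsv"}

print(has_valid_extension("File_Name_Example.csv"))   # True
print(has_valid_extension("File_Name_Example.xlsx"))  # False: not accepted by the template
```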
Find the template
The Load Data to Wdata template is located in the Workiva Chains section of the Templates screen.
Here's how to find it:
- In Chain Builder, go to the Templates tab.
- Select Workiva Chains from the menu at the top.
- Search for Load Data to Wdata | Primary Chain at the top right and open the template.
- After locating the template, click New Chain to deploy it to your workspace.
Note: Once configured, each chain in this template must be published to the workspace.
Configure the template
This template consists of three chains: the Primary Chain, a Replace Dataset Chain, and an Add New Dataset Chain. When the Primary Chain is deployed in your workspace, the two child chains (Add/Replace) are automatically deployed with it.
Here's a closer look at each chain:
- Primary Chain: This chain checks whether your table already contains a dataset with the same name as the dataset being loaded. If a matched dataset is found, the chain captures information about it and, based on the "Load Method" runtime input configured for the parent chain, redirects the data to one of two child chains (see the sketch after this list).
- Replace Dataset Chain: This chain removes the matched dataset from your table and replaces it with the new one. If an error occurs during execution and the rollback option is selected, the chain automatically deletes the new dataset and reverts to the dataset that was to be replaced.
- Add New Dataset Chain: This chain creates a new dataset and imports it into your table. The new dataset must have a unique file name that isn't currently used by any other dataset in the table.
When adding this set of chains to an existing chain, the runtime inputs for the Primary Chain must be configured within the “Run Chain” node. The child chains do not require any changes or configuration.
Variables
| Type | Name | Purpose |
| --- | --- | --- |
| Workspace variable | wsv-WdataLoadWarningThreshold | Acts as a soft limit on dataset file size: any dataset above the chosen threshold triggers a warning. Because processing time grows with file size, smaller datasets process substantially faster, and the warning gives early notice that a dataset may be approaching the Wdata file size limit. When a warning is triggered, consider splitting or otherwise reworking the file (see the sketch after this table). Recommended threshold: 150 MB to 200 MB. |
| Dynamic chain variable | dcv-Chain Result | Captures the status of the chain at various stages. This is required and should not be changed. |
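As a rough illustration of what the soft limit does, the check below compares a local file's size against a configurable threshold. The helper, the 200 MB default, and the commented-out path are all illustrative; the real threshold is whatever value you store in wsv-WdataLoadWarningThreshold.

```python
import os

def warn_if_large(path: str, warn_threshold_mb: int = 200) -> bool:
    """Print a warning and return True when the file exceeds the soft limit."""
    size_mb = os.path.getsize(path) / (1024 * 1024)
    if size_mb > warn_threshold_mb:
        print(f"Warning: {os.path.basename(path)} is {size_mb:.0f} MB, "
              f"above the {warn_threshold_mb} MB threshold; consider splitting the file")
        return True
    return False

# warn_if_large("File_Name_Example.csv")  # placeholder path
```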
Runtime inputs
| Runtime input | Purpose | Field type | Required |
| --- | --- | --- | --- |
| Table ID | The ID of the Wdata table the dataset will be uploaded to; the chain also uses it to check the table for duplicate datasets. We recommend storing the Table ID in a workspace variable so the template can be reused across multiple processes. | Text | Yes |
| File name | The name of the dataset that will be imported to the Wdata table; it tells the chain which data should be replaced. For the Replace Dataset load method, the file name must match the name of the dataset to replace; for the Add New Dataset load method, it must be unique within the table. Note: The file name must include the .csv or .tsv extension, for example “File_Name_Example.csv”. | Text | Yes |
| Data file | The data file that will be imported into the table. The extension must be .csv or .tsv. Note: The data file can use any of the supported Wdata table delimiters. | File | Yes |
| Load method | Determines whether the file should replace an existing dataset or be added as a new one. Options: Replace Dataset or Add New Dataset. | Dropdown menu | Yes |
| Rollback | In case of error, rolls back any changes and re-imports the original dataset. Set to True by default. Only applicable to the Replace Dataset load method (see the sketch after this table). | Boolean (True/False) | No |
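Conceptually, the rollback behaves like a try/except wrapped around the replacement import. The sketch below only illustrates that flow: `table` and its methods are hypothetical placeholders, not the Wdata API or the chain's actual command nodes.

```python
def replace_with_rollback(table, file_name, new_file, rollback=True):
    """Replace a matched dataset; restore the original if the new import fails."""
    original = table.export_dataset(file_name)        # keep a copy to restore later
    table.delete_dataset(file_name)                   # remove the matched dataset
    try:
        table.import_dataset(file_name, new_file)     # load the replacement
    except Exception:
        if rollback:
            table.delete_dataset(file_name)           # delete the new dataset, if created
            table.import_dataset(file_name, original) # re-import the original dataset
        raise
```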
Troubleshooting
To view your chain run history, including errors:
- Go to the Monitor tab in Chain Builder.
- Hover over the question mark to view the status of the chain run.
If your chain triggered an error or failed to update the correct data, perform the following checks:
- Verify that all three chains have been published to your workspace. Each chain must be published separately.
- Ensure your dataset uses a supported delimiter and that the file name includes a .csv or .tsv extension (“File_Name_Example.csv”).
- When using the Replace Dataset load method, check that the file name matches the existing file name in your Wdata table. If it doesn't match, the chain will simply add a new file to your table without replacing the old one.
- Conversely, when using the Add New Dataset load method, check that the file name isn't used anywhere else in the Wdata table. The chain will fail if an overlapping file name is used.
- Ensure the OAuth2 grant associated with the Workiva connection has appropriate access.
- When inputting the Table ID, ensure that the entire ID has been entered and there are no leading or trailing blank spaces.
- Check for runtime timestamp variables. These should not be used in file names: a timestamp makes every file name unique, so the chain will never find a matched dataset in your table (a short illustration follows this list).
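The last check is easiest to see with a concrete name. The file names and the timestamp format below are made up, but the point holds for any timestamped name: it never matches what is already in the table.

```python
from datetime import datetime

base_name = "actuals.csv"                                # stable name: matches across runs
stamped = f"actuals_{datetime.now():%Y%m%d%H%M%S}.csv"   # unique every run: never matches

existing = {"actuals.csv", "actuals_20240101120000.csv"} # datasets already in the table
print(base_name in existing)  # True  -> Replace Dataset can find its target
print(stamped in existing)    # False -> the chain adds yet another copy instead
```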
Column mappings and other chain modifications
If you experience issues with column headers when importing data, data prep or other commands can be used to adjust the headers before import.
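Outside the chain, the same kind of cleanup can also be done with a small script before upload. The snippet below is only an illustration of header normalization, not the template's data prep commands; the file paths are placeholders.

```python
import csv

def normalize_headers(src_path: str, dst_path: str) -> None:
    """Copy a CSV, rewriting the header row as trimmed, lowercase, underscore-separated names."""
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader, writer = csv.reader(src), csv.writer(dst)
        headers = next(reader)
        writer.writerow([h.strip().lower().replace(" ", "_") for h in headers])
        writer.writerows(reader)

# normalize_headers("File_Name_Example.csv", "File_Name_Example_clean.csv")
```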
Optional: Tagging imported datasets
Tagging is supported when importing datasets, but we recommend deploying an additional set of chains from the template before making any changes to the original. Once modified, the chain will likely be difficult to reuse for other tables unless the datasets in those tables use the same tags.
To add tagging to a chain:
- Add the following runtime inputs to each of the three chains deployed from this template:
- Tag-Key (Text Field)
- Tag-Value (Text Field)
- Edit the Primary Chain to pass the runtime inputs to each "Run Chain" event.
- In both the Replace Dataset Chain and the Add New Dataset Chain, edit the "Import New Dataset" command node to accept the tag runtime inputs.
Once completed, your imported datasets will be tagged with the provided tag key and tag value.
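Logically, the change amounts to passing the two extra inputs through to the import step so the new dataset carries the tag. The sketch below is purely illustrative; `table`, its `import_dataset` method, and the tag dictionary are hypothetical stand-ins for the chain's command nodes, not the Wdata API.

```python
def import_with_tag(table, file_name, data_file, tag_key=None, tag_value=None):
    """Import a dataset and, when both inputs are supplied, attach one key/value tag."""
    dataset = table.import_dataset(file_name, data_file)  # placeholder import call
    if tag_key and tag_value:
        dataset.tags[tag_key] = tag_value                  # e.g. {"Period": "2024-Q1"}
    return dataset
```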