
Scheduled Jobs

Scheduled jobs are automated processes that may be executed on-demand or on a schedule. Workflows will automatically create their associated scheduled jobs.

Search for a scheduled job

  1. Expand the “Configuration” menu on the sidebar navigation.
  2. Select the “Scheduled Jobs” menu.
  3. Enter any text in the search box to locate scheduled jobs.

Add a new scheduled job

  1. Expand the “Configuration” menu on the sidebar navigation.
  2. Select the “Scheduled Jobs” menu.
  3. Select “Add” to add a scheduled job.
  4. Populate the fields. Data fields marked with an asterisk (*) are required.
    • Type*: Enter the type of scheduled job. For some types, additional fields will appear below. Options include:
      • API Integration (Pull)
      • API Integration (Push)
      • Data Import
      • Document Analysis
      • Document Delivery
      • Form Analysis
      • Microsoft Graph: Directory Synchronization
      • Report Delivery
      • Webhook
    • Name*: Enter the name of the scheduled job.
    • Priority*: Enter the run priority of this job as compared to other jobs scheduled to run at the same time.
    • Frequency*: How often the job will run. Based on the frequency selection, additional fields will expand to allow you to select the month/day/hour/minute that the job should run, along with the start and end dates.
    • Start Date*: The earliest date when this job may run.
      • Note: If Start Date is in the past, the Next Run date will be calculated by iterating from the Start Date by the selected frequency until a date is in the future.
    • End Date: The latest date when this job may run.
    • Notify on Failure: Notify the scheduled job owner via email of a job failure.
    • Failure Retries*: The number of times to retry running the job on failure before giving up. Attempt timing depends on the Retry Strategy selected below.
    • Retry Strategy: The retry strategy that is used when a transient job failure is detected. Defaults to Exponential Backoff, which is calculated as delays from first run: 0.5, 1.5, 3.5, 7.5, and 15.5 hours.
    • Tags: The list of tags associated with this record.
    • Comments: Any comments you would like to store for this scheduled job.
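
As an illustration of the default Exponential Backoff schedule described under Retry Strategy above: the documented delays from the first run (0.5, 1.5, 3.5, 7.5, and 15.5 hours) match the pattern 2^n − 0.5 hours for retry attempt n. The helper below is a sketch of that pattern, not Assetas code.

```python
# Illustrative sketch (not Assetas code): the documented Exponential
# Backoff delays, measured in hours from the first run, follow the
# pattern 2**n - 0.5 for retry attempt n = 0, 1, 2, ...

def backoff_delays(failure_retries: int) -> list[float]:
    """Delay in hours from the first run for each retry attempt."""
    return [2**n - 0.5 for n in range(failure_retries)]

print(backoff_delays(5))  # [0.5, 1.5, 3.5, 7.5, 15.5]
```
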
 

Develop scheduled jobs

  1. Expand the “Configuration” menu on the sidebar navigation.
  2. Select the “Scheduled Jobs” menu.
  3. Select the “View/Edit” button for one of the scheduled jobs. Depending on the scheduled job type, different sections may be available:
    • API Properties
      • Data Type*: The type of data to import.
      • Allow Telemetry Outside of Service Dates: Whether telemetry may be imported or updated when the telemetry active date falls before the asset’s In Service Date or after its Retired Date. Visible only when the Data Type above is set to Telemetry. The toggle defaults to ‘off’. Importing telemetry data to an asset without an In Service Date is allowed as long as there is no Retired Date or the active date is before the Retired Date.
      • URL*: The URL of the source data.
      • Method*: The resource method that is used for the API request.
      • Requested Content Type*: The type of content requested.
      • Authorization Type*: The authorization type that is used.
      • Authorization Token: The authorization bearer token that is used.
        • Note: this field will appear only when Bearer Token is selected in the Authorization Type field.
      • Username: The authorization username that is used to generate a Bearer Token for the API request, or for Basic Authorization.
        • Note: this field will appear only when Login and Bearer Token, Bearer Token, or Basic is selected in the Authorization Type field.
      • Password: The authorization password that is used to generate a Bearer Token for the API request, or for Basic Authorization.
        • Note: this field will appear only when Login and Bearer Token, Bearer Token, or Basic is selected in the Authorization Type field.
      • Login URL: The URL of the authorization login to be called to generate a Bearer Token just prior to the API request.
        • Note: this field will appear only when Login and Bearer Token is selected in the Authorization Type field.
      • Logout URL: The URL of the authorization logout to be called just prior to the API request.
        • Note: this field will appear only when Login and Bearer Token is selected in the Authorization Type field.
      • API Key: The API key that is used.
        • Note: this field will appear only when API Key is selected in the Authorization Type field.
      • Body Type*: The message body that is included with the API request.
      • Time Zone*: The time zone of any dates in the data. This field will default to the account time zone.
      • Maximum Rows*: The maximum number of rows to include in the mapping. 
      • Distinct Rows: Toggle on to filter out duplicate rows. 
      • Rows per Call*: The number of rows to bundle into a single API Call. The API is called until all the rows are processed. 
      • Paging: Toggle on if the results are paged and the API must be called for each page.
      • Page Number Token: Token in the results that holds the current page number (only appears if paging is on).
      • Next Page Number Token: Token in the results that holds the next page number (only appears if paging is on).
      • Data Token: Token in the results that holds the data (only appears if paging is on).
      • Maximum Pages: Maximum pages to pull (only appears if paging is on).
      • Start Row*: The first row of the result set to import. Rows before this are ignored.
      • End Row: The last row of the result set to import. Leave blank to import all rows after the start row.
      • Suppress Data Change Logging: Select if changes to the data (inserts, updates, or deletes) should not be logged.
    • Once the API properties are complete, a “Test Connection” and a “View Data” button will appear under the Verification field. Press the “Test Connection” button to verify the connection settings; it will return a sample of the results along with a count of the pre-filtered rows, or an error message. The “View Data” button returns a tabular report of the source data or a view of the raw data. Use the tab control at the top of the page to toggle between the view options.
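
To make the Authorization Type options above concrete, the hedged sketch below shows how each option typically maps onto HTTP request headers. The helper function and the API-key header name are illustrative assumptions, not the Assetas implementation (header names for API keys vary by service).

```python
# Hedged sketch (assumptions, not Assetas code): how common authorization
# types typically translate into HTTP request headers.
import base64

def build_auth_headers(auth_type: str, *, token: str = "", username: str = "",
                       password: str = "", api_key: str = "") -> dict:
    """Return the headers a pull job would attach for each auth type."""
    if auth_type == "Bearer Token":
        return {"Authorization": f"Bearer {token}"}
    if auth_type == "Basic":
        creds = base64.b64encode(f"{username}:{password}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    if auth_type == "API Key":
        return {"X-API-Key": api_key}  # header name is an assumption
    return {}  # e.g. no authorization

print(build_auth_headers("Bearer Token", token="abc"))
```

For “Login and Bearer Token”, the Login URL would first be called with the username and password to obtain the token, which is then attached as above.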
    • Data Import Properties
      • Data Type*: The type of data to import.
      • Worksheet: The index number of the worksheet to be imported. Ignored for text files.
      • Header Row: For text or Excel files, the row number that contains the header information. Leave blank if your file does not contain a header row.
      • Start Row: For text or Excel files, the first row of the data file that contains the data to import. Rows before this are ignored.
      • End Row: For text or Excel files, the last row of the data file that contains the data to import. Rows after this are ignored. Leave blank to import all rows after the start row.
      • Delimiter: For text files, the delimiter that separates the columns in the data file. Enter any value for Excel files; this field will be ignored.
      • Datetime Format: For text or Excel files, the format of the date/times in the data file.
        • Commonly used formats are listed in the dropdown values of the “Date Format” field in the Date & Time Preferences section of the “My Preferences” window.
      • Time Zone: The time zone that is used for dates in the data file.
      • Allow Manual Imports: Whether users are allowed to manually import data using this mapping.
        • Note: This option will display an “Import Files Now” button on the Scheduled Job detail page, which directs the user to a dedicated page to place files for import based on the predefined mapping.
        • This is a valuable option for testing FTP imports prior to going live.
      • File Filter: The comma-separated filter to apply to the selection of manual import files (e.g. ‘image/gif, image/jpeg, image/png, image/tiff, image/x-windows-bmp’).
      • Allow FTP Imports: This toggle controls if files should be pulled in from an FTP. Toggling this to ‘On’ reveals additional data fields to input the FTP server, port, username, password, folder, file filter, and post-processing file handling. For detailed directions on setting up an FTP import, please consult the Assetas Technical User Manual.
      • Email on Completion: Whether a summary email is sent to the person who manages the Scheduled Job when each file is finished processing.
      • Allow URL Imports: This toggle controls if files should be pulled from a URL address for automatic importing. Toggling this to ‘On’ reveals additional data fields to input the URL, Start Date, and End Date. For detailed directions on setting up a URL import, please consult the Assetas Technical User Manual.
      • Suppress Data Change Logging: Select if changes to the data (inserts, updates, or deletes) should not be logged.
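
As an illustration of how the Header Row, Start Row, End Row, and Delimiter fields above typically interact when reading a delimited text file, here is a minimal sketch. The function and the sample data are hypothetical; row numbers are 1-based, as in the fields above.

```python
# Illustrative sketch (not Assetas code): reading a delimited text file
# using 1-based Header Row / Start Row / End Row settings.

def read_delimited(lines, header_row, start_row, end_row=None, delimiter=","):
    """Parse rows into dicts keyed by the header row's column names."""
    header = lines[header_row - 1].split(delimiter)
    end = end_row if end_row is not None else len(lines)  # blank End Row: all rows
    return [dict(zip(header, line.split(delimiter)))
            for line in lines[start_row - 1:end]]

data = ["Asset,Value", "pump-1,42", "pump-2,17"]  # hypothetical file contents
print(read_delimited(data, header_row=1, start_row=2))
```
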
    • Document Analysis Properties
      • Last Analyzed*: Limit the documents to only those analyzed on these dates.
      • Document Type: Limit the documents to only those with this type. Leave blank to include all document types.
      • Document Status: Limit the documents to only those with this status. Leave blank to include all document statuses. Defaults to ‘Active’.
      • Suppress Data Change Logging: Select if changes to the data (inserts, updates, or deletes) should not be logged.
    • Document Delivery Properties
      • Document Template*: The document template that will be delivered.
      • Time Zone*: The time zone to use for any dates in the data.
      • Parameters: The parameters to use when generating the document. To pass in multiple parameters, separate them with a semicolon.
        • When setting a value in a dropdown list, use the following format:
          SL#<<Selection List GUID>>#=<<GUID of item in selection list>>
        • For example: SL#ParticipantsList#=4c6081db-25b8-47ec-845f-08daf275ba74
        • For date range and date time range parameters, please consult the Assetas manual.
      • Email Delivery: Toggle on to deliver the document by email.
        • Recipients: The list of email addresses that will receive the email.
          • Note: The variable {AssociatedContacts:GROUPID} may be used to deliver the email to a list of recipients.
        • Subject: The subject to use for the email.
        • Body: The body to use for the email.
      • FTP Delivery: Toggle on to deliver the document by FTP.
        • Server: The FTP server address where the document will be uploaded.
        • Port: The port number used for the FTP connection.
        • Username: The username for authenticating to the FTP server.
        • Password: The password for authenticating to the FTP server.
        • Folder: The destination folder on the FTP server where the document will be uploaded.
        • Overwrite File: Enable this option to overwrite an existing file with the same name in the specified folder.
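
The dropdown-parameter format documented above can be produced with a small helper; the function name is hypothetical, and the GUIDs below come from the example in the text. Multiple parameters are joined with semicolons, per the Parameters field notes.

```python
# Hypothetical helper (not an Assetas API): builds the documented
# format SL#<Selection List GUID>#=<GUID of item in selection list>.

def selection_list_param(list_guid: str, item_guid: str) -> str:
    return f"SL#{list_guid}#={item_guid}"

# Join multiple parameters with semicolons, as the documentation describes.
params = ";".join([
    selection_list_param("ParticipantsList",
                         "4c6081db-25b8-47ec-845f-08daf275ba74"),
])
print(params)  # SL#ParticipantsList#=4c6081db-25b8-47ec-845f-08daf275ba74
```
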
    • Form Analysis Properties
      • Last Analyzed*: Limit the forms to only those analyzed on these dates.
      • Form Type: Limit the forms to only those with this type.
      • Suppress Data Change Logging: Select if changes to the data (inserts, updates, or deletes) should not be logged.
    • Microsoft Graph Directory Synchronization Properties
      • Tenant ID*: The unique identifier of the Azure Active Directory tenant that holds the source contact information.
      • Query String: The optional query string to use with the request.
      • Suppress Data Change Logging: Select if changes to the data (inserts, updates, or deletes) should not be logged.
    • Report Delivery Properties
      • Report*: The tabular report that will be delivered.
      • Report Parameters: The parameters to use when running the report. To pass in multiple parameters, separate them with a semicolon.
        • When setting a value in a dropdown list, use the following format:
          SL#<<Selection List GUID>>#=<<GUID of item in selection list>>
        • For example: SL#ParticipantsList#=4c6081db-25b8-47ec-845f-08daf275ba74
        • For date range and date time range parameters, please consult the Assetas manual.
      • Time Zone*: The time zone to use for any dates in the report.
      • Format*: The format of the delivered report.
      • File Name: The file name to use for the file. Please add the file type extension to the file name (.csv or .xlsx). Leave blank to use the report name.
      • Suppress Header Row: Toggle on to not include the header row in the file.
      • Send Blank Report: Toggle on to send the email even if the report has no data. Toggle off to only send the email when the report has data.
      • Email Delivery: Toggle on to send the report via email as an attachment.
        • Recipients: The list of email addresses that will receive the email.
          • Note: The variable {AssociatedContacts:GROUPID} may be used to deliver the email to a list of recipients.
        • Subject: The subject to use for the email.
        • Body: The body to use for the email.
      • FTP Delivery: Toggle on to upload the report to an FTP site.
        • Server: The URL or IP address of the FTP server.
        • Port: The port number to use for the FTP connection.
        • Username: The username to use for the FTP authentication credentials. Leave blank for an anonymous connection.
        • Password: The password to use for the FTP authentication credentials. Leave blank for an anonymous connection.
        • Folder: The folder on the FTP server where the report should be placed. For example: /Folder A/Folder 2
        • Overwrite File: Toggle on to replace an existing file on the server.
    • Webhook Properties
      • There are some key configuration elements to creating a webhook scheduled job:
        • The webhook must be sent as a POST to https://api.assetas.com/Webhook/<Webhook Scheduled Job ID>
        • The webhook request must include the API key, either as a querystring argument (ApiKey=xyz) or in the request header (X-API-Key=xyz)
        • The request body must be sent as form-data, x-www-form-urlencoded, or as raw JSON
      • Data Type*: The type of data to import.
      • Suppress Data Change Logging: Select if changes to the data (inserts, updates, or deletes) should not be logged.
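
The webhook requirements above can be sketched as a minimal sender. The job ID, API key, and payload are placeholders; only the URL shape, the ApiKey querystring argument, and the X-API-Key header come from the documentation.

```python
# Hedged sketch of composing the documented webhook request.
# JOB_ID, API_KEY, and the payload are placeholders.

JOB_ID = "12345"   # placeholder Webhook Scheduled Job ID
API_KEY = "xyz"    # placeholder API key

base_url = f"https://api.assetas.com/Webhook/{JOB_ID}"

# Option 1: API key as a querystring argument.
url_with_key = f"{base_url}?ApiKey={API_KEY}"

# Option 2: API key in the request header, body as raw JSON.
headers = {"X-API-Key": API_KEY, "Content-Type": "application/json"}
body = '{"Asset": "pump-1", "Value": 42}'   # example raw JSON payload

# The actual send is an HTTP POST, e.g.:
#   requests.post(base_url, headers=headers, data=body)

print(url_with_key)  # https://api.assetas.com/Webhook/12345?ApiKey=xyz
```
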
    • Parameters
      • For API Integrations only.
      • Use: Toggle on to include this parameter in the request.
      • Key*: The key of the item.
      • Value*: The value of the item.
      • Comments: Any comments associated with the parameter.
    • Headers
      • For API Integrations only.
      • Use: Toggle on to include this header in the request.
      • Key*: The key of the item.
      • Value*: The value of the item.
      • Comments: Any comments associated with the header.
    • Body Items
      • For API Integrations only, and will appear only when a Body Type is selected in the API properties.
      • Use: Toggle on to include this body item in the request.
      • Position*: The order of this body item in relation to the other body items in the group.
      • Key*: The key of the item.
      • Value*: The value of the item.
      • Value is an Array: Toggle on to process the value as an array. Multiple values may be added as a comma-separated list with no spaces around the commas (for example: 1,2,3,4,5).
      • Comments: Any comments associated with the body item.
    • Mappings
      • For API Integrations, Data Imports, and Microsoft Graph scheduled jobs only. One column can be processed by multiple mapping records. This is useful in applying multiple sets of logic on one column of the import.
      • Hard Coded Value: Sets the token to the action value and does not use any import data.
      • Value: The data value to use.
        • Standard date variables are available to use in this field, including {MonthStart}, {MonthEnd}, {Now}, {Today}, {Today +/- n}, {YearStart}, and {YearEnd}.
        • The variable {JsonValue:<token_name>, <child_token_name>} may also be used to map data and perform evaluations that require data from multiple API response tokens or from tokens nested within a JSON object. For example:
          • Constructing dates: {DateTime:{JsonValue:gmt_datetime}, {JsonValue:hour_of_day}}
          • Concatenating fields: {JsonValue:token_name3} (ID: {JsonValue:token_name4, child_token_name})
          • Converting dates: {Evaluate:ToDateTimeFromUnixSeconds ({JsonValue:parent_token,child_date})}
          • NOTE: The <child_token_name> parameter is optional and should only be used when a top-level token contains a JSON object. If the top-level token containing a JSON object is Base64 encoded, this variable will auto-detect the encoding and decode the object.
        • You may also use the {Evaluate} variable to perform math functions on the data upon import. For example: {Evaluate:{JsonValue:flow} * {AssetAttribute:{JsonValue:serial},UnitConversion}}
        • NOTE: This field is visible when “Hard Coded Value” is toggled on.
      • Token: The unique token that contains the data within the data source. For a .csv or .xlsx import, use the token to specify the column number of the mapping (for example, #12 or #7 in a .csv or A, B, C in a .xlsx), or the header name (for example, “Date”). If using the header name, the mapping must match exactly (capitalization, spaces, etc.).
        • NOTE: This field is visible when “Hard-Coded Value” is toggled off.
        • NOTE: When the “#” symbol appears as the first character of a token, a column number is required for the import to work. When the “#” symbol appears after the first character, the import will assume it is part of the column name. If a token begins with “#” but you are not able to provide the column number, a workaround is to use the {DataValue:[token name]} variable, replacing [token name] with the actual token name including “#”.

      • Mapping: The data mapping to use for the given text or Excel file or results set, with each column mapping entry separated with commas. View Data Import Mappings for more details.
        • NOTE: Excel files may be mapped within the files themselves (in the first row) and this mapping field should remain blank if using an Excel file mapped in this format. For detailed directions on mapping data, please consult the Assetas Technical User Manual.
      • Position*: The position of the mapping within the data source.
      • Aggregate: For API (Push) only. The aggregate function to apply to this column (leave blank to use the unchanged data).
      • Rounding: For API (Push) only. The number of decimal places to round the result.
      • Limit Length: For API (Push) only. Truncate the data that is displayed to this number of characters. Use 0 to show all the data.
      • Blank Data Replacement: For API (Push) only. The replacement value to use when the data is blank.
      • Null Data Replacement: For API (Push) only. The replacement value to use when the data is null.
      • Data Transformations: Various data transformations are available. For detailed directions on setting up data transformations, please consult the Assetas Technical User Manual.
    • Recent Execution
      • For jobs that have been executed, this section will display the details around recent scheduled job runs.
      • API Integration and Data Import Scheduled Jobs will auto-cancel after they have been running for more than 30 minutes, or if they accrue over 100 row errors.
      • Hover over the “Logs” indicator to view the most recent detail log.
      • Select the “Details” button to view the detailed log entries, and to print and export detailed logs for individual scheduled job runs.
    • Child Data Imports
      • For scheduled jobs that trigger data imports, this section displays the details of those imports. The details and log page for each child import can be accessed by clicking on the row or the “Details” button.
    • Child Scheduled Jobs
      • For scheduled jobs that trigger child jobs, this section displays the list of those children and provides links to their details pages.
 

Manage scheduled jobs

  1. Locate the scheduled job.
  2. To the right of each record, under the Actions menu, you may:
    • View/Edit the scheduled job details.
    • Copy the scheduled job.
      • The copy function is only available for the following types of scheduled jobs: API Integration, Data Import, Database Integration, Document Analysis, Document Delivery, Form Analysis, and Report Delivery.
    • Download the scheduled job definition for importing into another database.
      • Only API Integration, Data Import, Database Integration, Document Delivery, and Report Delivery scheduled jobs may be downloaded.
    • Delete the scheduled job.
      • Rather than deleting a job, we recommend disabling it by setting the “End Date” to a date in the past; the job will no longer execute.

Users are not able to delete the “Data Retention Policy” job, as these parameters are controlled in the Account Details, under Data Retention.
