
 To test both scripts and reconcile them into a single Python script that generates a REQ and then extracts information about that REQ, you can follow these steps. I'll outline the overall process, including how to merge the logic of both scripts and how to test the result in your environment (Azure).

Steps to Reconcile and Test the Scripts:

1. Define the Overall Flow:

  • Step 1: Generate a REQ using the first script (via a POST request to ServiceNow).
  • Step 2: Extract the generated REQ details using the second script (via a lookup and aggregation query).

The idea is that once the REQ is generated, its ID (or display value) can be captured from the response, and that same ID will be used to extract details using the second script.

2. Setup Your Environment:

Ensure that you have the following in your development environment:

  1. Python libraries:
    • requests: To handle HTTP requests to the ServiceNow API.
    • json: For working with JSON payloads.
    • os: To manage environment variables securely.
  2. Secure your credentials:
    • Use Azure Key Vault or environment variables to store sensitive information (authentication tokens, credentials).
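As a minimal sketch of the environment-variable approach, a small helper can fail fast when a required setting is missing instead of silently passing `None` into the API calls. The `get_setting` helper below is hypothetical (not part of any SDK), and the `USER_NAME` value is set inline purely for illustration:

```python
import os

def get_setting(name, default=None):
    """Read a required setting from the environment, failing fast if it is absent."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# For illustration only -- in practice the variable is set outside the script
os.environ.setdefault('USER_NAME', 'svc-account')
print(get_setting('USER_NAME'))
```

The same helper works unchanged whether the values come from a local shell, an Azure App Configuration reference, or secrets injected from Key Vault at deploy time.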

3. Reconcile the Scripts:

Here is a step-by-step guide to merge both scripts:

Step 1: Define Configuration

Store any hardcoded values in environment variables or configuration files.

python
import requests
import json
import os

# Securely manage credentials using environment variables or Key Vault
user_name = os.environ.get('USER_NAME')
pwd = os.environ.get('PASSWORD')
api_root = os.environ.get('API_ROOT')        # URL like 'https://atlas-api.ubsdev.net/api/'
x_snow_auth = os.environ.get('X_SNOW_AUTH')  # ServiceNow auth token

Step 2: Function to Generate REQ

This will use the first script to generate a REQ.

python
def generate_req():
    headers = {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }
    cat_item_sys_id = '4516dddd87cf06d00dc141d30cbb358b'
    api_endpoint = f'sn_sc/servicecatalog/items/{cat_item_sys_id}/order_now'
    data = {
        'sysparm_requested_for': '3792a9cf4799b1d050d346f8736d4334',
        'sysparm_quantity': 1,
        'variables': {'hostname': 'ZU322K10'}
    }

    response = requests.post(
        f'{api_root}{api_endpoint}',
        auth=(user_name, pwd),
        data=json.dumps(data),
        headers=headers,
        verify=False
    )

    if response.status_code == 200:
        response_data = response.json()
        # Assuming 'REQ' details are in the response, extract the REQ number
        req_number = response_data.get('result', {}).get('display_value')
        print(f"REQ generated successfully: {req_number}")
        return req_number
    else:
        print(f"Failed to generate REQ. Status Code: {response.status_code}")
        print(response.text)
        return None

Step 3: Function to Extract REQ Details

This will use the second script to fetch the details of the REQ generated in Step 2.

python
def extract_req_details(req_number):
    headers = {
        'x-snow-auth': x_snow_auth,
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }
    # Aggregation pipeline to extract REQ details
    payload = [
        {"$match": {"request.display_value": req_number}},
        {"$lookup": {
            "from": "sc_task",
            "localField": "_id",
            "foreignField": "parent.sys_id",
            "as": "sc_task"
        }},
        {"$unwind": "$sc_task"},
        {"$project": {
            "RITM Number": "$number",
            "REQ Number": "$request.display_value",
            "Catalog Item": "$cat_item.display_value",
            "Task Number": "$sc_task.number",
            "Task Description": "$sc_task.short_description",
            "Task Assignment Group": "$sc_task.assignment_group.display_value",
            "Task State": "$sc_task.state.display_value",
            "Task Assigned To": "$sc_task.assigned_to.display_value"
        }}
    ]

    response = requests.post(
        f'{api_root}/api/now/table/<table_name>',  # Replace with correct endpoint
        headers=headers,
        data=json.dumps(payload)
    )

    if response.status_code == 200:
        req_details = response.json()
        print(f"REQ details retrieved successfully: {json.dumps(req_details, indent=4)}")
    else:
        print(f"Failed to retrieve REQ details. Status Code: {response.status_code}")
        print(response.text)

Step 4: Combine the Two Steps

You now have the functions to generate a REQ and extract its details. To combine them:

python
def main():
    # Step 1: Generate the REQ
    req_number = generate_req()

    # Step 2: If REQ is successfully generated, extract the details
    if req_number:
        extract_req_details(req_number)
    else:
        print("Could not generate REQ. Skipping extraction step.")

if __name__ == "__main__":
    main()

4. Testing the Combined Script:

You can now test the combined script by running it in your environment.

a. Testing in Local/Azure Environment:

  • Ensure your environment variables (e.g., USER_NAME, PASSWORD, API_ROOT, X_SNOW_AUTH) are correctly set up. For example, in Azure, you can use Azure Key Vault or Azure App Configuration to manage these values.

  • Run the script using a local Python environment, an Azure VM, or via an Azure Function.

b. Use Logging for Debugging:

Add logging to capture and monitor the status of each step. This will be useful, especially in an automated or Azure environment.

python
import logging

logging.basicConfig(level=logging.INFO)
logging.info("Starting REQ generation...")

c. Test for Different Scenarios:

  • Test what happens when the REQ generation fails.
  • Test edge cases where the REQ Number or other details might not be available.
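One way to exercise these failure paths without hitting ServiceNow at all is to stub `requests.post` with `unittest.mock`. The sketch below uses a simplified stand-in for `generate_req` so the example is self-contained; the URL is a placeholder and the stub's logic is only assumed to mirror the real function:

```python
import requests
from unittest import mock

def generate_req_stub(api_root, auth):
    """Simplified stand-in for generate_req(), kept minimal so the test is self-contained."""
    response = requests.post(f'{api_root}sn_sc/servicecatalog/items/demo/order_now', auth=auth)
    if response.status_code == 200:
        return response.json().get('result', {}).get('display_value')
    return None

# Scenario 1: ServiceNow returns HTTP 500 -- the function should return None
with mock.patch('requests.post') as fake_post:
    fake_post.return_value.status_code = 500
    fake_post.return_value.text = 'Internal Server Error'
    assert generate_req_stub('https://example.invalid/api/', ('user', 'pwd')) is None

# Scenario 2: HTTP 200 but the body is missing the REQ number (an edge case worth covering)
with mock.patch('requests.post') as fake_post:
    fake_post.return_value.status_code = 200
    fake_post.return_value.json.return_value = {'result': {}}
    assert generate_req_stub('https://example.invalid/api/', ('user', 'pwd')) is None
```

Because the mock replaces the network call entirely, these checks run in milliseconds and can be wired into a CI pipeline alongside the real integration tests.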

d. Check the Response Structure:

Ensure that the response structure from ServiceNow matches what you're expecting. If necessary, adjust the response.json() parsing logic based on the API response.
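A defensive parser makes this adjustment easier: rather than assuming one exact key, probe the plausible locations for the request number. The key names `request_number` and `number` below are assumptions about alternative response shapes, not guaranteed fields; verify them against your instance's actual payload:

```python
def extract_request_number(response_data):
    """Return the REQ number from an order_now-style payload, or None if absent."""
    result = response_data.get('result') or {}
    # Try each plausible key in turn; stop at the first non-empty value
    for key in ('display_value', 'request_number', 'number'):
        value = result.get(key)
        if value:
            return value
    return None

print(extract_request_number({'result': {'request_number': 'REQ0010001'}}))  # REQ0010001
print(extract_request_number({'error': 'something went wrong'}))             # None
```

Returning `None` instead of raising keeps the calling code's `if req_number:` check working unchanged.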


5. Next Steps for Automation:

  1. Azure Functions: If you want to automate this, consider deploying the script as an Azure Function that runs on a schedule or is triggered by specific events.

  2. Continuous Integration: If this script will be part of a larger process, you can integrate it with Azure DevOps pipelines.

  3. Logging and Monitoring: Set up logging (e.g., Azure Monitor) to track the success/failure of REQ generation and detail extraction.
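To make the success/failure of each step easy to track (and later ship to Azure Monitor, which collects stdout/stderr from Functions and App Service), the pipeline can emit one structured log line per step. The `log_step` helper below is a hypothetical convention, not part of any Azure SDK:

```python
import logging

# A module-level logger with a parseable key=value format; in Azure the
# stream output is picked up by the platform's log collection.
logger = logging.getLogger('req_pipeline')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_step(step, ok, detail=''):
    """Record the outcome of one pipeline step at an appropriate severity."""
    if ok:
        logger.info('step=%s status=success %s', step, detail)
    else:
        logger.error('step=%s status=failure %s', step, detail)

log_step('generate_req', True, 'req=REQ0010001')
log_step('extract_req_details', False, 'http_status=500')
```

Keeping the `step=... status=...` shape consistent lets you build simple Kusto queries over the collected logs to alert on failures.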

By following these steps, you can reconcile both scripts into one and automate the generation and extraction of REQ details. Let me know if you need further assistance with any part of the process!
