Batch Transaction Scoring
Overview
EagleSense provides S3 bucket access for uploading historical transaction data and for batch processing operations. All S3 operations require proper authentication as described in the Authentication section. You can access the bucket programmatically using the AWS SDKs or, for certain operations such as historical data uploads, via manual upload if preferred.
Please note, in live EagleSense operation, it is crucial that we receive the status (e.g., declined, authorised) of all previous charge attempts — including CITs (even if they are not sent for scoring), MITs, Initials, and Rebills — for a given user and order before a new retry is submitted for scoring. This ensures we can recalculate aggregate features with full context prior to the next evaluation.
You will use this method to send all processed transactions (CITs, MITs, initials, and rebills) with their statuses (e.g., declined, authorised), ensuring we receive complete transaction history and details regardless of whether a transaction requires scoring.
Transactions identified as rebills without a status (e.g., declined, authorised) will be submitted for scoring. All other transactions — those with statuses (CITs, MITs, initials, and rebills) — will not be scored, but will be ingested for aggregate feature recalculation (based on their timestamps) and for ongoing model training.
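The scoring rule above can be sketched as a small predicate. This is an illustrative sketch only: the field names `transaction_type` and `status` are placeholders, not the official EagleSense schema.

```python
def requires_scoring(transaction: dict) -> bool:
    """Decide whether a transaction is scored or only ingested.

    Per the rules above: rebills submitted WITHOUT a status are scored;
    everything else (CITs, MITs, initials, and rebills that already carry
    a status such as declined or authorised) is ingested for aggregate
    feature recalculation and model training only.

    NOTE: 'transaction_type' and 'status' are hypothetical field names
    used for illustration; confirm the real schema with EagleSense.
    """
    is_rebill = transaction.get("transaction_type") == "rebill"
    has_status = transaction.get("status") is not None
    return is_rebill and not has_status
```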
This page describes the process for sending batch transaction data for assessment.
S3 Bucket Structure
Your client-specific S3 bucket follows this naming convention: {merchant_name}-merchant-eagle-sense
{merchant_name}-merchant-eagle-sense/
├── historical-data/ # Upload historical transaction data for ML model training
│ └── historical_data_<id>.json
├── batch-request-data/ # Upload batch transaction data for evaluation
│ └── batch_<external_batch_id>.json
├── batch-result-data/ # Download batch evaluation results
│ └── batch_<external_batch_id>_<internal_subbatch_id>_result_<subbatch_index>_<total_number_of_subbatches>.json
└── update-data/ # Upload batch transaction attributes that became available later
└── update_<external_update_batch_id>.json
Prerequisites
Before performing S3 operations, ensure you have:
Proper authentication as described in the Authentication section
AWS SDK or HTTP client configured with your credentials
Your client name and S3 bucket name confirmed with EagleSense support
Data Upload Operations
Batch Transaction Assessment
Process large volumes of transactions asynchronously using the batch processing system.
Step 1: Upload Batch Data
Upload your transaction data directly to the batch-request-data directory. Each batch file must have a unique identifier to prevent conflicts and enable proper result tracking.
Supported Formats: JSON and CSV. Please confirm your preferred format with EagleSense support prior to integration.
Target Location: s3://{merchant_name}-merchant-eagle-sense/batch-request-data/
File Naming Convention: batch_<external_batch_id>.{json|csv}
Step 2: Monitor Batch Processing
Batch processing is asynchronous. Results are not immediately available after upload.
Critical: Implement polling logic to check for result availability.
Background: Batches are split into smaller sub-batches for processing. Each sub-batch generates its own result file.
Polling Strategy: Since the full result filename includes internal sub-batch identifiers not known at upload time, you must:
List objects in the batch-result-data directory with the prefix batch_<external_batch_id>
Filter for result files containing _result_ in the filename
Check for completion by counting result files against the expected number of sub-batches
Result File Pattern: s3://{merchant_name}-merchant-eagle-sense/batch-result-data/batch_<external_batch_id>_<internal_subbatch_id>_result_<subbatch_index>_<total_number_of_subbatches>.{json|csv}
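The polling steps above can be sketched as a pure completion check over an object listing (e.g. the keys returned by ListObjectsV2). A sketch under two assumptions: sub-batch indices cover 1..total exactly once, and filenames follow the documented pattern.

```python
import re

# Trailing pattern of a result filename:
# ..._result_<subbatch_index>_<total_number_of_subbatches>.{json|csv}
_RESULT_SUFFIX = re.compile(r"_result_(\d+)_(\d+)\.(?:json|csv)$")


def is_batch_complete(object_keys, external_batch_id):
    """Return True once every sub-batch result file for the batch is present."""
    prefix = f"batch_{external_batch_id}_"
    seen, total = set(), None
    for key in object_keys:
        name = key.rsplit("/", 1)[-1]  # strip any directory prefix
        if not name.startswith(prefix):
            continue
        m = _RESULT_SUFFIX.search(name)
        if not m:
            continue
        seen.add(int(m.group(1)))
        total = int(m.group(2))
    return total is not None and len(seen) == total
```

In practice this check would run inside a polling loop with a back-off delay between listings.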
Step 3: Download Results
Once processing is complete, download the evaluation results from the batch-result-data directory.
Result Location: s3://{merchant_name}-merchant-eagle-sense/batch-result-data/
Result File Pattern: batch_<external_batch_id>_<internal_subbatch_id>_result_<subbatch_index>_<total_number_of_subbatches>.{json|csv}
Download Process:
Use the result file list obtained from Step 2 monitoring
Download each sub-batch result file individually
Combine results from all sub-batches to get complete batch evaluation
Note: Each result file contains evaluation results for one sub-batch of your original batch. You'll need to download and process all sub-batch result files to get the complete evaluation.
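The download-and-combine step can be sketched as follows. This assumes, illustratively, that each result file body is a JSON array of per-transaction evaluations; confirm the actual result schema with EagleSense support.

```python
import json
import re

# Extract <subbatch_index> from ..._result_<index>_<total>.{json|csv}
_INDEX_RE = re.compile(r"_result_(\d+)_\d+\.(?:json|csv)$")


def combine_results(result_files):
    """Merge downloaded sub-batch result files into one list.

    result_files maps each result filename to its already-downloaded body.
    Files are concatenated in sub-batch index order so the combined output
    is deterministic regardless of download order.
    """
    indexed = []
    for name, body in result_files.items():
        m = _INDEX_RE.search(name)
        if not m:
            raise ValueError(f"unexpected result filename: {name}")
        indexed.append((int(m.group(1)), json.loads(body)))
    combined = []
    for _, rows in sorted(indexed, key=lambda pair: pair[0]):
        combined.extend(rows)
    return combined
```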