Scheduled report files

If enabled, Pismo customers can run multi-tenant batch jobs to generate report files, including those required to meet Brazilian Central Bank regulations and those that report daily account limits.

When a job executes, it searches for records that meet its filter conditions (usually a Data Lake query). If any are found, it writes the results to the Amazon Web Services (AWS) S3 bucket for your organization. Files are saved in Apache Parquet format.

File generation and paths

Files are written and saved according to type (daily, monthly, or full) and date. For example, s3://pismo-dataplatform-tn-55317847-57cd-45a3-8aed-a8dadd63cc6b/reports/job_name/<type>/<date values>.

Files and paths are generated on the following basis:

  • Daily - In this case, the type is daily and the date partitioning values are /year=YYYY/month=MM/day=DD/. For example, .../reports/accounting_events/type=daily/year=2020/month=1/day=10/filename.parquet

  • Monthly - At your request, or when a periodic reprocessing is needed, a job executes for a closed month of data. In this case, the type is monthly and the date partitioning is year=YYYY/month=MM. For example, .../reports/accounting_events/type=monthly/year=2020/month=1/filename.parquet

  • Full - You can generate a complete file that takes into account all past job data, without a date filter. In this case, the type is full and the date partitioning corresponds to the file's generation date. For example, .../reports/accounting_events/type=full/year=2020/month=1/day=10/filename.parquet
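The partitioning scheme above can be sketched as a small helper. This is not part of the Pismo API; the function name and bucket are hypothetical, illustrating only how the type and date values map to an S3 path:

```python
from datetime import date

# Hypothetical helper: builds the S3 prefix for a scheduled report file
# following the partitioning scheme described above. The bucket and job
# names are illustrative, not real identifiers.
def report_prefix(bucket: str, job_name: str, report_type: str, d: date) -> str:
    base = f"s3://{bucket}/reports/{job_name}/type={report_type}"
    if report_type == "monthly":
        # Monthly files are partitioned by year and month only.
        return f"{base}/year={d.year}/month={d.month}/"
    # Daily and full files are partitioned down to the day
    # (for full files, d is the generation date).
    return f"{base}/year={d.year}/month={d.month}/day={d.day}/"

print(report_prefix("my-org-bucket", "accounting_events", "daily", date(2020, 1, 10)))
```

Note that the example paths in this document show unpadded month and day values (month=1, day=10), which the sketch reproduces.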

🚧

Target date versus generation date

It's important to note that date partition values correspond to the last available data date, not the job's execution date. For example, if a daily job runs on 02/04/2020, the date partition values correspond to 02/03/2020.