Salesforce Exporter

v0.0.0

📘

The Salesforce Bulk API documentation can be accessed here

🚧

Important Note on Data Management

  • We recommend testing this connector with credentials from a staging environment to confirm data flows as intended. Because these reports can overwrite existing data or create new records, proceed with caution when executing them to prevent any unintended changes.

Set up

To set up the Salesforce exporter connector, you will need the following:

  • Username - Your Salesforce username.
  • Password - Your Salesforce password.
  • Domain - If your Salesforce application is hosted at https://tea-pot.my.salesforce.com, the domain is tea-pot.my.salesforce.com, excluding the https:// portion.
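With those credentials in hand, the connector authenticates against your Salesforce domain. The exact mechanism the connector uses internally is not documented here; as a hedged sketch, the snippet below builds the request for Salesforce's standard OAuth 2.0 username-password token flow. The `client_id`/`client_secret` (a connected app's consumer key and secret) are assumptions for illustration, not part of the connector configuration above.

```python
# Sketch of Salesforce's OAuth 2.0 username-password token request.
# client_id / client_secret are placeholders for a connected app's
# consumer key and secret -- they are NOT connector settings.

def build_token_request(domain: str, username: str, password: str,
                        client_id: str, client_secret: str):
    """Return the token endpoint URL and form body for the
    username-password OAuth flow."""
    url = f"https://{domain}/services/oauth2/token"
    body = {
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,  # append your security token here if your org requires it
    }
    return url, body

# Domain is supplied without the https:// portion, as described above.
url, body = build_token_request(
    "tea-pot.my.salesforce.com", "user@example.com", "hunter2",
    "CONSUMER_KEY", "CONSUMER_SECRET")
```

Note that the domain is passed exactly as configured, without the `https://` prefix; the sketch adds the scheme itself.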

Features


| Feature | Support | Notes |
| --- | --- | --- |
| API reliability | 🟢 | Reliable |

Reports detail


| ⬇️ Report | 🔑 Incremental key | 🔑 Primary key | 📄 Link to API endpoint |
| --- | --- | --- | --- |
| Create Objects | N/A | N/A | Bulk Insert |
| Create Objects (Custom) | N/A | N/A | Bulk Insert |
| Failed Job Results | N/A | N/A | Failed Job Results |
| Update Objects | N/A | N/A | Create a Job |
| Update Objects (Custom) | N/A | N/A | Create a Job |
| Upsert Objects | N/A | N/A | Bulk Upsert |
| Upsert Objects (Custom) | N/A | N/A | Bulk Upsert |
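
All of these reports map onto Salesforce Bulk API 2.0 ingest jobs. As a hedged illustration (the connector's internal request shape is not documented), this sketch builds the JSON body for creating an ingest job via `POST /services/data/vXX.X/jobs/ingest`, where upserts additionally require an external ID field:

```python
def ingest_job_payload(object_name: str, operation: str,
                       external_id_field: str = None) -> dict:
    """Build the JSON body for creating a Bulk API 2.0 ingest job.

    operation is "insert", "update", or "upsert"; upsert jobs must
    also name the external ID field used to match records.
    """
    payload = {
        "object": object_name,
        "operation": operation,
        "contentType": "CSV",
        "lineEnding": "LF",
    }
    if operation == "upsert":
        if not external_id_field:
            raise ValueError("upsert requires an external ID field")
        payload["externalIdFieldName"] = external_id_field
    return payload
```

For example, `ingest_job_payload("Account", "insert")` describes a Create Objects run, while `ingest_job_payload("Contact", "upsert", "External_Id__c")` describes an Upsert Objects run keyed on a hypothetical `External_Id__c` field.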

Limitations


📘

Behaviour of exporter (and additional table for logging)

Whenever you run these reports, the connector stores the requests in a table in the same schema as the destination table, named after it with a `_responses` suffix.

So if you export to little_pond.big_fish, the log table is little_pond.big_fish_responses.

Each row contains the first level of the request map flattened into columns (such as url, method, headers, etc.), plus response and error columns holding the response or error returned for that request.

On subsequent runs, new rows are appended to that same table.
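
The naming convention and row shape described above can be sketched as follows. This is an illustration of the documented behaviour, not the connector's actual implementation:

```python
def responses_table_name(destination_table: str) -> str:
    """The log table lives in the same schema, with "_responses" appended."""
    return destination_table + "_responses"

def flatten_request(request: dict, response=None, error=None) -> dict:
    """Mirror the log table's row shape: the first level of the request
    map becomes columns, plus response and error columns."""
    row = dict(request)
    row["response"] = response
    row["error"] = error
    return row
```

So `responses_table_name("little_pond.big_fish")` yields `little_pond.big_fish_responses`, and a request like `{"url": ..., "method": "POST", "headers": ...}` becomes one row with those keys as columns.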

📘

Custom Reports

Use the Custom report option to manually input an object name if your Salesforce object does not appear in the dropdown. A list of standard Salesforce objects can be found here.

📘

Failed Job Result report

In the destination table's `_responses` table, if the numberRecordsFailed field in the response column is greater than 0, you can query the failed records separately due to the asynchronous nature of the Bulk API.

  • Input the job ID from the response column into the Failed Job Results report to identify why the API job failed.
  • Point the extract to the same warehouse table as the original attempt.
  • The `_responses` table will be updated with the results of the new query.
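This flow corresponds to Bulk API 2.0's failed-results endpoint, `GET /services/data/vXX.X/jobs/ingest/{jobId}/failedResults`. As a hedged sketch (the field paths assume the row shape described earlier), the snippet below picks out jobs that reported failures and builds that URL for each:

```python
def failed_results_url(instance_url: str, api_version: str, job_id: str) -> str:
    """Bulk API 2.0 endpoint returning the CSV of failed records for an ingest job."""
    return (f"{instance_url}/services/data/v{api_version}"
            f"/jobs/ingest/{job_id}/failedResults")

def jobs_needing_review(response_rows: list) -> list:
    """From _responses rows, pick job IDs whose response reported failed records."""
    return [r["response"]["id"] for r in response_rows
            if r.get("response", {}).get("numberRecordsFailed", 0) > 0]
```

Feeding each returned job ID into the Failed Job Results report (or fetching the URL directly) shows why individual records were rejected.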
🚧

Batch Sizes

  • There is a limit of 15,000 batches allowed in a 24-hour period, and each job (one job is equivalent to one report run) cannot exceed 150 MB.
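
To stay under the per-job size cap, large exports need to be split across multiple runs. One possible approach (an assumption, not connector behaviour) is to greedily group CSV lines into chunks whose encoded size stays under the limit:

```python
def chunk_rows(csv_rows: list, max_bytes: int = 150 * 1024 * 1024) -> list:
    """Greedily group CSV lines into chunks whose UTF-8 encoded size
    (including newlines) stays at or under max_bytes."""
    chunks, current, size = [], [], 0
    for row in csv_rows:
        row_size = len(row.encode("utf-8")) + 1  # +1 for the newline
        if current and size + row_size > max_bytes:
            chunks.append(current)
            current, size = [], 0
        current.append(row)
        size += row_size
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be submitted as its own job, keeping every run within the 150 MB limit while staying mindful of the 15,000-batch daily cap.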