What are the different possibilities of recurring imports?

This article will help you understand the PIM's recurring data import features.

Written by Sylvain Gourvil
Updated over a week ago

In a world where product data management is essential to ensure consistency and quality of information across all distribution channels, it is crucial to effectively integrate data from external sources into your PIM.
To facilitate this integration, we offer several methods of importing data, tailored to various situations and specific needs.
In this FAQ, we will discuss four main import options:

  • Full import by API;

  • Scheduled import of CSV/XLSX files, with file access via HTTPS or sFTP;

  • API-driven import of CSV/XLSX files, with file access via HTTPS or sFTP;

  • API-triggered import of a CSV file pushed with the request.

Each of these methods has advantages and disadvantages, and we will provide detailed information to help you choose the best option for your needs.

Full import by API

If you want to import your data exclusively via the API, a cookbook is available that walks you through the different steps.
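To give a rough idea of the shape of such an integration, here is a minimal sketch. The `documents` resource name, the payload fields, and the product records below are illustrative assumptions, not the actual cookbook API; refer to the cookbook for the real endpoints and fields.

```python
import json

API_BASE = "https://_MYPIM_.quable.com/api"

def build_request(resource, record, token):
    # Build the URL, headers and JSON body for one API call.
    # The resource name and payload shape are assumptions for illustration;
    # the cookbook documents the real endpoints and fields.
    return {
        "url": f"{API_BASE}/{resource}",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        "body": json.dumps(record),
    }

# One request per record, to be sent in a loop (e.g. with requests.post)
products = [{"id": "sku-001", "name": "T-shirt"}, {"id": "sku-002", "name": "Mug"}]
requests_to_send = [build_request("documents", p, "_MYAPITOKEN_") for p in products]
```

Each entry in `requests_to_send` can then be posted with your HTTP client of choice, checking the response of each call before moving on.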


Scheduled import of CSV/XLSX files with file access via HTTPS or sFTP

Alternatively, you can import your product (or media) data into the PIM/DAM in file format (CSV/XLSX), directly from the PIM.

To do this, you will need to create an import profile and then schedule a recurring import for that profile.

This method has the advantage of not requiring any code or orchestration on your side: the PIM retrieves your files at the scheduled time, integrates your data, and generates a report.

In addition, you can find all imported files and reports in the activity reports.

However, there is one constraint: to retrieve your file, our import engine must be able to access it via an HTTPS or sFTP link. You must therefore be able to provide this access within the import profile.

HTTPS links must be public.

The sFTP server must be accessible with a username and password.
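As an illustration, here are the two kinds of links an import profile can point to. The hosts, port, and paths below are placeholder assumptions; for sFTP, the username and password are configured in the import profile itself rather than in the link.

```python
from urllib.parse import urlsplit

# Illustrative link formats with placeholder values:
https_link = "https://_DOMAIN_.com/_PATH_/_FILENAME_.csv"     # must be publicly reachable
sftp_link = "sftp://_DOMAIN_.com:2222/_PATH_/_FILENAME_.csv"  # credentials live in the profile

# The scheme tells the import engine which protocol to use to fetch the file
schemes = [urlsplit(link).scheme for link in (https_link, sftp_link)]
```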


API-driven import of CSV/XLSX files with file access via HTTPS or sFTP

If you are interested in importing files via HTTPS or sFTP but do not want to use the Quable PIM scheduler, you can trigger the import on demand.

To do this, you need to create a profile for your import and then use the Quable API to trigger the import.

Here is an example in Python.

import requests
import json

url = "https://_MYPIM_.quable.com/api/imports"

# Identify the import profile and the remote file to retrieve
payload = json.dumps({
    "importProfileId": "xxx-xxx-xxxx-xxxx-xxx-xxx",
    "remotePath": "sftp://_DOMAIN_.com:__PORT_/_PATH_/_FILENAME_.csv"
})
headers = {
    'Content-Type': 'application/hal+json',
    'Authorization': 'Bearer _MYAPITOKEN_'
}

response = requests.post(url, headers=headers, data=payload, timeout=30)

If you want to trigger the import from a file hosted over HTTPS, simply change the remotePath key.
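For example, using the same placeholder values as above, the payload for an HTTPS-hosted file becomes:

```python
import json

# Identical request; only the remotePath value changes to an HTTPS link
payload = json.dumps({
    "importProfileId": "xxx-xxx-xxxx-xxxx-xxx-xxx",
    "remotePath": "https://_DOMAIN_.com/_PATH_/_FILENAME_.csv"
})
```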

You can use the FULL ACCESS TOKEN available on your instance.


API-triggered import of a CSV file pushed with the request

If you don't want to, or cannot, provide HTTPS or sFTP access to your file, you can create an import profile and launch the import on demand while pushing the file yourself.

In this case, the file must be available locally.

You will need to:

  • Push your file to the PIM via the /api/files API

  • Use the id returned in the response to launch the import.

Here is an example in Python.

import requests
import json

urlFile = "https://_MYPIM_.quable.com/api/files"
urlImport = "https://_MYPIM_.quable.com/api/imports"

# For the multipart upload, do not set Content-Type yourself:
# requests generates the correct multipart/form-data header
headersUpload = {
    'Authorization': 'Bearer _MYAPITOKEN_'
}
headersImport = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer _MYAPITOKEN_'
}

# Push the local file
payload = {'fileType': 'import'}
files = [
    (
        'file',
        ('__FILENAME___', open('__PATH_TO_LOCAL_FILE__', 'rb'), 'text/csv')
    )
]
responseFile = requests.post(urlFile, headers=headersUpload, data=payload, files=files, timeout=30)
# Retrieve the file ID that will be used when starting the import
fileId = responseFile.json()['id']

# Equivalent with cURL:
# curl --location urlFile \
#   --header 'Authorization: Bearer _MYAPITOKEN_' \
#   --form 'file=@"__PATH_TO_LOCAL_FILE__"' \
#   --form 'fileType="import"'

# Start the import
payloadImport = json.dumps({
    "importProfileId": "__IMPORT-PROFILE-ID__",
    "fileId": fileId
})
responseImport = requests.post(urlImport, headers=headersImport, data=payloadImport, timeout=30)

You can use the FULL ACCESS TOKEN available on your instance.
