curl --request GET \
  --url https://studio.edgeimpulse.com/v1/api/organizations/{organizationId}/create-project/{createProjectId} \
  --header 'x-api-key: <api-key>'

{
  "success": true,
  "error": "<string>",
  "status": {
    "id": 123,
    "organizationId": 123,
    "name": "<string>",
    "uploadType": "dataset",
    "status": "waiting",
    "transformJobStatus": "waiting",
    "uploadJobStatus": "waiting",
    "category": "training",
    "created": "2023-11-07T05:31:56Z",
    "totalDownloadFileCount": 123,
    "totalDownloadFileSize": 123,
    "totalDownloadFileSizeString": "<string>",
    "totalUploadFileCount": 123,
    "transformationParallel": 123,
    "transformationSummary": {
      "startedCount": 123,
      "succeededCount": 123,
      "finishedCount": 123,
      "totalFileCount": 123,
      "totalTimeSpentSeconds": 123
    },
    "inProgress": true,
    "operatesOn": "file",
    "totalTimeSpentSeconds": 123,
    "totalTimeSpentString": "<string>",
    "files": [
      {
        "id": 123,
        "fileName": "<string>",
        "bucketPath": "<string>",
        "transformationJobStatus": "waiting",
        "linkToDataItem": "<string>",
        "lengthString": "<string>",
        "sourceDatasetType": "clinical",
        "transformationJobId": 123
      }
    ],
    "fileCountForFilter": 123,
    "uploadJobId": 123,
    "uploadJobFilesUploaded": 123,
    "projectOwner": "<string>",
    "projectId": 123,
    "projectName": "<string>",
    "transformationBlockId": 123,
    "builtinTransformationBlock": {},
    "transformationBlockName": "<string>",
    "outputDatasetName": "<string>",
    "outputDatasetBucketId": 123,
    "outputDatasetBucketPath": "<string>",
    "label": "<string>",
    "filterQuery": "<string>",
    "emailRecipientUids": [
      123
    ],
    "pipelineId": 123,
    "pipelineName": "<string>",
    "pipelineRunId": 123,
    "pipelineStep": 123,
    "createdByUser": {
      "id": 123,
      "name": "<string>",
      "username": "<string>",
      "photo": "<string>"
    }
  }
}

Get the current status of a transformation job.
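For scripted use, here is a minimal Python sketch of the same request, assuming the requests library is installed; the organization ID, job ID and API key below are placeholders you supply, and the response shape follows the example above.

import requests

API_KEY = "<api-key>"   # placeholder, as in the curl example
ORG_ID = 123            # your organization ID
JOB_ID = 456            # the transformation job ID (the createProjectId path parameter)

url = (
    "https://studio.edgeimpulse.com/v1/api/"
    f"organizations/{ORG_ID}/create-project/{JOB_ID}"
)

# Same GET request as the curl example above
resp = requests.get(url, headers={"x-api-key": API_KEY})
resp.raise_for_status()
body = resp.json()

if not body["success"]:
    raise RuntimeError(body.get("error", "unknown error"))

status = body["status"]
print(status["name"], status["transformJobStatus"], status["uploadJobStatus"])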
organizationId: Organization ID
createProjectId: Create project job ID
Maximum number of results of transformation jobs.
Offset in results of transformation jobs; can be used in conjunction with TransformLimitResultsParameter to implement paging.
Type of selected rows: either 'all', 'created', 'in-progress' or 'failed' (defaults to 'all').
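The transformation-job listing inside the status can be paged and filtered with the query parameters described above. The sketch below shows how they might be passed from Python; the parameter names used here (transformLimit, transformOffset, selection) are assumptions inferred from the descriptions and the TransformLimitResultsParameter reference, so verify them against the endpoint's actual parameter list.

import requests

# Assumed query parameter names (not confirmed by this page); verify before use.
params = {
    "transformLimit": 20,     # maximum number of transformation job results
    "transformOffset": 0,     # offset into the results, for paging
    "selection": "failed",    # 'all', 'created', 'in-progress' or 'failed'
}

resp = requests.get(
    "https://studio.edgeimpulse.com/v1/api/organizations/123/create-project/456",
    headers={"x-api-key": "<api-key>"},
    params=params,
)
print(resp.json()["status"]["fileCountForFilter"])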
OK
success: Whether the operation succeeded
error: Optional error description (set if 'success' was false)
status.uploadType: one of 'dataset', 'project'
status.status: one of 'waiting', 'created', 'started', 'finished', 'failed'
status.transformJobStatus: one of 'waiting', 'created', 'started', 'finished', 'failed'
status.uploadJobStatus: one of 'waiting', 'created', 'started', 'finished', 'failed'
status.category: one of 'training', 'testing', 'split'
status.transformationParallel: Number of transformation jobs that can be run in parallel
status.transformationSummary.totalTimeSpentSeconds: Total amount of compute used for this job (in seconds)
status.operatesOn: one of 'file', 'directory', 'standalone'
status.totalTimeSpentSeconds: Total amount of compute used for this job (in seconds)
status.totalTimeSpentString: Total amount of compute used (friendly string)
status.files[].transformationJobStatus: one of 'waiting', 'created', 'started', 'finished', 'failed'
Only set after the job was finished
status.files[].sourceDatasetType: one of 'clinical', 'files'
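Since transformJobStatus moves through the states listed above and 'finished' and 'failed' are its natural terminal states, a caller will typically poll this endpoint until one of those is reached. A minimal polling sketch, assuming the requests library and the same placeholder IDs and API key as in the earlier examples:

import time
import requests

API_KEY = "<api-key>"
URL = "https://studio.edgeimpulse.com/v1/api/organizations/123/create-project/456"

TERMINAL_STATES = {"finished", "failed"}

while True:
    body = requests.get(URL, headers={"x-api-key": API_KEY}).json()
    if not body["success"]:
        raise RuntimeError(body.get("error", "unknown error"))

    status = body["status"]
    summary = status["transformationSummary"]
    print(
        f"{status['transformJobStatus']}: "
        f"{summary['finishedCount']}/{summary['totalFileCount']} files, "
        f"{summary['succeededCount']} succeeded"
    )

    if status["transformJobStatus"] in TERMINAL_STATES:
        break
    time.sleep(10)  # poll every 10 seconds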