Issues with the loading script: not enough RAM to load the whole TAR file
#1
by albertvillanova (HF staff)
This is a follow-up to the issue: https://github.com/huggingface/datasets-server/issues/1248
As reported by @chavinlo, the viewer crashes with a `JobManagerCrashedError`:

```
Job manager crashed while running this job (missing heartbeats).
Error code: JobManagerCrashedError
```
This is caused by the loading script, which tries to load the entire TAR file into memory. Maybe you could try to yield while iterating over the TAR archive members instead...
```python
elif file_type == 'jso':
    response_dict[file_id][file_type] = json.loads(file_contents.read())
if len(response_dict[file_id]) == 4:
    key = file_id
    value = response_dict.pop(file_id)
    yield key, {
        "id": key,
        ...
```
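For context, here is a minimal self-contained sketch of that pattern, assuming the archive is iterated with `dl_manager.iter_archive` and that each example consists of four members sharing a file id (the `_generate_examples` signature and the file-type handling are assumptions, not the actual script):

```python
import json
import os


def _generate_examples(self, archive_iter):
    """Yield one example at a time instead of materializing the whole TAR."""
    # `archive_iter` would come from `dl_manager.iter_archive(tar_path)`, which
    # yields (path, file_obj) pairs member by member without extracting the TAR.
    response_dict = {}  # partially assembled examples, keyed by file id
    for path, file_obj in archive_iter:
        file_id, ext = os.path.splitext(os.path.basename(path))
        file_type = ext.lstrip(".")
        if file_type == "json":
            response_dict.setdefault(file_id, {})[file_type] = json.load(file_obj)
        else:
            # non-JSON members (e.g. images) are kept as raw bytes
            response_dict.setdefault(file_id, {})[file_type] = file_obj.read()
        # once all four components of an example have arrived, emit it and
        # drop it from the buffer so memory use stays bounded
        if len(response_dict[file_id]) == 4:
            yield file_id, {"id": file_id, **response_dict.pop(file_id)}
```

Because `iter_archive` streams members in order and completed examples are popped from the buffer, only a handful of partially assembled examples are held in memory at any time.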
For the reported issue with

```python
_CHUNK_LIST = json.loads(open(dl_manager.download("lists/chunk_list.json"), 'r').read())
```

you could try replacing it with:
```python
chunk_list_url = "lists/chunk_list.json"
chunk_list_path = dl_manager.download(chunk_list_url)
with open(chunk_list_path, "r") as f:
    _CHUNK_LIST = json.load(f)
```
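The `with` block guarantees the file handle is closed once the chunk list is parsed, and `json.load(f)` is the idiomatic equivalent of `json.loads(f.read())`.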