The identities list covers roughly 57,000 images in total. I am now building the list of negative pairs with itertools.product. The whole result is stored in RAM, which is very costly, and my system hangs after about 4 minutes. How can I optimize the code below so that it does not have to keep everything in RAM?
import itertools

import pandas as pd

negatives = []

for i in range(0, len(idendities) - 1):
    for j in range(i + 1, len(idendities)):
        # every pair of samples from two different identities is a negative pair
        cross_product = itertools.product(samples_list[i], samples_list[j])
        for cross_sample in cross_product:
            negatives.append([cross_sample[0], cross_sample[1]])
        print(len(negatives))

negatives = pd.DataFrame(negatives, columns=["file_x", "file_y"])
negatives["decision"] = "No"

# down-sample the negatives to match the number of positive pairs
negatives = negatives.sample(positives.shape[0])
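
To show the scale of the problem, the snippet below counts how many pairs the nested loops would generate without materializing any of them (a rough check, assuming samples_list is the same per-identity list of file paths used above). With ~57,000 images spread across the identities, the count is on the order of a billion pairs in the worst case, and each pair stored as a small Python list costs roughly a hundred bytes or more, which is why RAM runs out before the sampling step.

# count the negative pairs produced by the loops above without storing them
total_pairs = sum(
    len(samples_list[i]) * len(samples_list[j])
    for i in range(len(samples_list) - 1)
    for j in range(i + 1, len(samples_list))
)
print(total_pairs)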