I have two sets of files: masks and images. TensorFlow has no built-in TIFF decoder, but tfio.experimental.image.decode_tiff exists. My TIFF files have more than 4 channels.
This code doesn't work:
import numpy as np
import tifffile as tiff  # the package is 'tifffile' (three f's), not 'tiffile'
import tensorflow as tf
import tensorflow_io as tfio  # needed below for decode_tiff

# write 100 pairs of 8-channel arrays as TIFF files
for i in range(100):
    a = np.random.random((30, 30, 8))
    b = np.random.randint(10, size=(30, 30, 8))
    tiff.imwrite('new1/images' + str(i) + '.tif', a)
    tiff.imwrite('new2/images' + str(i) + '.tif', b)
import glob
# sort both lists so image i lines up with mask i; glob order is arbitrary
paths1 = sorted(glob.glob('new1/*.tif'))
paths2 = sorted(glob.glob('new2/*.tif'))
def load(image_file, mask_file):
    image = tf.io.read_file(image_file)
    image = tfio.experimental.image.decode_tiff(image)
    mask = tf.io.read_file(mask_file)
    mask = tfio.experimental.image.decode_tiff(mask)
    input_image = tf.cast(image, tf.float32)
    mask_image = tf.cast(mask, tf.uint8)
    return input_image, mask_image
AUTO = tf.data.experimental.AUTOTUNE
BATCH_SIZE = 32
dataloader = tf.data.Dataset.from_tensor_slices((paths1, paths2))
dataloader = (
    dataloader
    .shuffle(1024)
    .map(load, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)
It is impossible to keep the entire dataset in memory, and saving everything to NumPy arrays offers no easy solution either. The code above raises no error directly, but the decoded images come out with shape (None, None, None), and model.fit then fails.
Is there an alternative way to load the saved arrays? The only option I see is brute force: feeding random batches manually in a custom training loop.
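One common workaround (not from the original post, and assuming the fixed 30x30x8 shapes used above) is to bypass decode_tiff, which is built around RGBA-style images, and instead decode with tifffile inside tf.py_function, then declare the shapes explicitly so the dataset exposes a static spec to model.fit:

```python
import numpy as np
import tifffile
import tensorflow as tf

IMG_SHAPE = (30, 30, 8)  # known shape of the saved arrays

def _read_tiff(path):
    # Runs eagerly inside tf.py_function; path arrives as a scalar string tensor.
    return tifffile.imread(path.numpy().decode("utf-8")).astype(np.float32)

def load(image_file, mask_file):
    image = tf.py_function(_read_tiff, [image_file], tf.float32)
    mask = tf.py_function(_read_tiff, [mask_file], tf.float32)
    # py_function loses all static shape information; restore it so that
    # downstream batching and model.fit see (30, 30, 8) instead of None.
    image.set_shape(IMG_SHAPE)
    mask.set_shape(IMG_SHAPE)
    return image, tf.cast(mask, tf.uint8)
```

Plugging this load into the same Dataset pipeline yields batches with concrete shapes (check dataset.element_spec), while still streaming files from disk instead of holding the whole dataset in memory.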
question from:
https://stackoverflow.com/questions/65944224/how-to-use-tf-dataset-with-tiff-files-in-image-segmentation