I've found an almost identical question here but don't have enough reputation to add comments, so I'll ask again, hoping that someone has found a solution in the meantime.
I am using MLflow (1.13.1) to track model performance and GCP Storage to store model artifacts.
MLflow is running on a GCP VM instance, and my Python application uses a service account with the Storage Object Creator and Storage Object Viewer roles (plus the storage.buckets.get permission, which I added afterwards) to store artifacts in GCS buckets and read them back.
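For context, the setup described above roughly corresponds to a tracking server launched like this (the key path, backend store, and bucket name are hypothetical placeholders, not my real values):

```shell
# On the GCP VM: credentials for the service account, then an MLflow
# tracking server whose default artifact root points at a GCS bucket.
# All names below are illustrative only.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --default-artifact-root gs://my-mlflow-bucket/artifacts \
  --host 0.0.0.0 --port 5000
```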
Everything works as expected: parameters and metrics display correctly in the MLflow UI, and model artifacts are correctly stored in the buckets. The problem is that the model artifacts do not show up in the MLflow UI because of this error:
Unable to list artifacts stored under gs:/******/artifacts for the current run.
Please contact your tracking server administrator to notify them of this error,
which can happen when the tracking server lacks permission to list artifacts under the current run's root artifact directory.
The quoted artifact location exists and contains the correct model artifacts, and MLflow should be able to read them, given the Storage Object Viewer role and the storage.buckets.get permission.
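For what it's worth, the failing operation is a list call against the run's artifact root, which MLflow's GCS artifact repository derives by splitting the gs:// URI into a bucket and an object prefix. A minimal sketch of that split (the bucket name is a hypothetical placeholder, since mine is masked above):

```python
from urllib.parse import urlparse


def split_gs_uri(uri: str) -> tuple[str, str]:
    """Split a gs:// artifact root into (bucket, prefix).

    Listing objects under the prefix then requires the
    storage.objects.list permission (included in the
    Storage Object Viewer role) on that bucket.
    """
    parsed = urlparse(uri)
    if parsed.scheme != "gs":
        raise ValueError(f"Not a GCS URI: {uri}")
    return parsed.netloc, parsed.path.lstrip("/")


# Hypothetical artifact root for illustration only.
bucket, prefix = split_gs_uri("gs://my-mlflow-bucket/artifacts")
print(bucket, prefix)  # my-mlflow-bucket artifacts
```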
Any suggestions on what could be wrong? Thank you.
question from:
https://stackoverflow.com/questions/65939058/mlflow-stores-artifacts-on-gcp-buckets-but-is-not-able-to-read-them