Vladimir, 2022-02-10 20:44:36
Amazon Web Services

An error occurred while changing the storage class of an object. Why?

Current Object Class: Glacier Flexible Retrieval (formerly Glacier)
It must be changed to Standard.

Via the web console:
ERROR:
Automatically canceled
After 10,000 failed edits, the S3 console automatically cancels the edit storage class action for the remaining objects. For more information, see the Error column in the Failed to edit table below.

Well, fine, there are more than 10,000 objects (perhaps a separate question: can this limit be removed?)

CLI:
aws s3 cp s3://awsexamplebucket/dir1/ s3://awsexamplebucket/dir1/ --storage-class STANDARD --recursive --force-glacier-transfer
ERROR:
copy failed: s3://****** to s3://******* An error occurred (InvalidObjectState) when calling the CopyObject operation: Operation is not valid for the source object's storage class
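For context, a hedged sketch of the sequence the AWS documentation implies: an archived object has to be restored first, and only then can an in-place copy with --force-glacier-transfer rewrite it as STANDARD. The bucket and key names, the function names, and the poll interval below are my own placeholders, not anything from the question.

```shell
#!/bin/sh
# Sketch only: restore an archived object, wait for the restore to finish,
# then copy it over itself as STANDARD. Bucket/key are placeholders.
BUCKET=awsexamplebucket
KEY=dir1/example.obj

restore_finished() {
  # true once head-object output contains ongoing-request="false"
  grep -q 'ongoing-request=\\"false\\"'
}

change_to_standard() {
  # 1. request a temporary restore (long enough to perform the copy)
  aws s3api restore-object --bucket "$BUCKET" --key "$KEY" \
    --restore-request '{"Days":1,"GlacierJobParameters":{"Tier":"Standard"}}'
  # 2. poll until the restore completes (Standard tier usually takes hours)
  until aws s3api head-object --bucket "$BUCKET" --key "$KEY" | restore_finished; do
    sleep 600
  done
  # 3. the in-place copy now succeeds and rewrites the object as STANDARD
  aws s3 cp "s3://$BUCKET/$KEY" "s3://$BUCKET/$KEY" \
    --storage-class STANDARD --force-glacier-transfer
}
# change_to_standard   # uncomment to run against a real bucket
```

The InvalidObjectState error above is what CopyObject returns when step 1 has not been done (or has not completed) yet.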

UPDATE:
It looks like a Cyrillic problem.

aws s3api restore-object --bucket f1 --key Sber/Sber.zip --restore-request ' {"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'
Result:
An error occurred (NoSuchKey) when calling the RestoreObject operation: The specified key does not exist.

and accordingly
aws s3api restore-object --bucket f1 --key Sberb/Sber.zip --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'
works


2 answer(s)
Vitaly Karasik, 2022-02-11
@vitaly_il1

You cannot migrate an object from Glacier Flexible Retrieval to S3 Standard. You need to copy it to a new S3 standard bucket.
https://docs.aws.amazon.com/AmazonS3/latest/usergu... :
aws s3api restore-object .....
UPDATE: sorry, it seems possible after all; see https://aws.amazon.com/premiumsupport/knowledge-ce... , just as you tried.
I would do one of three things: either ask AWS support for help, or copy to another bucket, or try your command on fewer objects. If the latter works, just run it in a loop.

Vladimir, 2022-02-14
@Looka

I'll partially answer my own question.
The restore works with the command:
$ aws s3api restore-object --bucket awsexamplebucket --key dir1/example.obj --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'
What misled me was that I did not understand how the restore mechanism works.
Restoring neither creates a new Standard-class object nor changes the object's class! The object simply becomes available for retrieval. Unfortunately, this is not visible in the web interface.
You can check an object's status like this:
$ aws s3api head-object --bucket looka1 --key 1Cv77_Bin.zip
{
    "AcceptRanges": "bytes",
    "Restore": "ongoing-request=\"true\"",
    "ContentLength": 10771150,
    "ETag": "\"f4fc5cdcfeccf75b3a432c1ae6c540f9\"",
    "VersionId": "CefbquFTCKrUa0xf5TEKv.2UdWCXNuNs",
    "ContentType": "application/zip",
    "Metadata": {},
    "StorageClass": "DEEP_ARCHIVE"
}
Pay attention to:
"Restore": "ongoing-request=\"true\""
true - restoration in progress, false - restored, access available.
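That check can be wrapped in a small helper. This is just an illustrative sketch (the function name is my own); it classifies the Restore field from head-object output:

```shell
#!/bin/sh
# Reads `aws s3api head-object ...` JSON on stdin and prints the restore state.
restore_status() {
  json=$(cat)
  case $json in
    *'ongoing-request=\"true\"'*)  echo in-progress ;;
    *'ongoing-request=\"false\"'*) echo restored ;;
    *)                             echo not-requested ;;
  esac
}
# Example (against a real bucket):
# aws s3api head-object --bucket looka1 --key 1Cv77_Bin.zip | restore_status
```

Here not-requested covers archived objects whose head-object output has no Restore field at all, i.e. no restore has been started.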
Accordingly, the Initiate restore item in the web interface does the same thing.
But that item is available only for individual files; if a folder like Dir/ is in the list, the interface does not offer the restore option, only a storage class change.
As for the CLI, I did not find a built-in solution for a recursive call, only scripts, and the task is further complicated by Cyrillic characters and spaces in the names. In that sense the question remains open; I would be grateful for advice and help.
And Cyrillic turned out to have nothing to do with it.
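For the recursive part, here is a hedged sketch rather than a tested solution: listing the keys as JSON and printing them with jq -r gives one raw key per line, so spaces and Cyrillic survive, and `while IFS= read -r` preserves them on the way in. It assumes the jq utility is installed; the function name and the example bucket/prefix are placeholders.

```shell
#!/bin/sh
# Sketch: request a restore for every GLACIER-class object under a prefix.
restore_prefix() {
  bucket=$1
  prefix=$2
  aws s3api list-objects-v2 --bucket "$bucket" --prefix "$prefix" \
      --query 'Contents[?StorageClass==`GLACIER`].Key' --output json |
    jq -r '.[]' |
    while IFS= read -r key; do   # IFS= and -r keep spaces/backslashes intact
      aws s3api restore-object --bucket "$bucket" --key "$key" \
        --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}' \
        || printf 'failed: %s\n' "$key"
    done
}
# restore_prefix f1 'Sberb/'   # example invocation with the bucket from above
```

The `|| printf` keeps the loop going past individual failures instead of aborting, which matters when some objects are already restored.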
