

Posted
Just now, sobrenome said:

And does the replication bucket have a copy-only rule or a sync rule? If it's sync, a file deleted from the main bucket will also be deleted from the replica.

Hi,

Yes, it's all automatic. I also enabled the versioning option; you can set up rules for how long to keep versions, and it will keep logs if you want it to. I suppose you could also put a copy in the deep-freeze/Glacier type S3 storage classes, if you were so minded (along with frozen onion rings, pizza and cheesecakes!).
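If it helps, here's a rough sketch of what that looks like with boto3 (the bucket name, retention periods and storage class are just placeholders, not my actual setup):

import boto3

s3 = boto3.client("s3")

# Turn on versioning so overwritten or deleted objects keep their previous versions
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: move old versions to Glacier after 30 days, expire them after a year
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-versions",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            }
        ]
    },
)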


Posted
11 minutes ago, sobrenome said:

If the file is not actually deleted, there is already a security layer against abusive deletion by an AdminCP account on IPS, and the files can be restored. No need for replication, as long as S3 keeps multiple copies across AZs. Am I right?

Well, the versioning is certainly useful for providing the option to restore a previous version. The replication option offers a different perspective and might be a bit superfluous depending upon your use case: for some it may be about geographical and legislative considerations; for others it might give a quicker response to have a bucket in a region closer to the end user for uploading, or allow a check on whether a cached file is still cacheable. I haven't done any particular testing with it.

I think if you are interested in being able to restore deleted content, bucket versioning and backups would certainly be the way to go.
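For what it's worth, with versioning enabled an ordinary delete only adds a delete marker, so a restore is really just removing that marker. A rough boto3 sketch (the bucket and key names are made up):

import boto3

s3 = boto3.client("s3")

# List versions and delete markers for the object you want back
resp = s3.list_object_versions(Bucket="my-bucket", Prefix="uploads/example.jpg")

# Removing the latest delete marker makes the previous version current again
for marker in resp.get("DeleteMarkers", []):
    if marker["IsLatest"]:
        s3.delete_object(
            Bucket="my-bucket",
            Key=marker["Key"],
            VersionId=marker["VersionId"],
        )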

Posted
4 hours ago, sobrenome said:

So far so good. If I do not use dots in the bucket name, how could I use Cloudflare and S3?

Good question! S3 takes care of it effectively. Each object (file) has a unique Object URL, so you can still access it via a browser etc. as normal.

E.g.

https://cdn-my-bucketname-com.s3-us-west-1.amazonaws.com/android-chrome-144x144.png

Assuming the object/file itself has been made public and you don't have any of the bucket-wide permission policies preventing public access to it, you would be able to view that file in your browser. Using a CNAME means you can effectively mask half of that long, ugly URL, shortening it to say:

https://cdn-my-bucketname.com/android-chrome-144x144.png
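In other words it's just a host swap; the object key stays the same. A purely illustrative sketch using the names from the example above:

# The same object, addressed two ways
bucket = "cdn-my-bucketname-com"
region = "us-west-1"
key = "android-chrome-144x144.png"

# Virtual-hosted style S3 Object URL
s3_url = f"https://{bucket}.s3-{region}.amazonaws.com/{key}"

# With a CNAME (e.g. through Cloudflare) pointing the custom domain at the bucket endpoint
cname_url = f"https://cdn-my-bucketname.com/{key}"

print(s3_url)
print(cname_url)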

 

Posted
2 hours ago, The Old Man said:

I think if you are interested in being able to restore deleted content, bucket versioning and backups would certainly be the way to go.

How do you back up S3 files?

Posted
9 hours ago, sobrenome said:

How do you back up S3 files?

AWS provides a number of technologies, so it's kind of tied to budget (warm/cold storage-tier savings), automation, and your usage/resilience needs.

Essentially, Amazon S3 provides features and tools that help maintain data version control, prevent accidental deletions, and replicate data to the same or a different AWS Region. With S3 Versioning you can preserve, retrieve, and restore every version of an object stored in Amazon S3, which allows you to recover from unintended (malicious?) user actions and application failures. There is also AWS Backup, but that's more for moving data from on-site storage into S3.

You could combine these with other features like Lifecycle rules, Same-Region Replication, Cross-Region Replication, Intelligent-Tiering, and S3 Batch Operations.

One method is to make use of multiple accounts: send objects/files to a bucket owned by a second account where the first account doesn't have delete permissions, and have the second account take ownership via S3 Object Ownership.
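A rough sketch of that kind of cross-account replication rule with boto3 (the account IDs, role ARN and bucket names are placeholders, and you'd still need versioning on both buckets plus the IAM role and destination bucket policy set up):

import boto3

s3 = boto3.client("s3")

# Replicate everything to a bucket in a second account and hand ownership of the
# replicas to that account, so the source account can't quietly remove the copies
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
        "Rules": [
            {
                "ID": "backup-to-second-account",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::my-backup-bucket",
                    "Account": "222222222222",
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)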

More reading:

https://docs.aws.amazon.com/AmazonS3/latest/dev/disaster-recovery-resiliency.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/batch-ops.html
 

There's also MFA Delete and S3 Object Lock if you want to add extra protection:

https://aws.amazon.com/s3/features/
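MFA Delete rides on top of versioning and, as far as I know, can only be switched on with the bucket owner's root credentials. A rough boto3 sketch (the MFA device serial and token are placeholders):

import boto3

s3 = boto3.client("s3")

# Require an MFA token to permanently delete object versions or suspend versioning
s3.put_bucket_versioning(
    Bucket="my-bucket",
    MFA="arn:aws:iam::111111111111:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)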

The new Storage Lens feature looks interesting; it provides analytics and recommendations.

 
