We recently decided to opt for an S3-compatible on-premises solution, namely https://min.io/.
We would like to use a subdomain of our own domain to identify the storage, but the implementation in IPS generates virtual-hosted-style URLs like https://bucketname.file.example.com, which we were unable to support with the free version of Cloudflare.
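To make the difference concrete, here is a small sketch of the two addressing styles. The helper names (`virtualHostedUrl`, `pathStyleUrl`) and the example endpoint are hypothetical, not part of IPS or the AWS SDK:

```javascript
// Hypothetical helpers illustrating the two S3 addressing styles.
function virtualHostedUrl(endpoint, bucket, key) {
  const { protocol, host } = new URL(endpoint);
  // Bucket becomes a subdomain, which requires wildcard DNS/TLS coverage.
  return `${protocol}//${bucket}.${host}/${key}`;
}

function pathStyleUrl(endpoint, bucket, key) {
  // Bucket is just the first path segment; a single hostname suffices.
  return `${new URL(endpoint).origin}/${bucket}/${key}`;
}

console.log(virtualHostedUrl('https://files.example.com', 'attachments', 'a.png'));
// https://attachments.files.example.com/a.png
console.log(pathStyleUrl('https://files.example.com', 'attachments', 'a.png'));
// https://files.example.com/attachments/a.png
```

The virtual-hosted form is what IPS currently produces; the path-style form is what we would like it to produce.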
The alternative is to use the bucket name as a path segment (path-style addressing), as supported by the official Amazon SDK (see the JavaScript snippet below).
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({
  endpoint: 'https://files.example.com', // alternative S3-compatible software
  accessKeyId: 'foo',
  secretAccessKey: 'boo',
  s3ForcePathStyle: true, // use https://files.example.com/bucket/key instead of bucket subdomains
});

const bucket = 'attachments';
const time = new Date().getTime();

s3.createBucket({ Bucket: bucket }, err => {
  // Tolerate an existing bucket so the script can be re-run.
  if (err && err.code !== 'BucketAlreadyOwnedByYou') {
    return console.log('err createBucket', err);
  }

  // Upload only after the bucket exists.
  fs.readFile('index.js', (err, data) => {
    if (err) throw err;
    const params = {
      Bucket: bucket, // pass your bucket name
      Key: 'tests/' + time + '.js', // file will be saved as tests/<time>.js inside the bucket
      Body: data, // upload the raw file contents
    };
    s3.upload(params, (s3Err, uploaded) => {
      if (s3Err) throw s3Err;
      console.log(`File uploaded successfully at ${uploaded.Location}`);
      s3.getObject({ Bucket: bucket, Key: params.Key }, (getErr, obj) => {
        if (getErr) return console.error(getErr);
        console.log(obj);
      });
    });
  });
});
We tested it with a custom URL, but it didn't work: apparently the setting only changes the download URL, not the URL used to upload to the bucket.
We would appreciate it if anyone could help with this issue.