
Recommended Posts

Posted

We recently decided to opt for an S3-compatible on-premises solution, namely MinIO (https://min.io/).

We would like to use a subdomain of our own domain to identify the storage, but the IPS implementation generates URLs like https://bucketname.file.example.com, which we were unable to set up with the free version of Cloudflare.

The alternative is to use the bucket name as part of the path, as supported by the official Amazon SDK (see the JavaScript snippet below).

const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({
    endpoint: 'https://files.example.com', // alternative S3-compatible server
    accessKeyId: 'foo',
    secretAccessKey: 'boo',
    s3ForcePathStyle: true, // use https://endpoint/bucket/key instead of https://bucket.endpoint/key
});

const bucket = 'attachments';
const time = new Date().getTime();

// Create the bucket first, then upload inside the callback so the upload
// cannot run before the bucket exists.
s3.createBucket({ Bucket: bucket }, err => {
    if (err && err.code !== 'BucketAlreadyOwnedByYou') {
        return console.log('err createBucket', err);
    }

    fs.readFile('index.js', (err, data) => {
        if (err) throw err;

        const params = {
            Bucket: bucket,
            Key: 'tests/' + time + '.js', // file will be saved as attachments/tests/<time>.js
            Body: data,
        };

        s3.upload(params, (s3Err, uploaded) => {
            if (s3Err) throw s3Err;
            console.log(`File uploaded successfully at ${uploaded.Location}`);

            s3.getObject({ Bucket: bucket, Key: 'tests/' + time + '.js' }, (err, obj) => {
                if (err) return console.error(err);
                console.log(obj);
            });
        });
    });
});

We tested it with a custom URL, but it didn't work; apparently it only modifies the download URL and not the bucket upload URL.
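To illustrate the difference being described here, below is a minimal sketch (a hypothetical helper, not IPS code) contrasting the two S3 addressing styles. Virtual-hosted style puts the bucket in the hostname, which is why it needs a wildcard DNS/TLS entry for *.files.example.com; path-style puts the bucket in the URL path, so the single endpoint hostname is enough:

```javascript
// Hypothetical helper contrasting the two S3 addressing styles.
function s3ObjectUrl(endpoint, bucket, key, { pathStyle = false } = {}) {
    const url = new URL(endpoint);
    if (pathStyle) {
        // Path style: https://endpoint/bucket/key -- one hostname, no wildcard cert needed
        url.pathname = `/${bucket}/${key}`;
    } else {
        // Virtual-hosted style: https://bucket.endpoint/key -- needs *.endpoint DNS/TLS
        url.hostname = `${bucket}.${url.hostname}`;
        url.pathname = `/${key}`;
    }
    return url.toString();
}

console.log(s3ObjectUrl('https://files.example.com', 'attachments', 'tests/1.js'));
// https://attachments.files.example.com/tests/1.js
console.log(s3ObjectUrl('https://files.example.com', 'attachments', 'tests/1.js', { pathStyle: true }));
// https://files.example.com/attachments/tests/1.js
```

The `s3ForcePathStyle: true` flag in the snippet above asks the SDK to build the second form for every request.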


I'd appreciate it if anyone can help with this issue.

Posted

Check out the Amazon.php file. I realized that if the bucket name contains a "." (dot) it uses path style, but it really doesn't make any sense to have https://s3.amazonaws.com/./path/ as the base URL; it should be https://s3.amazonaws.com/bucket/path instead.

 

<?php
// system/File/Amazon.php
// ...

/**
 * Build up the base Amazon URL
 *
 * @param   array   $configuration  Configuration data
 * @return  string
 */
public static function buildBaseUrl( $configuration )
{
    if ( mb_strstr( $configuration['bucket'], '.' ) )
    {
        /* Path style: https://endpoint/bucket/... */
        return ( \IPS\Request::i()->isSecure() ? "https" : "http" ) . "://"
            . ( isset( $configuration['endpoint'] ) ? $configuration['endpoint'] : "s3.amazonaws.com" )
            . "/{$configuration['bucket']}"
            . static::bucketPath( $configuration )
            . '/';
    }
    else
    {
        /* Virtual-hosted style: https://bucket.endpoint/... */
        return ( \IPS\Request::i()->isSecure() ? "https" : "http" ) . "://{$configuration['bucket']}."
            . ( isset( $configuration['endpoint'] ) ? $configuration['endpoint'] : "s3.amazonaws.com" )
            . static::bucketPath( $configuration )
            . '/';
    }
}

As a result, using a dot in the bucket name just to trigger path style returns a 403 (probably due to a wrong signature).

 


  • 2 weeks later...
Posted (edited)

I remember IPS mentioning their S3 implementation was meant for S3 proper, not S3-compatible services. So you will probably be on your own here, hacking at /system/File/Amazon.php to make it work.

There is also an incompatibility in their generateTemporaryDownloadUrl/URL system which at some point made their signed URLs invalid with Wasabi and MinIO. I'm not sure if it's still an issue, as we use a heavily patched /system/File/Amazon.php for our MinIO deployment.

Edited by G17 Media
  • 2 months later...
Posted

Is there a specific reason why it's not possible to make IPB compatible with S3-compatible services, apart from modding the Amazon.php file?

Posted

I'm not IPB staff, but a few reasons come to mind...

  • Even though services claim to be "S3 compatible", they often end up not being FULLY compatible (again, like above: invalid signed URLs). If they were actually compatible, you would not have these problems.
  • None of these services has a mass following. Why spend development time and effort on something used by only a tiny portion of the user base? The cost-versus-return just isn't there.
  • If they were to add it, they would have to support it. That means keeping up with these third-party companies, providing technical support, etc.

This is a situation best served by a third-party resource that can cater to the niche group who want to go that path.

Posted (edited)

The core of the issue I uncovered during testing was a failure in IPS' URL encoding and had nothing to do with MinIO/Wasabi (although I am happy to be wrong -- it works fine in our environment with a one-line fix). The "scariest" feature IPS really uses is multipart file upload, which works on pretty much every S3-compatible service I've used.
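For anyone hitting the same wall, this is the general class of encoding mismatch that tends to invalidate signed URLs (a hedged sketch; the exact IPS bug and its one-line fix are not shown in this thread). SigV4 expects each key segment percent-encoded per RFC 3986 (space as %20, the '/' separators left alone), whereas PHP's urlencode() emits '+' for spaces, so a strictly conforming service computes a different signature:

```javascript
// Hypothetical encoder producing the RFC 3986-style key encoding SigV4 expects.
function sigv4EncodeKey(key) {
    return key
        .split('/') // keep '/' separators unencoded
        .map(seg => encodeURIComponent(seg).replace(/[!'()*]/g, c =>
            // encodeURIComponent leaves !'()* alone; percent-encode them too
            '%' + c.charCodeAt(0).toString(16).toUpperCase()))
        .join('/');
}

console.log(sigv4EncodeKey('tests/my file (1).js'));
// tests/my%20file%20%281%29.js
```

In PHP, rawurlencode() (not urlencode()) gives the %20 form, which is one reason a patch to Amazon.php can be a one-liner.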

Edited by G17 Media