Files


File Storage Service, or simply File Service, extends the DreamFactory REST API with a generic interface for using local (DreamFactory server) and/or remote (Amazon S3, Azure Blob, Rackspace, etc.) storage services seamlessly, without having to worry about the differences among them.

File Services are native DreamFactory services and are supported by features like role-service-access, lookup usage, live API documentation, and caching. Once your file service is configured in DreamFactory, configuration details such as service credentials are hidden from your clients and stored securely in your DreamFactory instance's database. This allows for a simple, secure, and consistent way to access your local and remote storage services via the DreamFactory REST API.

There are primarily two types of File Services: Local and Remote/Cloud.

Note: As of DreamFactory 2.3.1, all files downloaded using the file services are downloaded in chunks in order to support large files. The default chunk size is 10MB. You can change the chunk size by uncommenting and changing the DF_FILE_CHUNK_SIZE environment flag in the .env file.
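
For example, to raise the chunk size you would uncomment the flag and set a new value in .env. This is a sketch only; the value shown assumes bytes (50MB here), so check the commented-out entry in your own .env file for the exact expected format.

   DF_FILE_CHUNK_SIZE=52428800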

Local File Service

The Local File Service comes pre-configured in a fresh installation and by default uses the disk storage of the server hosting your DreamFactory instance. The default root of this local file storage is /your-dreamfactory-installation-root/storage/app/. You can change this default root simply by uncommenting and changing the DF_LOCAL_FILE_ROOT setting in the .env file located in your DreamFactory installation root.
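
For example, uncommenting and setting

   DF_LOCAL_FILE_ROOT=myfiles

would move the local storage root to /your-dreamfactory-installation-root/storage/myfiles/ (the folder name myfiles is just an illustration; the path follows from the default disks configuration shown later on this page).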

You can also configure the Local File Service for your DreamFactory instance to use cloud storage or even network storage mounted on your server. Please see the container section in the service configuration below for details.

Remote/Cloud File Service

DreamFactory also supports cloud-based storage services using the same REST API that's used for the Local File Service. The currently supported cloud storage services are listed in the Configuration section below.

Configuration

File services are managed via the api/v2/system/service API endpoint under the system service and have the following service types.

Each supported service type is listed below along with a sample configuration. (A sketch of registering a service via the API follows the list.)
local_file: Local File Storage
 {
   //Choose a URL safe service name
   "name": "files",
   //Choose a label for your service
   "label": "Local File Storage",
   //A short description of your service
   "description": "Service for accessing local file storage.",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "local_file",
   "config": {
     "public_path": ["public/folder1/", "public/folder2/"],
     //A folder path from your system root, or a path relative to the configured
     //storage folder for this installation, e.g. /home/vagrant/code/storage.
     //This path must be readable and writable by the web server.
     "container": "local"
   }
 }
aws_s3: Amazon Web Services S3 Storage
 {
   //Choose a URL safe service name
   "name": "s3",
   //Choose a label for your service
   "label": "s3",
   //A short description of your service
   "description": "An S3 storage service",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "aws_s3",
   "config": {
     "key": "xyz",
     "secret": "123",
     "region": "us-east-1",
     "public_path": [
       "public/folder1/",
       "public/folder2/"
     ],
     "container": "my-s3-unique-container-name"
   }
 }
azure_blob: Microsoft Azure Blob Storage
 {
   //Choose a URL safe service name
   "name": "ab",
   //Choose a label for your service
   "label": "Azure Blob",
   //A short description of your service
   "description": "An Azure Blob storage service",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "azure_blob",
   "config": {
     "account_name": "account-name",
     "account_key": "account-key",
     "protocol": "https",
     "public_path": [
       "public/folder1/",
       "public/folder2/"
     ],
     "container": "my-azure-blob-container"
   }
 }
gridfs: GridFS Storage
{
    //Choose a URL safe service name
    "name": "gridfs",
    //Choose a label for your service
    "label": "gridfs",
    //A short description of your service
    "description": "A GridFS storage service",
    //Boolean flag to activate/inactivate your service
    "is_active": true,
    //Service type
    "type": "gridfs",
    "config": {
        "host": "x.x.x.x",
        "port": "27017",
        "database": "gridfs",
        "username": "gridfsuser",
        "password": "**********",
        "dsn": "null or dsn string if not using above connection parameters",
        "options": [],
        "driver_options": [],
        "public_path": [
            "public/folder1/",
            "public/folder2/"
        ]
    }
}
rackspace_cloud_files: Rackspace Cloud Files
  {
    //Choose a URL safe service name
    "name": "rcf",
    //Choose a label for your service
    "label": "rcf",
    //A short description of your service
    "description": "Rackspace Cloud Files storage service",
    //Boolean flag to activate/inactivate your service
    "is_active": true,
    //Service type
    "type": "rackspace_cloud_files",
    "config": {
      "username": "username",
      "tenant_name": "123456",
      "api_key": "abc123xyz",
      "url": "https://identity.api.rackspacecloud.com/v2.0",
      "region": "DFW",
      "storage_type": "rackspace cloudfiles",
      "public_path": [
        "public/folder1/",
        "public/folder2/"
      ],
      "container": "my-rackspace-cloud-container"
    }
  }
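
As a concrete example, any of the definitions above can be registered by POSTing it to the system service. Below is a minimal sketch using Python's requests library; the instance URL, API key, and admin session token are placeholders you must replace with your own values.

import requests

instance = "http://your-instance"
headers = {
    "X-DreamFactory-Api-Key": "your-app-api-key",
    "X-DreamFactory-Session-Token": "your-admin-session-token",
}

# The local_file service definition from the table above
service = {
    "name": "files",
    "label": "Local File Storage",
    "description": "Service for accessing local file storage.",
    "is_active": True,
    "type": "local_file",
    "config": {
        "public_path": ["public/folder1/", "public/folder2/"],
        "container": "local",
    },
}

# The system service expects new records wrapped in a "resource" array
response = requests.post(
    instance + "/api/v2/system/service",
    json={"resource": [service]},
    headers=headers,
)
response.raise_for_status()
print(response.json())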

Notice that the storage type configurations ("config" section) share the public_path setting, and all except gridfs also use a container setting. The cloud storage types have some extra self-explanatory configuration items that are specific to the corresponding cloud vendors.

The following describes the two common configurations across all storage service types:

public_path

Array of strings. Optional. If you want to make any of your storage directories public (accessible from anywhere), list those directories under the public_path config. Once your directories are public, you can access their files via the URL http://your-instance/<service>/<path>/<file>. For example, let's say you create a Local File Service using the following config.

 {
   "name": "files",
   "label": "Local File Storage",
   "description": "Service for accessing local file storage.",
   "is_active": true,
   "type": "local_file",
   "config": {
     "public_path": ["docs/", "public/folder1/", "public/folder2/"],
     "container": "local"
   }
 }

You can then access your files at the following URLs.

   http://your-instance/files/docs/document.html
   http://your-instance/files/public/folder1/image.jpg
   http://your-instance/files/public/folder2/report.html

container

String. Required. For remote/cloud storage service types, this is the name of your bucket/root directory. For Local File Storage, it is either a full folder path from your system root or a path relative to the configured storage folder for this installation, e.g. /home/vagrant/code/storage. This path must be readable and writable by the web server.

Below is the default disks configuration. You can choose any of these disks (local, s3, rackspace, azure) as the container for your Local File Storage service. The default is 'local'.

'disks' => [
    'local'     => [
        'driver' => 'local',
        'root'   => env('DF_MANAGED_STORAGE_PATH', storage_path()) .
            DIRECTORY_SEPARATOR .
            ltrim(env('DF_LOCAL_FILE_ROOT', 'app'), '/'),
    ],
    's3'        => [
        'driver' => 's3',
        'key'    => env('AWS_S3_KEY'),
        'secret' => env('AWS_S3_SECRET'),
        'region' => env('AWS_S3_REGION'),
        'bucket' => env('AWS_S3_CONTAINER'),
    ],
    'rackspace' => [
        'driver'       => 'rackspace',
        'username'     => env('ROS_USERNAME'),
        'password'     => env('ROS_PASSWORD'),
        'tenant_name'  => env('ROS_TENANT_NAME'),
        'container'    => env('ROS_CONTAINER'),
        'url'          => env('ROS_URL'),
        'region'       => env('ROS_REGION'),
        'storage_type' => env('ROS_STORAGE_TYPE'),
    ],
    'azure'     => [
        'driver'       => 'azure',
        'account_name' => env('AZURE_ACCOUNT_NAME'),
        'account_key'  => env('AZURE_ACCOUNT_KEY'),
        'protocol'     => 'https',
        'container'    => env('AZURE_BLOB_CONTAINER'),
    ],

],

If you choose any of the cloud-based storage options, make sure that the corresponding service credentials are in the .env file. For example, if you choose 's3' as your container, then you'll need to define the following options in the .env file.

   AWS_S3_KEY
   AWS_S3_SECRET
   AWS_S3_REGION
   AWS_S3_CONTAINER
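
For example, with placeholder values (matching the S3 sample configuration shown earlier):

   AWS_S3_KEY=your-aws-key
   AWS_S3_SECRET=your-aws-secret
   AWS_S3_REGION=us-east-1
   AWS_S3_CONTAINER=my-s3-unique-container-name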

You can also define your own disk and add it to this list. For example, to use network storage mounted on your server as the backing storage for your Local File Service, define a 'network_mount' disk and add it to the list as shown below. Once your disk is in this list, you can use it as your Local File Service container.

'disks' => [
    'local'     => [
        'driver' => 'local',
        'root'   => env('DF_MANAGED_STORAGE_PATH', storage_path()) .
            DIRECTORY_SEPARATOR .
            ltrim(env('DF_LOCAL_FILE_ROOT', 'app'), '/'),
    ],
    'network_mount' => [
        'driver' => 'local',
        'root' => '/mnt/your-mount-point'
    ],
    's3'        => [
        'driver' => 's3',
        'key'    => env('AWS_S3_KEY'),
        'secret' => env('AWS_S3_SECRET'),
        'region' => env('AWS_S3_REGION'),
        'bucket' => env('AWS_S3_CONTAINER'),
    ],
    'rackspace' => [
        'driver'       => 'rackspace',
        'username'     => env('ROS_USERNAME'),
        'password'     => env('ROS_PASSWORD'),
        'tenant_name'  => env('ROS_TENANT_NAME'),
        'container'    => env('ROS_CONTAINER'),
        'url'          => env('ROS_URL'),
        'region'       => env('ROS_REGION'),
        'storage_type' => env('ROS_STORAGE_TYPE'),
    ],
    'azure'     => [
        'driver'       => 'azure',
        'account_name' => env('AZURE_ACCOUNT_NAME'),
        'account_key'  => env('AZURE_ACCOUNT_KEY'),
        'protocol'     => 'https',
        'container'    => env('AZURE_BLOB_CONTAINER'),
    ],

],
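
With the disk defined, a Local File Service can point at it simply by naming it as the container. Here is a sketch reusing the local_file configuration format from above; the service name and label are arbitrary illustrations.

 {
   "name": "netfiles",
   "label": "Network Mount Storage",
   "description": "Service for accessing mounted network storage.",
   "is_active": true,
   "type": "local_file",
   "config": {
     "container": "network_mount"
   }
 }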

Events

All file storage services fire the following events, shown here for a service named files. The first part of the event name is always your service's name.

  • files.get
  • files.post
  • files.patch
  • files.delete
  • files.{folder_path}.get
  • files.{folder_path}.post
  • files.{folder_path}.patch
  • files.{folder_path}.delete
  • files.{file_path}.get
  • files.{file_path}.post
  • files.{file_path}.put
  • files.{file_path}.patch
  • files.{file_path}.delete