Files

From DreamFactory

Revision as of 20:58, 27 July 2016

The File Storage Service, or simply File Service, extends the DreamFactory REST API with a generic interface for using local (DreamFactory server) and/or remote storage services (Amazon S3, Azure Blob, Rackspace, etc.) seamlessly, without having to worry about the differences among them. File Services are native DreamFactory services and are supported by features like role-service-access, lookup usage, live API documentation, and caching. Once your file service is configured in DreamFactory, all configuration details, such as service credentials, are hidden from your client; they are securely stored in your DreamFactory instance's database. This allows for a simple, secure, and consistent way to access your local and remote storage services via the DreamFactory REST API. There are two primary types of File Services: Local and Remote/Cloud.

Local File Service

The Local File Service comes pre-configured in a fresh installation and, by default, allows you to use the disk storage of the server where your DreamFactory instance is hosted. The default root of this local file storage is /your-dreamfactory-installation-root/storage/app/. You can change this default root by uncommenting and editing the DF_LOCAL_FILE_ROOT setting in the .env file located in your DreamFactory installation root.

You can also configure your DreamFactory instance to use cloud storage, or even network storage mounted on your server, for your Local File Service. Please see the container section under the service configuration below for details.

Remote/Cloud File Service

DreamFactory also supports cloud-based storage services using the same REST API that's used for the Local File Service. The following cloud storage services are currently supported.

  • AWS S3
  • Azure Blob Storage
  • OpenStack Object Storage
  • Rackspace Cloud Files

Configuration

File services are managed via the api/v2/system/service API endpoint of the system service and use the following service types.
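As a sketch of how service creation works, the request below POSTs a local_file definition (the same shape as the first example that follows) to the system/service endpoint. The base URL, the admin API key, and the batch "resource" envelope around the payload are assumptions here; verify the exact format against your instance's live API documentation.

```python
import json
import urllib.request

# Assumed placeholder values -- replace with your instance URL and an admin API key.
BASE_URL = "http://your-instance"
API_KEY = "your-admin-api-key"

# A local_file service definition, wrapped in an assumed batch "resource" envelope.
payload = {
    "resource": [
        {
            "name": "files",
            "label": "Local File Storage",
            "description": "Service for accessing local file storage.",
            "is_active": True,
            "type": "local_file",
            "config": {
                "public_path": ["public/folder1/", "public/folder2/"],
                "container": "local",
            },
        }
    ]
}

# Build the POST request; the header name is an assumption based on
# DreamFactory's API key convention.
req = urllib.request.Request(
    BASE_URL + "/api/v2/system/service",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-DreamFactory-API-Key": API_KEY,
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send against a live instance
```

The same endpoint accepts any of the service types listed below; only the "type" and "config" sections change.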

Service Type Storage Service Configurations
local_file Local File Storage
 {
   //Choose a URL safe service name
   "name": "files",
   //Choose a label for your service
   "label": "Local File Storage",
   //A short description of your service
   "description": "Service for accessing local file storage.",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "local_file",
   "config": {
     "public_path": ["public/folder1/", "public/folder2/"],
     "container": "local"
   }
 }
aws_s3 Amazon Web Service S3 Storage
 {
   //Choose a URL safe service name
   "name": "s3",
   //Choose a label for your service
   "label": "s3",
   //A short description of your service
   "description": "An S3 storage service",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "aws_s3",
   "config": {
     "key": "xyz",
     "secret": "123",
     "region": "us-east-1",
     "public_path": [
       "public/folder1/",
       "public/folder2/"
     ],
     "container": "my-s3-unique-container-name"
   }
 }
azure_blob Microsoft Azure Blob Storage
 {
   //Choose a URL safe service name
   "name": "ab",
   //Choose a label for your service
   "label": "Azure Blob",
   //A short description of your service
   "description": "An Azure Blob storage service",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "azure_blob",
   "config": {
     "account_name": "account-name",
     "account_key": "account-key",
     "protocol": "https",
     "public_path": [
       "public/folder1/",
       "public/folder2/"
     ],
     "container": "my-azure-blob-container"
   }
 }
openstack_object_storage OpenStack Object Storage
 {
   //Choose a URL safe service name
   "name": "oos",
   //Choose a label for your service
   "label": "oos",
   //A short description of your service
   "description": "OpenStack Object Storage service",
   //Boolean flag to activate/inactivate your service
   "is_active": true,
   //Service type
   "type": "openstack_object_storage",
   "config": {
     "username": "username",
     "password": "secret",
     "tenant_name": "123456",
     "url": "https://identity.api.rackspacecloud.com/v2.0",
     "region": "DFW",
     "public_path": [
       "public/folder1/",
       "public/folder2/"
     ],
     "container": "my-open-stack-object-container"
   }
 }
rackspace_cloud_files Rackspace Cloud Files
  {
    //Choose a URL safe service name
    "name": "rcf",
    //Choose a label for your service
    "label": "rcf",
    //A short description of your service
    "description": "Rackspace Cloud Files storage service",
    //Boolean flag to activate/inactivate your service
    "is_active": true,
    //Service type
    "type": "rackspace_cloud_files",
    "config": {
      "username": "username",
      "tenant_name": "123456",
      "api_key": "abc123xyz",
      "url": "https://identity.api.rackspacecloud.com/v2.0",
      "region": "DFW",
      "storage_type": "rackspace cloudfiles",
      "public_path": [
        "public/folder1/",
        "public/folder2/"
      ],
      "container": "my-rackspace-cloud-container"
    }
  }

If you look closely at all of these storage type configurations (the "config" section), you will see that they all have the public_path and container settings in common. The cloud storage types have some extra configuration items that are specific to the corresponding cloud vendors and are self-explanatory. The following describes the two configurations common to all storage service types.

public_path

Array of strings. Optional. If you would like to make any of your storage directories public (accessible from anywhere), list those directories under the public_path config. Once your directories are public, you can access their files via the URL http://your-instance/<service>/<path>/<file>. For example, say you create a Local File Service using the following config.

 {
   "name": "files",
   "label": "Local File Storage",
   "description": "Service for accessing local file storage.",
   "is_active": true,
   "type": "local_file",
   "config": {
     "public_path": ["docs/", "public/folder1/", "public/folder2/"],
     "container": "local"
   }
 }

You can then access your files using the following URLs.

   http://your-instance/files/docs/document.html
   http://your-instance/files/public/folder1/image.jpg
   http://your-instance/files/public/folder2/report.html
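Because these paths are public, fetching a file is a plain HTTP GET with no session token or API key, as a minimal sketch (the host name is a placeholder):

```python
import urllib.request

# Placeholder host -- substitute your instance's address.
url = "http://your-instance/files/docs/document.html"

# No authentication headers needed: "docs/" is listed under public_path.
req = urllib.request.Request(url)

# with urllib.request.urlopen(req) as resp:      # uncomment against a live instance
#     html = resp.read().decode("utf-8")
```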

container

Text. Required. For the remote/cloud storage service types, this is the name of your bucket/root directory. For Local File Storage, however, this is the name of one of the disks configured in your DreamFactory instance under config/filesystems.php. Below is the default disks configuration. You can choose any of these disks (local, s3, rackspace, azure) as the container for your Local File Storage. The default is 'local'.

'disks' => [
    'local'     => [
        'driver' => 'local',
        'root'   => env('DF_MANAGED_STORAGE_PATH', storage_path()) .
            DIRECTORY_SEPARATOR .
            ltrim(env('DF_LOCAL_FILE_ROOT', 'app'), '/'),
    ],
    's3'        => [
        'driver' => 's3',
        'key'    => env('AWS_S3_KEY'),
        'secret' => env('AWS_S3_SECRET'),
        'region' => env('AWS_S3_REGION'),
        'bucket' => env('AWS_S3_CONTAINER'),
    ],
    'rackspace' => [
        'driver'       => 'rackspace',
        'username'     => env('ROS_USERNAME'),
        'password'     => env('ROS_PASSWORD'),
        'tenant_name'  => env('ROS_TENANT_NAME'),
        'container'    => env('ROS_CONTAINER'),
        'url'          => env('ROS_URL'),
        'region'       => env('ROS_REGION'),
        'storage_type' => env('ROS_STORAGE_TYPE'),
    ],
    'azure'     => [
        'driver'       => 'azure',
        'account_name' => env('AZURE_ACCOUNT_NAME'),
        'account_key'  => env('AZURE_ACCOUNT_KEY'),
        'protocol'     => 'https',
        'container'    => env('AZURE_BLOB_CONTAINER'),
    ],

],

If you choose any of the cloud-based disks, you will need to make sure that the corresponding service credentials are set in the .env file. For example, if you choose 's3' as your container, you will need to define the following options in the .env file.

   AWS_S3_KEY
   AWS_S3_SECRET
   AWS_S3_REGION
   AWS_S3_CONTAINER

You can also define your own disk and add it to this list. For example, if you would like to use network storage mounted on your server as the backing storage for your Local File Service, you can define a 'network_mount' disk and add it to the list as shown below. Once your disk is added to this list, you can use it as the container for your Local File Service.

'disks' => [
    'local'     => [
        'driver' => 'local',
        'root'   => env('DF_MANAGED_STORAGE_PATH', storage_path()) .
            DIRECTORY_SEPARATOR .
            ltrim(env('DF_LOCAL_FILE_ROOT', 'app'), '/'),
    ],
    'network_mount' => [
        'driver' => 'local',
        'root' => '/mnt/your-mount-point'
    ],
    's3'        => [
        'driver' => 's3',
        'key'    => env('AWS_S3_KEY'),
        'secret' => env('AWS_S3_SECRET'),
        'region' => env('AWS_S3_REGION'),
        'bucket' => env('AWS_S3_CONTAINER'),
    ],
    'rackspace' => [
        'driver'       => 'rackspace',
        'username'     => env('ROS_USERNAME'),
        'password'     => env('ROS_PASSWORD'),
        'tenant_name'  => env('ROS_TENANT_NAME'),
        'container'    => env('ROS_CONTAINER'),
        'url'          => env('ROS_URL'),
        'region'       => env('ROS_REGION'),
        'storage_type' => env('ROS_STORAGE_TYPE'),
    ],
    'azure'     => [
        'driver'       => 'azure',
        'account_name' => env('AZURE_ACCOUNT_NAME'),
        'account_key'  => env('AZURE_ACCOUNT_KEY'),
        'protocol'     => 'https',
        'container'    => env('AZURE_BLOB_CONTAINER'),
    ],

],


Events

  • files.get
  • files.post
  • files.patch
  • files.delete
  • files.{folder_path}.get
  • files.{folder_path}.post
  • files.{folder_path}.patch
  • files.{folder_path}.delete
  • files.{file_path}.get
  • files.{file_path}.post
  • files.{file_path}.put
  • files.{file_path}.patch
  • files.{file_path}.delete
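As a sketch, a script can be attached to one of these events through the system/event_script endpoint. The endpoint path, the script type, and the payload shape below are all assumptions, so verify them against your instance's live API documentation before use.

```python
import json
import urllib.request

# Assumed placeholder values -- replace with your instance URL and an admin API key.
BASE_URL = "http://your-instance"
API_KEY = "your-admin-api-key"

# Attach a script to the files.post event (fires when files are uploaded).
# "v8js" is an assumed scripting type; your instance may offer others.
script = {
    "name": "files.post",
    "type": "v8js",
    "is_active": True,
    "content": "console.log('file uploaded');",
}

req = urllib.request.Request(
    BASE_URL + "/api/v2/system/event_script/files.post",
    data=json.dumps(script).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-DreamFactory-API-Key": API_KEY,
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send against a live instance
```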