Cloud software, local files: A hybrid DAM approach
There have been two interesting articles on hybrid Digital Asset Management systems this week: Jeff Lawrence’s Finding the Perfect Balance Between SaaS and In-House DAM, and Ralph Windsor’s Combining On-Premise And SaaS DAM Strategies.
I don’t know which DAM products already work the way Jeff is describing – a “tightly integrated hybrid DAM solution” that keeps work in progress in a local system, pushing finished assets to a SaaS component for external distribution. [Update: Jeff says “Picturepark, SCC, Kaltura and many others”.] I’ve been thinking about hybrid DAM for quite a while from the developer’s perspective. Here’s an idea that I haven’t gotten around to implementing yet:
The primary benefit of a hybrid DAM is fast internal file transfer because the files remain inside the local network. So let’s assume the asset files (images, PDFs, videos etc.) are stored on a local server. That local server will also deliver the files via a simple Web server, and run minimal file processing software to accept uploads and ingest files, create renditions and extract file metadata.
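To make this concrete, here’s a rough Python sketch of such a local server component. This is a minimal sketch, not a real product: I’m assuming Flask and Pillow are installed, and the directory layout, the endpoints and the dam-storage.local host name are all made up for illustration.

```python
# Minimal sketch of the local file server component: deliver files, accept
# uploads, create a thumbnail rendition, extract basic file metadata.
import hashlib
from pathlib import Path

from flask import Flask, request, send_from_directory
from PIL import Image  # used here only to create a thumbnail rendition

ASSET_DIR = Path("assets")          # original files live on the local server
RENDITION_DIR = Path("renditions")  # derived files (thumbnails etc.)
ASSET_DIR.mkdir(exist_ok=True)
RENDITION_DIR.mkdir(exist_ok=True)

app = Flask(__name__)

@app.route("/files/<path:name>")
def deliver(name):
    # Plain file delivery over the (fast) local network.
    return send_from_directory(ASSET_DIR, name)

@app.route("/upload", methods=["POST"])
def upload():
    # Ingest: store the original, create a rendition, extract basic metadata.
    f = request.files["file"]
    target = ASSET_DIR / f.filename
    f.save(target)

    thumb = RENDITION_DIR / f"thumb_{f.filename}"
    with Image.open(target) as img:
        img.thumbnail((256, 256))
        img.save(thumb)

    metadata = {
        "filename": f.filename,
        "size": target.stat().st_size,
        "sha256": hashlib.sha256(target.read_bytes()).hexdigest(),
        "url": f"http://dam-storage.local/files/{f.filename}",  # hypothetical host
    }
    return metadata  # in the full design, this record gets pushed to the cloud DAM

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```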
The rest of the DAM software will run “in the cloud”: the user interface, metadata database and search engine index. When you run a search in the DAM UI, the system will know your files’ URLs and point your Web browser to load them from the (fast) local network. (This is just what image search engines on the Web do: they copy text and metadata into their index and provide the search interface, while the image files you see are downloaded from the original servers.)
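Here’s what that search flow could look like from a client’s perspective, again as a minimal sketch. The cloud endpoint and the response shape are hypothetical; the point is that the cloud returns URLs pointing at the local file server:

```python
# Sketch of the search flow (hypothetical endpoint and response shape): the
# cloud DAM holds only metadata and returns URLs that point at the local
# file server, so the actual files are fetched over the fast local network.
import requests

CLOUD_SEARCH = "https://dam.example.com/api/search"  # hypothetical cloud endpoint

resp = requests.get(CLOUD_SEARCH, params={"q": "product shots"})
resp.raise_for_status()
for hit in resp.json()["hits"]:
    # e.g. {"title": "Logo v2", "url": "http://dam-storage.local/files/logo_v2.png"}
    print(hit["title"], "->", hit["url"])  # URL resolves inside the local network
```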
Upload and ingestion will be a two-step process: the files go onto the local server, which then sends all metadata to the cloud DAM (to be put into its database and search engine index). Instructions for creating renditions (how many, how large) can be fetched from the cloud.
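Sketched in the same spirit (all endpoint names are again hypothetical), the local server’s side of that two-step ingestion might look like this:

```python
# Sketch of the two-step ingestion: fetch rendition instructions from the
# cloud, create the renditions locally, then push the metadata record so the
# cloud can put it into its database and search engine index.
import requests

CLOUD_API = "https://dam.example.com/api"  # hypothetical cloud DAM

def create_rendition(filename: str, size: int) -> None:
    ...  # resize with Pillow as in the upload sketch above

def ingest(metadata: dict) -> None:
    # 1. Ask the cloud which renditions to create (how many, how large).
    spec = requests.get(f"{CLOUD_API}/rendition-spec").json()
    for size in spec["sizes"]:  # e.g. [256, 1024]
        create_rendition(metadata["filename"], size)

    # 2. Send the metadata record to the cloud's database and search index.
    requests.post(f"{CLOUD_API}/assets", json=metadata).raise_for_status()
```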
We’ll now have decoupled “software as a service” from “storage as a service”: the storage, delivery and processing of files is cleanly separated from the DAM software, the metadata database and the search engine index. The latter – which require a lot more ongoing maintenance (software updates, search performance tuning etc.) – will be handled nicely by the DAM provider in their cloud. The local file server component can be installed relatively easily, or run from a pre-packaged virtual machine appliance or even a hardware offering (“your local DAM storage box”).
Now what about distribution of assets to the outside world, which doesn’t have access to your local network? If you expect low traffic, your local file server could be made available on the Internet and directly serve the files. Or files to be distributed could be copied to Internet-connected storage (in the DAM cloud or at any other storage provider). Maybe you’ll just want to copy smaller renditions of the files into the cloud, and redirect download requests for large files to your local server.
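A simple routing rule could implement that last variant. A minimal sketch, with made-up host names: small renditions come from cloud storage, large originals are redirected to the Internet-facing local server.

```python
# Sketch of a download router: small renditions are served from cloud
# storage, requests for large originals get redirected to the local server.
from flask import Flask, redirect

app = Flask(__name__)

CLOUD_STORAGE = "https://cdn.example.com/renditions"    # hypothetical
LOCAL_SERVER = "https://dam-storage.example.org/files"  # local server, exposed

@app.route("/download/<path:name>")
def download(name):
    if name.startswith("thumb_"):              # small rendition: cheap to host in the cloud
        return redirect(f"{CLOUD_STORAGE}/{name}")
    return redirect(f"{LOCAL_SERVER}/{name}")  # large original: fetch from the local server
```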
I’d add a small component that keeps copies of the metadata records on the local server. If the Internet connection fails or the DAM cloud goes down, you’ll be able to perform basic searches on your local server. (Or easily move to another DAM cloud provider since all the data is still under your control.)
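That component could be as small as a SQLite database mirroring the metadata records. A minimal sketch, assuming each record arrives as a dict and that your Python build ships with the FTS5 extension (most do):

```python
# Sketch of the local metadata cache: mirror each record into SQLite and use
# its FTS5 full-text index for basic searches when the cloud is unreachable.
import sqlite3

db = sqlite3.connect("metadata_cache.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS assets USING fts5(filename, title, keywords)")

def cache_record(record: dict) -> None:
    db.execute("INSERT INTO assets VALUES (?, ?, ?)",
               (record["filename"], record["title"], record["keywords"]))
    db.commit()

def local_search(query: str) -> list:
    # Basic offline search; the data also stays portable to another provider.
    return db.execute("SELECT filename FROM assets WHERE assets MATCH ?",
                      (query,)).fetchall()
```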
If you wanted to get really creative, imagine the local server software running directly on your Windows or Mac client computer. Asset files could remain on your hard disk, while the heavy DAM machinery runs in the cloud. Or the “local server” could actually be running in a different cloud. With the protocol between DAM cloud and local server being open and well-documented, there could be multiple interoperable implementations. How about a distributed, “peer-to-peer” DAM with many local servers contributing to the same DAM cloud instance?
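Nobody has written that protocol down, of course, but just to sketch the surface it would have to cover (entirely hypothetical), any node implementing something like this interface could join the same DAM cloud instance:

```python
# Hypothetical sketch of the protocol surface between a local storage node
# and the DAM cloud; desktop, server or cloud-hosted implementations would
# all speak the same interface.
from abc import ABC, abstractmethod

class LocalStorageNode(ABC):
    @abstractmethod
    def deliver(self, asset_id: str) -> bytes:
        """Serve a file to clients on this node's network."""

    @abstractmethod
    def ingest(self, data: bytes, metadata: dict) -> str:
        """Store a new file, create renditions, return the file's URL."""

    @abstractmethod
    def push_metadata(self, record: dict) -> None:
        """Send a metadata record to the cloud DAM for indexing."""

    @abstractmethod
    def cache_metadata(self, record: dict) -> None:
        """Keep a local copy of the record for offline search."""
```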
I’m pretty sure someone’s already doing this. Any pointers? [Update: Jason Wehling of NetXposure writes that NetX can sync portions of the repository onto local drives or shares.]