Or: How I learned to stop worrying and love ‘the cloud’.
A great session that spoke to a lot of what’s going on in post production and computing at the moment.
We’ve had 100 years of filmmaking, 30 years of computing and a 10 year timeframe to integrate them together, with some successes and some failures. The inevitable total merger of these processes is that catch-all concept, ‘the cloud’. But in this context, the cloud really just means centralized networked storage, combined with the ability to bring creative and technical processes to that storage.
In this environment, post production is facing extreme competition from clients who can and will hire commodity machines to get work done at lower prices. The post facility needs to be smarter than a mere for-hire rental house, and needs better technology than the client can source for themselves.
Ramy Katrib from Digital Film Tree took up this theme by talking about OpenStack, an open source cloud project initiated by NASA and Rackspace. OpenStack makes it possible to set up an open cloud environment that is “like Amazon & Azure, but way cooler”. It lets you put storage and networking in the same place, is highly scalable, and avoids vendor lock-in. And because it is open source it engenders what Ramy called “co-opetition”: a platform used among highly competitive entities, who develop individually and share common technology at the same time.
In OpenStack, you can have many ‘stacks’ that are geographically distributed but appear as one storage block. It has the intelligence to distribute and sync files to all parts of the stack: in this case, the Production Office, Studio Archive, VFX and Editorial may all be separate stacks, and OpenStack works out which files are needed where.
Each stack can hold all the data, or only a security-restricted subset relevant to the work being done at that location.
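As a rough illustration of that idea (a toy sketch, not actual OpenStack code), each location can be given a filtered view of one shared catalog. The file names, tags and stack names below are all hypothetical:

```python
# Toy sketch (not OpenStack code): one logical storage block, where each
# geographically separate stack sees only the security-filtered subset
# relevant to the work being done at that location.

# Shared catalog of assets; tags are hypothetical security labels.
CATALOG = {
    "ep101_cut_v3.mov":    {"tags": {"editorial"}},
    "ep101_vfx_plate.exr": {"tags": {"vfx"}},
    "ep101_master.mxf":    {"tags": {"archive"}},
    "callsheet_day12.pdf": {"tags": {"production", "editorial"}},
}

# Which tags each stack is allowed to hold.
STACK_ACCESS = {
    "production_office": {"production"},
    "studio_archive":    {"archive"},
    "vfx":               {"vfx"},
    "editorial":         {"editorial"},
}

def visible_files(stack):
    """Return the subset of the shared catalog a given stack may hold."""
    allowed = STACK_ACCESS[stack]
    return sorted(
        name for name, meta in CATALOG.items()
        if meta["tags"] & allowed  # non-empty tag intersection
    )

print(visible_files("editorial"))
# prints: ['callsheet_day12.pdf', 'ep101_cut_v3.mov']
```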
Metadata is replicated through all parts of the OpenStack, and then heavier data is drip-fed based on bandwidth and delivery time. The files are allowed to form relationships with each other through the many users and applications interacting with them, building a ‘community of data’. The data is managed by rules and algorithms, and applications still need to be developed to support decision making on large data sets. An archive information management client I’ve been working with refers to the principle of ‘disposition’: determining what’s valuable in a near archive, what gets moved into deeper, longer-term archive, and what gets deleted and disposed of. In the future this process will be managed by humans presiding over smart algorithms.
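One way such a disposition rule might look, sketched in a few lines of Python. Everything here is invented for illustration: the metadata fields, the thresholds, and the three-tier outcome are assumptions, not any vendor’s actual policy engine:

```python
# Hypothetical disposition rules: decide whether an asset stays in near
# archive, moves to deep archive, or is deleted. Fields and thresholds
# are invented for illustration only.

from datetime import date

def disposition(asset, today=date(2013, 1, 1)):
    """Return 'near', 'deep', or 'delete' for an asset's metadata dict."""
    age_days = (today - asset["last_accessed"]).days
    if asset.get("is_master"):           # masters are never deleted
        return "near" if age_days < 365 else "deep"
    if age_days > 365 * 2:               # stale working files get disposed of
        return "delete"
    if age_days > 90:                    # cold but recent: deep archive
        return "deep"
    return "near"                        # actively used: keep close

asset = {"last_accessed": date(2012, 10, 1), "is_master": False}
print(disposition(asset))  # 92 days old -> 'deep'
```

The humans ‘presiding over smart algorithms’ come in at the edges: tuning the thresholds, and reviewing whatever the rules flag for deletion before anything is actually disposed of.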
Joe Beirne finished by speaking about a principle that is coming through education now with astonishing results: the flipped classroom. The students do the homework with teachers during the day, and then go home to listen to the lectures online. The same principle is now being applied to the post facility: only the really intensive data-crunching work, and work that requires calibrated environments, needs to be done indoors at the facility. Everything else can be done away from it, with distributed processes brought to the storage.
But even these calibrated environments can now be built wherever a client wants. Once the data is everywhere, school is out.