Living life on the edge
The rise of data consumption in the last decade has been meteoric. It almost seems redundant to mention, but we take for granted the fact that video enters our homes over digital networks at a constant pace, and it almost never skips a beat. Whether it’s live-streamed sports, on-demand TV, video conferencing or online gaming, we are becoming more and more demanding, not only in terms of quantity but in terms of quality.
Streaming one step ahead
It really wasn’t that long ago that ‘buffering’ a film was entirely normal; that gaming lag was accepted as a pitfall of the medium, and as the main throttle on the graphics and experience gaming companies could deliver; that a dropped conference call was just something you took in your stride.
But now, we want 24/7 perfection, and we want it cheap: we want high speeds straight into our houses and direct to our phones, and we want unlimited bandwidth from our services as they store and send tens of thousands of gigabytes out around the world every day.
So whilst the statement ‘we consume more online media than ever’ is obvious, what’s less obvious is how we can stretch an already stretched infrastructure to continue to provide more data, at a higher quality, with better reliability, and at increasingly lower costs.
And if that’s a question for consumers, then it must also be a question for content providers. And if that’s a question for content providers, then it must be a question for us. Because at MainStreaming, our key concern is empowering enterprises, media and gaming companies to provide their user base with exceptional video service. All day. Every day.
Approaching everything with fresh eyes
As you’d imagine then, a fundamental part of our drive is to push the capacities of the infrastructures we have at our disposal, not just tweaking and squeezing, but using innovation and creative engineering to fundamentally rethink the principles of data movement and how it’s achieved.
We’ve done this from the very beginning. That’s why we refer to our provision as an intelligent Media Delivery Platform (iMDP), an enhancement of a ‘Content Delivery Network’ (CDN): because we recognised right from the start that you can’t just keep doing things in the same old way and expect to improve results. Video has a specific ‘nature’ to its data form, and needs to be treated as such. By recognising this and developing systems accordingly, we are able to grant content providers an unrivalled delivery network that meets the needs of their customer base, keeps operational costs low, and even reduces environmental impact (a once overlooked area of computing that is now becoming increasingly important).
And a core part of this has been embracing ‘the edge’. But what is the edge? How does it work? And why does it matter? These are some of the things we wanted to cover in this month’s blog post.
What we talk about when we talk about the edge
Whether you’re from an engineering background or not, you’ll be familiar to a degree with the way that network structures move things around (even if it’s just from the lessons you had years ago in school, where you hooked lights and buzzers up in a circuit, and annoyed your teacher by making them buzz every seven seconds). Different network configurations bring advantages and disadvantages in relation to where they keep the data, where they move it to, and how they move it there – be it in parallel across lots of channels, or nice and orderly in a line.
And of course those structures have evolved. Where data was once centralised in physical data centres and had to travel significant distances to make it to your fingertips, increasingly there has been a move towards the outskirts of the network – the ‘edge’. At first, this meant simply storing data nearer to its destination, so it didn’t have so far to travel when it was called for. But increasingly, the edge represents a place where the very processing of data can occur – placing servers and storage devices at micro datacentres (sometimes called cloudlets), ISPs and base stations around the ‘edge’ of the network, so that more complex data activities can be undertaken without having to send information back to a central cloud in the sky. (Indeed, a term that has gained some traction is ‘fog’ computing, based on the idea that fog is just cloud we have near us on the ground…)
Bilal and Erbad refer to the edge as a topology structure which “extends the storage and computational resources to a very fine grained level to the edges of the network, where users can access these resources in a ‘fewer number of hops’, resulting in real-time interaction, low latency and instant response, location awareness and support for mobility, to name but a few potential benefits”.
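To make that a little more concrete, here’s a minimal sketch in Python (the node names, hop counts and cached-content flags are invented purely for illustration, not taken from any real deployment) of the routing decision an edge-aware platform is effectively making for every request: serve it from the nearest node that already has the content, and fall back to the distant central origin only when nothing closer can help.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str          # hypothetical node identifier
    hops: int          # network hops between the viewer and this node
    has_content: bool  # whether the requested segment is cached here

def pick_serving_node(edge_nodes: list[Node], origin: Node) -> Node:
    """Return the closest node (fewest hops) able to serve the request,
    falling back to the central origin when no edge cache has the content."""
    candidates = [n for n in edge_nodes if n.has_content]
    return min(candidates, key=lambda n: n.hops) if candidates else origin

# Illustrative topology: two edge caches near the viewer, one distant origin.
edges = [
    Node("edge-milan", hops=2, has_content=True),
    Node("edge-frankfurt", hops=5, has_content=False),
]
origin = Node("origin-datacentre", hops=14, has_content=True)

print(pick_serving_node(edges, origin).name)  # -> edge-milan
```

The ‘fewer number of hops’ Bilal and Erbad describe is exactly what the comparison over hop counts captures here; everything else an edge platform does is, in a sense, sophistication layered on top of that basic preference for proximity.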
What are the benefits of the edge?
Speed matters
So the hint as to why edge computing matters – and particularly why it matters to us – is already there in Bilal and Erbad’s description. It results in data movement which is agile, responsive and, most of all… fast.
That’s beneficial in any context, but it’s particularly beneficial for demanding applications where there isn’t just a one-off data dump on the user, but a near-continuous back and forth of requests and responses. Whilst ‘conventional’ video streaming still represents the overwhelming bulk of the market, we now increasingly see moves from single view to multi-view to 360 view, from 2-D to 3-D, and other more demanding interactions. That heavy load simply cannot afford to trundle backwards and forwards to a central data centre several times for every moment of the experience.
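A back-of-the-envelope example makes the point (the round-trip times below are assumptions for illustration, not measurements of any particular network): if an interactive session needs a handful of request/response exchanges per update, the distance to the serving node dominates how responsive it can ever feel.

```python
# Rough latency budget for an interactive video or gaming session.
# The round-trip times (RTTs) are illustrative assumptions, not measurements.
RTT_ORIGIN_MS = 120       # viewer <-> distant central data centre
RTT_EDGE_MS = 15          # viewer <-> nearby edge node
EXCHANGES_PER_UPDATE = 3  # request/response round trips per interaction

def update_latency(rtt_ms: float, exchanges: int = EXCHANGES_PER_UPDATE) -> float:
    """Network latency added to a single interactive update."""
    return rtt_ms * exchanges

print(f"Via central origin: {update_latency(RTT_ORIGIN_MS):.0f} ms per update")  # 360 ms
print(f"Via edge node:      {update_latency(RTT_EDGE_MS):.0f} ms per update")    #  45 ms
```

Under those assumed numbers, the origin path lands well beyond the roughly 100 ms usually quoted as the limit for an interaction to feel instantaneous, while the edge path sits comfortably inside it.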
Moreover, across gaming, consumer content creation and live streaming, the idea of a direct, linear relationship between content provider and viewer has gone out of the window – the data flow is now often mediated by a crowd-sourced content model involving a far more complex range of moving parts. Leveraging edge-based network structures facilitates real-time response and reduces bandwidth consumption. In essence, it allows us to push the complexity and quality of what can be provided to users in real time over global networks.
Quality of Experience matters even more
Even for more traditional forms of video streaming, though, using an edge-based model provides a significant advantage: it allows us to see what the viewer is seeing, and thus to make sure it’s right. Once upon a time, Quality of Service (QoS) was the only metric that could really be used to ensure (well, hope for) the delivery of excellence to the customer. You tracked the bits and bytes going out, and if they were correct, you assumed they must be producing the desired experience on the customer’s screen.
But of course this wasn’t always the case – sometimes things went wrong.
Now though, with edge-based topologies, it’s possible to introduce Quality of Experience (QoE) as a mechanism for checking… well, the experience rather than the service. A constant dialogue of metrics between content provider and content consumer means that the provider can be much more sure that the consumer is receiving what they want to receive and what they expect to receive. And in an increasingly competitive market with next to no switching costs for a consumer, that’s truly important. Indeed, it’s a central source of competitive advantage.
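To give a flavour of what that constant dialogue of metrics might look like in practice, here’s a bare-bones sketch (the field names and thresholds are illustrative assumptions, not MainStreaming’s actual telemetry): the player periodically reports experience-level measurements such as startup time, rebuffering and delivered bitrate, and the provider reacts when the experience, rather than the raw delivery, starts to slip.

```python
from dataclasses import dataclass

@dataclass
class QoEBeacon:
    """Experience metrics reported back by the player; fields are illustrative."""
    startup_time_s: float   # time from pressing play to the first frame
    rebuffer_ratio: float   # fraction of watch time spent stalled
    avg_bitrate_kbps: int   # bitrate actually rendered on the viewer's screen
    dropped_frames: int

def experience_ok(b: QoEBeacon) -> bool:
    """Crude QoE check; the thresholds are assumptions, not an industry standard."""
    return (
        b.startup_time_s < 2.0
        and b.rebuffer_ratio < 0.01
        and b.avg_bitrate_kbps >= 3000
    )

beacon = QoEBeacon(startup_time_s=1.3, rebuffer_ratio=0.004,
                   avg_bitrate_kbps=4500, dropped_frames=2)

if experience_ok(beacon):
    print("Viewer experience within target")
else:
    # e.g. steer the session to a different edge node or adjust the bitrate ladder
    print("QoE degraded: investigate the delivery path")
```

The key shift from QoS to QoE is visible in where the numbers come from: not the bits and bytes leaving the server, but what actually reached, and rendered on, the viewer’s screen.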
By embracing edge technologies (when many providers are still using legacy topologies and network infrastructure components), MainStreaming sets itself apart in what it facilitates for content providers and gaming companies.
Where will the edge take us?
Gartner identifies that whilst only roughly 10% of processing occurs at the edge at the moment, by 2025 this is likely to be as much as 75% – a fact which speaks to the self-evident benefits of the topology. Moreover, Forrester recognises that we’ll see four distinct categories of activity move towards the edge: engagement, operations, enterprise and provider – and the edge-based nature of all of these will be increasingly impacted by the rise of 5G.
But ‘the edge’ isn’t just something a company can turn around and say ‘oh, OK, let’s move to there’. It’s not like packing up your apartment and moving to a house in the country. It involves a fundamental restructure of network infrastructure elements and operations. It requires the leveraging of a multitude of moving parts and entities, and the harnessing of powerful software and management tools to coordinate all of these. Organisations seeking to move their operations to the edge are going to face a potentially difficult time of transition.
Which is why, at MainStreaming, we’re feeling pretty smug. By seeing so early in the game how fundamental edge computing would be to the provision of video distribution (and how meteoric the rise in video demand would be), we were able to secure both the logistical technologies and tacit knowledge needed to lead the way in the field. Our iMDP – which can be customised to make use of private, public and hybrid networks and edge points according to client need – has already helped to deliver exceptional quality live-stream and gaming services to millions of users across the globe.
That’s why we’re not too worried that the world is catching on to our little edge-based secret. Indeed, we’re glad; improvements in the way the industry approaches network management are good for everyone. They push the progress we can make collectively, and of course, at MainStreaming, they just give us more motivation to stay one step ahead, using our decades of experience in delivery networks and our innovation mindset to drive forwards and continue to help our clients deliver video excellence to their customers.