Wednesday, 26 September 2012

Responsive Images: No solution yet

I've just finished reading what is, essentially, a very good A List Apart article on the new(ish) problems of high resolution displays entering the marketplace (such as the new iPad).

It's good because it tells people, yet again, what they should be practising but so many aren't: optimise your assets. However, it's not great, because it's the same conversation we've been having for over a decade now. The problems of the turn of the century, when people would "go crazy" and design for 56k modem users, leaving those of us on 28.8k connections languishing through ever longer download times, are the problems of now...but on a more global scale.

I had high hopes...

A visitor to your website might be using a high-PPI tablet or phone from the comfort of her couch, or from the middle of the Arizona desert. Likewise, those brand-new Retina Macbook Pros could be connected to the internet via Google Fiber, or tethered to a 3G hotspot in an airport. We must be careful about our assumptions regarding pixels and bandwidth.

...but ultimately this avenue simply wasn't pursued. And thus we have the problem with discussion of proper responsive web design today: it focuses *only* on the front-end development team.

I love front-end development; the creativity and freedom to express yourself is fantastic...but I also love that when there is a problem front-end development simply can't address, I can fall back on my server-side knowledge.

So when I see this...

Other attempts exist, such as bandwidth detection, cookie setting, server-side detection, or a mixture of all three. As much as I’d like robots to solve my problems, these solutions have a higher barrier to entry for your average web developer. The major pain point with all of them is that they introduce server/cookie dependencies, which have been historically troublesome.

We need a purely front-end solution to high resolution images.

...I just have to groan in despair. (emphasis is mine)

You see, the problem of serving the right image size for the right device is indeed a front-end problem. You measure the size of the screen you've got, you take the resolution into account, and you put the right image in to keep everything looking pretty and engaging.

Bandwidth though, bandwidth...that is a server-side problem (for now), and one we have ignored for too long. The idea that you can have a purely front-end solution to the paradoxes of bandwidth management is unrealistic right now, and may always be. I guess that's why the rest of the article ignored the issue.

Take, for example, your "average web developer", who is on holiday in the moors. They have, at best, a 2G/EDGE connection to the internet over the mobile networks, but their company has just called them to sort out a critical problem with a website.

They pull out the (hypothetical, but soon to be commonplace) large-screen, 200DPI laptop they develop on, knowing this kind of call was possible. Your website, built with a purely front-end solution, loads some catastrophically large images that take ages, and cost loads, to download and assess.

On the flip side, you have someone with an old iPhone, sitting at home on their WiFi, with bandwidth far exceeding anything you'd need to stream your favourite episode of Breaking Bad without a single buffering hiccup, and they are being served heavily optimised images just in case they have little network capability.

Each person receives the opposite of the experience they deserve, and all because our purely front-end approach doesn't take into account the main factor we're trying to be responsible about.

The solution?

Browsers need to take more responsibility. Our only communication with the user's situation is through the browser. We are sitting in a room with an earpiece on that is connected to a single person. That person can talk to anyone they want, but we can only listen to, and speak to, them.

The browsers need to start taking information from the user's machine, and from their very browsing, to generate reliable, medium-term data on that person's bandwidth. Access to the device's inner workings, complemented by short-term calculations of something as simple as an ongoing average download speed across all sites, would provide our websites with the information they need to do right by the user.
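The kind of running estimate described above could be as simple as an exponentially weighted moving average of observed transfer speeds. A minimal sketch, assuming a browser-internal component (the class name, smoothing factor, and KB/s units are my own invention, purely for illustration):

```python
class BandwidthEstimator:
    """Keeps an exponentially weighted moving average of observed
    download speeds, in KB/s, across page loads."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # weight given to the newest sample
        self.estimate = None    # KB/s; None until we have a sample

    def record_download(self, size_kb, seconds):
        sample = size_kb / seconds
        if self.estimate is None:
            self.estimate = sample
        else:
            # Smooth the new sample into the running average so one
            # fast (or slow) transfer doesn't swing the estimate wildly.
            self.estimate = (self.alpha * sample
                             + (1 - self.alpha) * self.estimate)
        return self.estimate


est = BandwidthEstimator()
est.record_download(500, 2.0)   # 250 KB/s observed
est.record_download(300, 3.0)   # 100 KB/s observed
print(round(est.estimate))      # prints 220: a blend of the two samples
```

The smoothing is the point: a single fast transfer on WiFi, or one stalled request in the moors, shouldn't whipsaw the figure the browser reports.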

This could be as simple as sending that information along with the data sent to the server as standard (in an HTTP request). A numeric value for the current likely download speed would let server-side developers take that value and guide what is served to the user, whether by managing which CSS files are delivered or, more directly, through a server-side image-processing solution such as Adaptive Images.
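To make that concrete, here is a rough sketch of what the server side might do with such a value. The "X-Bandwidth" header name is entirely hypothetical (no such header exists in HTTP today), and the 3-second budget is just an example threshold:

```python
def pick_image(headers, variants):
    """Choose the largest image variant likely to load in under
    ~3 seconds at the client's reported bandwidth.

    headers:  dict of request headers
    variants: list of (filename, size_kb), ordered small to large
    """
    try:
        # Hypothetical header: estimated download speed in KB/s.
        speed_kbs = float(headers.get("X-Bandwidth", ""))
    except ValueError:
        return variants[0][0]  # no usable hint: serve the safe default

    budget_kb = speed_kbs * 3  # what can arrive in roughly 3 seconds
    chosen = variants[0][0]
    for filename, size_kb in variants:
        if size_kb <= budget_kb:
            chosen = filename  # keep upgrading while we stay in budget
    return chosen


variants = [("small-1.jpg", 40), ("med-1.jpg", 120), ("large-1.jpg", 400)]
print(pick_image({"X-Bandwidth": "100"}, variants))  # prints med-1.jpg
```

Note the fallback: a client that sends no hint gets the smallest image, which is exactly the "safe default" behaviour the markup examples below also rely on.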

But it's not impossible to imagine that the browser developers themselves could go one step further. If we are to implement the srcset or picture aspects of HTML5, then why not future-proof the web and consider that a value for bandwidth might also be included?

If a media-query-style selector were available for bandwidth, such as "min-bandwidth" and "max-bandwidth"...while dealing with the potential problem of that bandwidth value changing (and thus changing content)...we could add better granularity to our front-end image requests.

Looking at the Apple-pushed srcset attribute, and using a value that represents the current average download speed in KB/s, we could see this (taken from the A List Apart article's example and modified)...

<img alt="Cat Dancing" src="small-1.jpg"
     srcset="small-2.jpg 2x 50b, // this is pretty cool
             large-1.jpg 100w 100b, // meh
             large-2.jpg 100w 2x 400b // meh@2x
">


You'll see that I'm following the same nonsensical syntax that has been proposed, by saying that "b" represents the bandwidth availability of the client, just as "w" represents the pixel width.

If we were to look at the "A List Apart" example, that incorporates the picture element, it may look more like this:

<picture alt="Cat Dancing">
  <source media="(min-width: 45em) and (min-bandwidth: 400kb)" srcset="large-1.jpg 1x, large-2.jpg 2x">
  <source media="(min-width: 45em) and (min-bandwidth: 200kb)" srcset="large-1.jpg 1x">
  <source media="(min-width: 18em) and (min-bandwidth: 100kb)" srcset="med-1.jpg 1x, med-2.jpg 2x">
  <source media="(min-width: 18em) and (min-bandwidth: 50kb)" srcset="med-1.jpg 1x">
  <source media="(min-bandwidth: 50kb)" srcset="small-1.jpg 1x, small-2.jpg 2x">
  <img src="small-1.jpg">
</picture>


This would solve a whole load of problems, aside from being even more verbose than the already verbose suggestion! It would start by looking for a small, generic image that every device can use. If the screen is a high-resolution display it'll go for the higher-resolution image, but only if the bandwidth measurements are such that it is responsible to do so.

The end result is that the developer with the large laptop in the hills will get low-resolution imagery, but it'll be fast...while the person at home will get crystal-clear, high-quality imagery, also delivered fast.

By being more holistic in how we make decisions, whether front-end or server-side, we can deliver better experiences. By adding the ability to control content by an average bandwidth value, ideally at the start of a user's browsing session, we can control load times. We can ensure that no image we serve is likely to take more than 3 seconds to load, regardless of the quality being delivered, because we are deciding on that quality based on the user's bandwidth limitations (at a measured 50KB/s, for example, that 3-second budget caps any image at roughly 150KB).

EDIT: There is a further look at this, and how much broader we can go beyond screen dimensions, on Matthew Palmer's site.

2 comments:

  1. Hey, we in the Responsive Images Community Group wrote up some thoughts on bandwidth management here: http://www.w3.org/community/respimg/2012/06/18/florians-compromise/

    The long and short is that allowing authors to selectively apply this information may not be ideal—we’d be adding an extra step between the bandwidth info the browser has at hand and the preferences of the user. Some may not use it; some may decide that their super sharp images are worth the bandwidth hit and draw the lines in the wrong places. Not ideal.

    We’re codifying in the draft spec (http://dvcs.w3.org/hg/html-proposals/raw-file/tip/responsive-images/responsive-images.html) that the `srcset` attribute should act as more of a “suggestion” rather than the “absolute” represented by media queries. This would allow the UA—based on the info it has at hand (including user preferences)—to determine which source is most appropriate to the user’s bandwidth as well as screen resolution. Ideally, I envision in-browser preferences along the lines of “always download low resolution,” “always download high-resolution images,” and “download high-resolution images as bandwidth permits”—and perhaps that threshold could be a setting, as well. This would ensure a consistent experience from site to site, and leave control in the hands of users and the UAs.

    I’d love to hear your thoughts on the above!
    Mat Marquis

    ReplyDelete
    Replies
    1. Hmm, interesting Mat!

I definitely see the benefits of using srcset as a suggestion, and that bandwidth and resolution are somewhat linked...but I don't feel it deals with the issue of image size versus screen size adequately. If I'm serving something to a large-screen TV my images have to be high-res, but they can be medium resolution on a laptop.

I understand that the browser can factor the dimensions of the display into its assessment of whether to serve a larger or smaller image...but as it can't know the dimensions of the file, it won't know for certain which is best to serve.

      Without the absolute of the media query there just isn't enough control for web developers to perform correct responsive design.

      Then there is the further question of who knows best. Again, with bandwidth, how would the browser know what the file sizes of the images are in relation to the bandwidth suggestions? I guess it would ping the server for the file information of all referenced files before deciding which to download?

There is an issue, of course, with ceding control to the developers, and the potential abuse that could follow...but let's face it, there are all kinds of abuses that bad developers can cause! The question for responsible developers, though, is: isn't a granularity they can control more accurate than asking a browser to make assumptions?

As long as the browser isn't cancelling an ongoing download before silently loading a different image in the background, and is acting intelligently enough to note high variation in latency, this shouldn't cause any problem for the user's experience, in my mind.

That said, the idea of a different image format that caters for multiple bandwidth layers is interesting; however, it feels to me like something even tougher for the browsers to decipher accurately. Maybe I'm just being pessimistic!

      Delete

Got something to say about my post? I'd love to hear it!

Try to keep it civil; I don't delete comments unless obliged to, or unless I feel the thread is getting too out of hand, so don't make me do it.