In case you missed the big announcement... Flash 10.1 is the future of everything. Only, there was no mention of set-top boxes. Then, a few days later, Nvidia announced they have stopped making chipsets for Intel processors. Opinion seems split on Flash 10 for DTV. Some believe it will be on the market in 2011; others (including one DTV chip manufacturer) think 2012.
What a mess.
Sunday, October 11, 2009
Thursday, October 8, 2009
OpenGL ES 2.0 - What has been cut out

OpenGL is changing with each release, and it's a busy time for OpenGL programmers trying to keep up. OpenGL ES 1.1 and OpenGL ES 2.0 are virtually different languages: the approach programmers must take to getting 3D on the screen changes dramatically. The best-known change is the addition of shaders (programmable parts of the graphics pipeline), but what's been left out is almost as dramatic:
- Immediate mode (glBegin/End and glVertex)
- Per primitive attributes like glColor, glNormal
- Explicit image drawing (glDrawPixels, glCopyPixels etc.)
- Antialiased points and lines
- The matrix stack and commands for co-ordinate transforms (eg glRotate)
- Non power of 2 textures with mip-mapping
- 3D textures
- Many data commands like glNormalPointer, glColorPointer, client state etc.
- Material and light mode
- Fog
- Texture environment (no fixed function blending)
In ES 2.0, programmers must instead:
- Provide their own maths libraries
- Use arrays for declaring 3d data
- Write shaders and pass in their own data for transforms and lighting and materials
- Handle images via textures
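Since the matrix stack and commands like glRotate are gone, applications have to build their own transform maths. A minimal sketch of what such a helper might look like (in Python for readability; real ES 2.0 code would be C, passing the matrix to a shader uniform, and these function names are my own invention):

```python
import math

def rotation_z(degrees):
    """Build a 4x4 rotation matrix about the Z axis, replacing
    what glRotatef(angle, 0, 0, 1) did in the fixed pipeline."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return [
        [c, -s, 0.0, 0.0],
        [s,  c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform(m, v):
    """Apply a 4x4 matrix to a 4-component vector (row-by-row dot products)."""
    return [sum(m[row][k] * v[k] for k in range(4)) for row in range(4)]
```

Rotating the point (1, 0, 0, 1) by 90 degrees about Z yields approximately (0, 1, 0, 1), as expected. In a real application the resulting matrix would be uploaded with glUniformMatrix4fv and applied in the vertex shader.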
Microsoft Set Top Box
I'm not convinced by the arguments yet, but this article makes an interesting read nonetheless. Could it be that Microsoft plans a gaming set-top box? The key patent contains the following claims:
1. A system for integrating media on a gaming console, comprising:a first subsystem for displaying a dashboard having a plurality of media selections, wherein said dashboard is native to a gaming console and wherein at least one of said media selections is a television selection; and a second subsystem for providing users with the option of switching back and forth between said television selection and other media selections in said plurality of media selections.
2. The system according to claim 1, wherein said television selection is branded with a logo of a service provider providing content to said gaming console.
3. The system according to claim 1, further comprising a third subsystem for providing users with the option of selecting starting of said gaming console as a set-top box.
4. The system according to claim 1, further comprising a third subsystem for providing users with the option of selecting remote starting of said gaming console as a set-top using a gaming controller.
The patent goes on to claim DVR features. So are we looking at a set-top box with DVR based on the Xbox 360 console?
Notice the magic-wand trademark they hold too. I previously blogged about such a device, which is very compelling and fun in use.
Tuesday, October 6, 2009
Graphics in MHP Tutorial
If you are interested in MHP or GEM graphics, there is already a great tutorial on the web. I couldn't write a better one, so I'll just provide a link:
Introduction to MHP Graphics
My only caveat is that the part on colour is a little out of date now. Most receivers support far more than 256 colours these days.
I should add that the article comes from a book called Interactive TV Standards: A guide to MHP, OCAP and JavaTV.
Tutorial: Double buffering (and more)
Buffers are areas of memory reserved specifically for rendering graphics. The number of buffers affects what the graphics subsystem can do well, and it also carries a memory cost. In general, the more buffers, the better.
Single Buffering

One buffer is used for drawing and also used for display simultaneously.
There is usually a problem with flickering. This happens if rendering is only partially complete when frame flyback (the process of copying the buffer to the output screen) occurs. We see half of one frame and half of another and, over time, this looks like a flicker in a small area, or a band moving down the screen if the whole screen is being redrawn.
This is really only useful on software-based SD projects with little or no animation, where very few pixels are rendered each frame and so very little flicker is evident. If the drawing process can wait for frame flyback, render everything within the next 50th of a second, and then wait again for frame flyback, no flicker will be seen at all.
Graphics libraries go to a lot of trouble to identify the minimal area of the screen that needs to be redrawn. Remember that moving one icon across an image often requires both the icon and the image to be redrawn.
Double Buffering

Double buffering (or better) is used in all PC games, in simulations, and anywhere that most of the visual scene moves. One buffer is used for display (the front buffer) while the other is drawn to (the back buffer). This means there is no flicker: the buffer being displayed is always complete and is never drawn to. When drawing is finished, the buffers are swapped (usually by exchanging a pointer, not copying). However, if the swap happens at an arbitrary time, tearing can still occur (half of one buffer and then half of the other is copied to the output), so we usually lock the swapping of buffers to vsync. That introduces the problem of blocking. Imagine we must redisplay the graphics every 1/50th of a second, i.e. every 20ms. If we take 21ms to actually render our scene, we miss the 20ms deadline, and since we do not wish to risk tearing, we wait a further 19ms before swapping. Our rendering rate drops from 50fps to 25fps, because every frame now takes 40ms to render and then block.
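The arithmetic above is worth checking. With swaps locked to vsync, the effective frame time rounds up to a whole number of vsync intervals, which a small sketch makes concrete (Python, my own illustration):

```python
import math

def effective_fps(render_ms, vsync_ms=20.0):
    """With buffer swaps locked to vsync, a frame that misses the
    deadline must wait for the next vsync boundary, so the frame
    time rounds up to a whole number of vsync intervals."""
    intervals = math.ceil(render_ms / vsync_ms)
    return 1000.0 / (intervals * vsync_ms)
```

With a 50Hz display, effective_fps(19) gives 50fps, but effective_fps(21) gives 25fps: a 2ms increase in render time halves the frame rate, which is exactly the cliff described above.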
Triple Buffering

To avoid this drop in frame rate, a third buffer is used. Here, one buffer is being displayed, one is ready for display, and one is being drawn to. The two buffers not being displayed can be swapped at any time with no visible effect, which allows the graphics process to render as fast as it can. On modern PC graphics cards, "double buffering" often really implements triple buffering in the background by default, without the user or coder knowing; the control dialogues that come with the card may expose this, but they may not.
N-buffering
Of little interest in DTV, but included for completeness: triple buffering can be extended to cushion buffering, a logical extension that uses N buffers rather than just 3:
One buffer being displayed, N-2 buffers ready for display, one buffer being drawn to.
The idea is that some frames of animation are harder to draw than others, and by amortising the cost across many buffers, the hard frames no longer slow the 3D down. The problem with this technique is lag, and it has not become popular.
On Memory
So the preferred technique is triple buffering. However, the cost is memory. Example: an HD 32-bit buffer (RGBA, 8 bits each) is 1920x1080x4 bytes = 7.9MB. Triple buffering would therefore need 7.9 x 3 = 24MB just for graphics buffers. In a unified memory architecture (where the graphics chip uses main memory for rendering), that's 24MB of DRAM gone just for the graphics architecture to operate.
This can be cut down by using a pre-multiplied alpha format (i.e. 24 bits per pixel), or even a limited 565 colour range (green usually gets more bits because the human eye is more sensitive to green), and by reducing to double buffering:
So a double-buffered, limited-colour buffer with pre-multiplied alpha could be roughly 1920x1080x2x2 = 8MB. Remember, this is just for the graphics architecture to render; it has nothing to do with user-defined graphics yet.
That's without a z-buffer, of course, but that's for another tutorial. Suffice to say in this context that a full z-buffer may require 1920x1080x3 bytes (a 24-bit z-buffer). Only one is required regardless of how many drawing buffers are used, so that's an additional 6MB. However, many of the architectures in DTV devices are smarter than this and use only small amounts of memory for z-buffering.
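The figures above are easy to reproduce. A quick sketch (Python, using binary megabytes):

```python
def buffer_mb(width, height, bytes_per_pixel, buffers):
    """Total framebuffer memory in MiB for a given pixel format
    and number of buffers."""
    return width * height * bytes_per_pixel * buffers / (1024 * 1024)

# 32-bit RGBA, triple-buffered HD: about 23.7 MiB
triple = buffer_mb(1920, 1080, 4, 3)

# 16-bit (565) colour, double-buffered HD: about 7.9 MiB
double_565 = buffer_mb(1920, 1080, 2, 2)
```

A single 32-bit HD buffer comes out at about 7.9 MiB, so the "24MB for triple buffering" and "8MB for double-buffered 565" figures in the text are consistent.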
Conclusion
Triple buffering is best for performance, but double buffering is a good performance/memory balance. Single buffering is only for low-end software rendering on SD devices. Unfortunately, these buffers use large amounts of memory and will increase the BOM of set-top boxes accordingly.
Monday, October 5, 2009
Fonts for TV
If you are a designer, you take fonts very seriously. It's hard for most techies to understand this and get their heads around it. Don't believe me? Look here, here, or here to see passion about fonts. Or, more entertainingly:
I suspect as we go HD and graphics become more important to our lives, fonts will be too. Ten years of staring at Tiresias may be enough for anyone. Here it is, in all its glory:

Bitstream, who offer Tiresias screen font, also offer other TV fonts.
Another company offering TV fonts is Ascender. One font of particular note is their Mayberry(TM) font, a one-for-one replacement for Tiresias. The image below illustrates how close the fonts are visually:

The top font is Tiresias; the one below is Mayberry. The most obvious difference is how open the fonts are (compare, for example, the letter "S"). Open fonts render much better at lower resolutions, something important for TV. Ascender also offer fonts for all sorts of TV functions and regions, such as these teletext fonts.
That said, a big part of TV font design has historically been geared towards low-resolution displays. In HDTV, I suspect a wider range of fonts is acceptable visually. So long as it's not Comic Sans, of course ...
Thursday, October 1, 2009
Tutorial: Antialiasing for DTV
The point of this blog was always to help bring people who are experts in DTV up to speed in the emerging graphics technologies. This mini tutorial looks at antialiasing and aliasing.
There are many types of aliasing, but the most easily understood is that seen when drawing lines. The screen is composed of pixels. A perfect line cannot be drawn; instead, one is approximated by filling in pixels. Close up, it might look something like this:
This is aliasing. It's the result of sampling something continuous (the line equation) into discrete digital samples (the pixels). The effect ranges from unnoticeable to extremely annoying, especially if the line moves. When the line moves slowly, crawl occurs as first one set of pixels and then another is highlighted. The line is meant to be moving slowly and smoothly, but the pixels switch suddenly. Over many frames of animation, this makes the line look like it is changing shape and crawling across the screen.
Antialiasing can be seen as an attempt to smooth out the digital sampling and have less harsh edges. Here is an antialiased version:
Pixels around the line are measured for how close they are to the ideal line, and their colour is chosen depending on that distance. It looks like the line has simply been blurred, but it hasn't: blurring would not use the distance from the line equation. The antialiasing makes all the difference visually.
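The distance idea can be sketched in a few lines (Python, my own simplification of what real rasterisers do): each pixel's alpha falls off with its perpendicular distance from the ideal line.

```python
def coverage(px, py, a, b, c):
    """Alpha for a pixel centred at (px, py) against the line
    a*x + b*y + c = 0, where (a, b) is a unit normal. Full
    intensity on the line, fading linearly to zero one pixel
    away -- a crude distance-based antialiasing filter."""
    dist = abs(a * px + b * py + c)
    return max(0.0, 1.0 - dist)

# For the horizontal line y = 2 (a=0, b=1, c=-2):
# a pixel centred on the line gets alpha 1.0,
# a pixel half a pixel off gets alpha 0.5,
# and pixels a full pixel away or more get 0.0.
```

Real hardware uses better falloff curves and filter widths than this linear ramp, but the principle is the same: colour is a function of distance, not a blur of neighbouring pixels.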
Fortunately, 3D chips have antialiasing built in. This is all very well, but TVs have digital filters in their own rendering pipelines that process the output of any set-top box. These filters blur or sharpen the image AFTER the graphics output. In experiments I've done, the filters within a single TV can make a huge difference, and the variation between different TVs can make any attempt to compensate useless.
On LCD television this situation is even more interesting as each pixel is composed of three different coloured bulbs in a grid. Something like this for a single pixel row:
Now it's possible to use each coloured bulb and measure its distance from the mathematically perfect line. This is "sub-pixel" antialiasing. It gives us more antialiasing and, in effect, less blurring. The result, for black text on a white background, looks like this:
It's worth taking a moment to see that the left of the letter is more red and the right more blue. Looking at the pixel grid and distances, you might think it would be the other way round, but this is black drawn on white, so things further from the line or letter are brighter, and hence red is brighter on the left. There is a great article on this at java.net, from where I blatantly stole the images.
So in antialiasing we might deal with: pixels and digital sampling, LCD bulb colours and layout. There is no easy way to deal with TV filters. The variation is simply too high. Yet there are other kinds of aliasing we can hope to deal with directly.
Temporal aliasing occurs when we sample a continuous animation. Imagine an image moving from one side of the screen to the other. It takes one second. At 50 fps the image will digitally be sampled at 50 locations as it moves from one side of the screen to the other. At 5 fps only five locations. This is a form of aliasing and explains why high frames per second are critical for rendering smooth graphics.
We can do better. Antialiasing of motion is called motion blur. It attempts to add graphics in the direction of motion. Here is a photograph of a pool table. Because the shutter of the camera remains open for a short time, the balls move across the image and leave a motion trail:
A single image again looks blurred but when seen as an animation, it all makes much more visual sense. This is computationally very expensive to render in a user interface, however some approximations can be done such as provided in this Flash tool. Disney artists long ago used techniques to approximate motion blur (reminder: temporal antialiasing) and actually deformed objects as they animate:

Notice the shape deforming on the ball, particularly just before it hits the ground. This is a crude, but highly effective, version of motion blur. The same technique can be applied to images as they move across the screen: they can be stretched slightly during fast motion to suggest motion blur.
Image sampling can also suffer from aliasing. There is considerable hardware built into modern 3D chips to avoid this. Mip-mapping and anisotropic texture filtering are used to avoid aliasing when scaling images. A blitter also uses a many-tap (texture accesses per pixel) filter to draw nicely scaled images without aliasing. The idea in all cases is simple: when an image is scaled or rotated, one pixel on the screen does not correspond to a single pixel in the source image. The colours of many source pixels are needed to draw one pixel on the screen, and those source pixels are averaged depending on distance (compare with the line).
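The averaging idea above can be sketched as the simplest possible case, a 2x box filter, where each destination pixel is the mean of the four source pixels it covers (Python, my own illustration; real hardware uses weighted, mip-mapped variants of this):

```python
def downscale_2x(img):
    """Halve a greyscale image (a list of equal-length rows) by
    averaging each 2x2 block of source pixels into one destination
    pixel -- many source pixels feeding one output pixel."""
    h, w = len(img), len(img[0])
    return [
        [(img[y][x] + img[y][x + 1] +
          img[y + 1][x] + img[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]
```

A 2x2 block of values 0, 4, 8, 4 collapses to a single pixel of 4.0 rather than naively picking one of the four, which is exactly what point sampling (and hence aliasing) would do.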
So, antialiasing is a critical technique for producing compelling graphics. It has spatial and temporal forms. Some spatial antialiasing is done for us by the 3D hardware and blitter, but it is then often ruined by the filters on TVs. Temporal antialiasing is usually achieved by rendering more frames per second; however, it's also possible to deform objects in the direction of motion, as cartoon animators have done for nearly a century.