Video Dithering Using the Sierra Lite Algorithm
The core provides post-processing for a video signal.
It reduces the color width while dithering the image to preserve the impression of more colors than are actually present.
This reduces banding artifacts and improves the perceived quality for the viewer.
The method used is "Sierra Lite" error diffusion.
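Sierra Lite quantizes each pixel to the reduced color width and diffuses the quantization error to three neighbours: 2/4 to the pixel on the right, 1/4 below-left, and 1/4 below. A minimal software sketch of the algorithm for a single color channel (this is an illustrative model, not the HDL core; function and parameter names are my own):

```python
def sierra_lite(img, out_bits, in_bits=8):
    """Dither a 2D list of single-channel pixel values from in_bits
    down to out_bits using Sierra Lite error diffusion.
    Software model for illustration, not the HDL implementation."""
    h, w = len(img), len(img[0])
    # work on a float copy so diffused error can accumulate
    buf = [[float(v) for v in row] for row in img]
    levels = (1 << out_bits) - 1
    scale = (1 << in_bits) - 1
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            # quantize to the reduced color width
            q = min(max(round(old * levels / scale), 0), levels)
            out[y][x] = q
            # error in the input scale
            err = old - q * scale / levels
            # Sierra Lite weights: 2/4 right, 1/4 below-left, 1/4 below
            if x + 1 < w:
                buf[y][x + 1] += err * 2 / 4
            if y + 1 < h:
                if x - 1 >= 0:
                    buf[y + 1][x - 1] += err / 4
                buf[y + 1][x] += err / 4
    return out
```

A flat mid-gray input, for example, comes out as a mix of two adjacent output levels whose spatial average approximates the original value, which is what hides the banding.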
The core is configurable (at compile/synthesis time) in:
- input color width
- output color width
It uses very few resources.
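Without dithering, reducing the color width just keeps the most significant output bits; the dropped low bits are the quantization error that the dithering then diffuses. A small sketch of that relationship (parameter names are my own, not the core's generics):

```python
def reduce_width(value, in_bits=8, out_bits=3):
    """Keep the top out_bits of an in_bits sample; return the
    reduced value and the quantization error the dropped bits leave
    behind (illustration only, not the core's implementation)."""
    shift = in_bits - out_bits
    reduced = value >> shift             # truncate to the output width
    error = value - (reduced << shift)   # the dropped low bits
    return reduced, error

# e.g. reduce_width(0b10110110) -> (0b101, 0b10110)
```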
Typical Full HD dithering (1920*1080 @ 60 Hz, 6 bit output from an 8 bit source),
as used with many LCD displays, is possible on a Cyclone II:
- 120 LEs
- 8 kbit memory (2 M4K blocks)
- timing met (~125 MHz required, ~140 MHz possible)
Tested in simulation:
BMP read -> processing -> written back to BMP
Tested on hardware using an Altera/Terasic DE1 board:
- Cyclone II
- 640 * 480 @ 60 Hz
- reduction from 8 bits per color to 3 bits per color
How to use
1. Compile both files, containing the dither entity and the testbench.
2. Run the whole simulation.
3. The testbench stops automatically once the whole image has been processed.
4. View output.bmp for the result.
Note: for some reason MS Paint doesn't like the output.bmp file.
Just use another program, e.g. Firefox, IrfanView, MS Visual Studio...
You should also try:
- exchanging the image in input.bmp
- changing the number of reduced bits per pixel
- The core needs an image stream with one pixel (RGB) per clock.
- You can disable the core if the stream is stalled for some reason (e.g. VGA offscreen).
- The core needs the x position of the current sample.
- Do NOT input the same x position twice in a row: always increase x with each clock cycle, or turn the core off.
- A change of y is recognized automatically.
- x must increase; y can increase or decrease. The result may look slightly different, but the quality is the same.
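The streaming contract above can be modelled in software as follows (a sketch; the class, method, and signal names are invented for illustration and are not the core's port names):

```python
class DitherStream:
    """Software model of the per-clock interface: one RGB sample per
    call, with an enable flag and the current x position.  Since x must
    increase within a line, a new line (y change) is detected whenever
    x stops increasing."""

    def __init__(self):
        self.prev_x = None
        self.lines = 0  # number of line starts seen, for illustration

    def clock(self, enable, x, rgb):
        if not enable:          # core disabled, e.g. VGA offscreen
            return None
        if self.prev_x is None or x <= self.prev_x:
            self.lines += 1     # y change recognized automatically
        self.prev_x = x
        return rgb              # placeholder for the dithered output
```

Feeding x = 0, 1, 2, then a disabled cycle, then x = 0 again would register exactly two line starts, matching the rule that a y change is inferred from x alone.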