
Just a quick note that we were asked to write a guest blog post for Microsoft’s “Parallel Programming in Native Code” blog. They’ve already addressed an issue in the initial release version of C++ AMP that I bring up in the post, but if you haven’t already seen it, please give it a read.


Lately, we’ve been doing a lot of work on improving the run-time performance of our CLAHE implementation. As you may or may not already know, “CLAHE” refers to the Contrast-Limited Adaptive Histogram Equalization algorithm, one of the classic methods for systematic contrast enhancement. Up to this point, PrecisionImage.NET has exposed two implementations – a multithreaded CPU version and a GPU-accelerated version. Depending on the hardware and the image size, the performance of both was generally very good. However, we limited the algorithm to an 8-bit histogram after experimentation showed poor results, in both speed and quality, when it was applied to the sparse histograms commonly encountered in higher bit-depth images. You could still process these images with the CLAHE algorithm, and the result would still span the full numerical range of the input image, but it would consist of at most 256 discrete values.
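
For anyone unfamiliar with the inner workings of CLAHE, the sketch below shows the heart of the method for a single local window: clip the histogram at a limit, redistribute the clipped excess across the bins, and map the centre pixel through the resulting cumulative distribution. The names and structure are purely illustrative (this is not the PrecisionImage.NET implementation), but it makes clear why an 8-bit histogram caps the output at 256 discrete levels.

```csharp
// A minimal, generic sketch of the per-window CLAHE mapping step. It is not the
// PrecisionImage.NET code; it only illustrates the clip / redistribute / remap idea.
using System;

static class ClaheSketch
{
    // histogram:    counts for the local window, one bin per gray level
    // centerBin:    the bin of the pixel at the window centre
    // clipFraction: contrast limit, as a fraction of the window population
    public static double EqualizeCenterPixel(int[] histogram, int centerBin, double clipFraction)
    {
        int bins = histogram.Length;                 // 256 bins for an 8-bit histogram, 65536 for 16-bit
        long total = 0;
        foreach (int count in histogram) total += count;

        // 1. Clip each bin at the limit and pool the excess.
        long clipLimit = (long)(clipFraction * total);
        long excess = 0;
        long[] clipped = new long[bins];
        for (int i = 0; i < bins; i++)
        {
            clipped[i] = Math.Min(histogram[i], clipLimit);
            excess += histogram[i] - clipped[i];
        }

        // 2. Redistribute the pooled excess uniformly (this is the contrast limiting).
        long perBin = excess / bins;
        for (int i = 0; i < bins; i++) clipped[i] += perBin;

        // 3. Map the centre pixel through the cumulative distribution of the clipped histogram.
        long cdf = 0;
        for (int i = 0; i <= centerBin; i++) cdf += clipped[i];
        return (double)cdf / total;                  // normalized output intensity in [0, 1]
    }
}
```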

I’m very pleased to announce not only that this limitation has been overcome, so that CLAHE can now be performed at 8/10/12/14/16-bit histogram resolutions to maintain the full fidelity of the source image, but also that the runtime performance has been hugely improved over the previous version. The one drawback is that the optimizations enabling these improvements make the algorithm a poor fit for GPU compute. However, the run-time performance on a modern multicore CPU makes this a non-issue for most applications. As an example, a 1K x 1K 16-bit image can be contrast-optimized using a full 16-bit histogram analysis with a 201 x 201 local region window at every pixel in well under 500 milliseconds on a three-year-old AMD Phenom II x6 CPU. A contemporary Core i7 decreases the processing time by another 40% – 50%. In fact, the 16-bit CPU version is now easily on a par with the 8-bit GPU version (even when running on a GeForce Titan); because of this, we’ve decided to remove the GPU version from the API…there’s just no need for it anymore.
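
To give a sense of why a CPU-friendly formulation can be a poor fit for the GPU, consider the classic incremental sliding-window histogram update sketched below. We offer it only as an assumption about the kind of optimization involved, not as a dump of our actual code: as the window moves one pixel to the right, only the leaving and entering columns touch the histogram, so each output pixel depends on the histogram state of its neighbour. That dependency chain along a scanline parallelizes nicely across CPU cores (a scanline or block of scanlines per core) but maps poorly onto thousands of independent GPU threads.

```csharp
// Generic sketch of an incremental sliding-window histogram update (not the
// PrecisionImage.NET source). Border pixels are handled by edge replication,
// so the initial window histogram must be built with the same clamping rule.
using System;

static class SlidingHistogram
{
    public static void SlideRight(int[] histogram, ushort[] image, int width, int height,
                                  int row, int newCenterCol, int halfWindow)
    {
        // Column leaving the (2*halfWindow + 1)^2 window, and column entering it.
        int xOut = Math.Min(Math.Max(newCenterCol - halfWindow - 1, 0), width - 1);
        int xIn  = Math.Min(Math.Max(newCenterCol + halfWindow, 0), width - 1);

        for (int dy = -halfWindow; dy <= halfWindow; dy++)
        {
            int y = Math.Min(Math.Max(row + dy, 0), height - 1);
            histogram[image[y * width + xOut]]--;   // remove departing pixels
            histogram[image[y * width + xIn]]++;    // add arriving pixels
        }
    }
}
```

In a formulation like this, a 201 x 201 window drops from roughly 40,000 histogram increments per output pixel to about 400 updates, which is where the bulk of a CPU-side speedup tends to come from.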

On another note, an interesting development came out of the Microsoft Build conference in San Francisco. Apparently, C++ AMP now has the ability to properly use shared-memory architectures while avoiding the redundant memory copy operations that were necessary with version 1 of AMP. This is very big news for PrecisionImage.NET, as the usage pattern for our library typically involves multiple function calls, which results in a lot of back-and-forth between the GPU and the CPU. The fly in the ointment is the caveat that this capability will only exist on Windows 8.1 and onward due to the use of a new driver model. We haven’t yet recompiled the GPU branch of PrecisionImage.NET with the new version of AMP, as it is only available with the CTP version of Visual Studio 2013. Rumor has it, though, that the RTM versions will be available at the end of August, so we won’t have to wait long. This is very good news for the millions of 3rd- and 4th-generation Intel Core CPUs with HD 4000+ graphics (not to mention Iris Pro 5200). As soon as we can, we’ll post some results to get a feel for the improvements in the AMP runtime and PrecisionImage.NET workflows.

We’ve just posted four screencast video tutorials demonstrating the use of PrecisionImage.NET. The first video uses the interactive C# window of Microsoft’s Roslyn CTP to demonstrate the basics of PrecisionImage.NET; if you are unsure how to incorporate our SDK into your workflow, definitely take a look. It also shows how you can use Roslyn in combination with our SDK to implement your own interactive technical scripting environment and quickly try out ideas without the overhead of building a complete WPF application (very handy). The other three videos walk through the implementation of various processing pipelines, including the use of PrecisionImage.NET to process the depth data streaming from a Microsoft Kinect sensor bar and a real-time enhancement pipeline for industrial radiography. The videos are much more dynamic and informative than the written tutorials.
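
If you’d like a feel for the scripting idea before watching the video, here is a rough sketch of a tiny evaluate-as-you-type loop. Note that it uses the current Microsoft.CodeAnalysis.CSharp.Scripting package rather than the Roslyn CTP API shown in the video (the CTP exposed a different API surface), and the commented-out reference line is just a placeholder for wherever you would add the PrecisionImage.NET assembly.

```csharp
// A minimal interactive C# loop built on the Roslyn scripting API. This is a
// generic sketch, not the setup used in the video; add your own assembly
// references (e.g. the PrecisionImage.NET DLL) via ScriptOptions.
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

class ScriptingRepl
{
    static async Task Main()
    {
        ScriptOptions options = ScriptOptions.Default
            .WithImports("System", "System.Linq");
            // .WithReferences(typeof(YourSdkType).Assembly)  // hypothetical: reference your SDK here

        ScriptState<object> state = await CSharpScript.RunAsync("", options);
        Console.WriteLine("Type C# expressions (Ctrl+C to quit).");

        while (true)
        {
            Console.Write("> ");
            string line = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(line)) continue;

            try
            {
                state = await state.ContinueWithAsync(line);   // variables persist between lines
                if (state.ReturnValue != null) Console.WriteLine(state.ReturnValue);
            }
            catch (CompilationErrorException ex)
            {
                Console.WriteLine(string.Join(Environment.NewLine, ex.Diagnostics));
            }
        }
    }
}
```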

All videos are generated at 1280 x 720 resolution, so be sure to scale the video output appropriately to get the best viewing quality. You can see the videos on our code examples page:

http://www.coreoptical.com/codeexamples.html

Welcome to the Core Optical blog! In this – our very first blog post – I’d like to introduce our upcoming product: PrecisionImage.NET.

If you’ve had a chance to look around the website a little, then hopefully you have a pretty good idea of what the SDK is and how it can be used. Or maybe you just happened across the blog “organically” while searching for image processing tools for the .NET framework and now you’re curious. Either way, read on to discover more.

So…first things first. What exactly is PrecisionImage.NET? Well, the official product description is something along the lines of “PrecisionImage.NET is an SDK for technical imaging professionals and businesses focusing on the .NET framework and WPF.” As an imaging scientist, I prefer to think of it as the toolkit I wish I had all along.

I’m also a .NET guy.

To me, the .NET framework strikes a good balance between productivity and power. Maybe there was a greater emphasis on productivity over power when Microsoft conceived of .NET, and maybe that emphasis still exists today. I happen to think so. After all, it’s the reason I use .NET. You just can’t beat that oceanic framework when it comes to developer productivity and time-to-market.

That’s not to say it doesn’t suffer from a few gaps in its functionality. WinForms gave us some basic image processing classes and types, but they were aimed at the open/display/save crowd of developers and weren’t very suitable for anyone who wanted to do something more analytical in nature.

But when WPF was introduced along with the underlying WIC (Windows Imaging Component) framework, all of that changed. Suddenly, .NET developers doing image processing had access to built-in encoders and decoders for everything from 4-bit indexed types all the way up to 128-bit floating-point HDR images. Best of all, the whole thing is extensible, so it can support proprietary image formats. When I saw these features I knew WPF/WIC would form the perfect foundation for high-powered scientific/technical desktop applications with the most modern user interfaces. The only thing it lacked (and still lacks) is a comprehensive computational back-end that enables sophisticated processing chains and workflows. That’s where PrecisionImage.NET comes in.
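
To make that concrete, here is a small example of the WIC plumbing I’m talking about (plain WPF code, not part of PrecisionImage.NET): decode whatever container the codec layer understands, normalize it to a known pixel format, and pull the raw samples out as an array ready for numeric work. The choice of 16-bit grayscale here is just for illustration.

```csharp
// Decode an image via WPF/WIC, convert it to 16-bit grayscale, and extract the
// raw samples. Requires references to PresentationCore/WindowsBase (WPF).
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class WicExample
{
    public static ushort[] LoadAsGray16(string path, out int width, out int height)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            // WIC selects the appropriate codec for the container (TIFF, PNG, JPEG, ...).
            BitmapFrame frame = BitmapDecoder.Create(
                stream, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad).Frames[0];

            // Convert whatever the source format was (indexed, RGB, float HDR, ...) to Gray16.
            var gray = new FormatConvertedBitmap(frame, PixelFormats.Gray16, null, 0);

            width = gray.PixelWidth;
            height = gray.PixelHeight;
            int strideInBytes = width * 2;                 // Gray16 = 2 bytes per pixel
            var pixels = new ushort[width * height];
            gray.CopyPixels(pixels, strideInBytes, 0);     // raw samples, ready for processing
            return pixels;
        }
    }
}
```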

In terms of release, where does the product stand? We’re currently working on adding the GPU branch of the frequency-domain processing. We estimate that will be done in January 2013, at which point the SDK will be feature complete and ready for release. At the same time we’ll also be adding video blog entries introducing the basic concepts of how to use PrecisionImage.NET. On our list of upcoming videos are tutorials on using the toolkit to process and display data from the Kinect sensor, on creating an image processing scripting environment using the Microsoft Roslyn compiler-as-a-service system and PrecisionImage.NET, and a variety of processing case studies and implementation strategies to get the most out of PrecisionImage.NET.

So please stay tuned, and don’t be shy with the feedback!
