iOS CGImageRef Pixel Shader



I am working on an image-processing app for iOS, and one of the stages in my application is vector-based image posterization/color detection.

Now, I've written code that can determine the posterized color per pixel, but going through each and every pixel in an image would, I imagine, be quite demanding for the processor of an iOS device. As such, I was wondering whether it is possible to use the graphics processor instead.
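For reference, the per-pixel CPU approach amounts to something like the following simplified sketch. It assumes an RGBA8 bitmap drawn from the CGImageRef; the function name and the quantisation formula are illustrative, not the actual app code:

    // Simplified sketch of the CPU-bound, per-pixel approach (illustrative only):
    // draw the CGImage into an RGBA8 bitmap context, then quantise each channel
    // to a fixed number of levels.
    #include <CoreGraphics/CoreGraphics.h>
    #include <math.h>
    #include <stdint.h>
    #include <stdlib.h>

    static uint8_t *posterizedPixels(CGImageRef image, size_t levels)
    {
        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        size_t bytesPerRow = width * 4;
        uint8_t *pixels = calloc(height * bytesPerRow, 1);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

        float step = 255.0f / (float)(levels - 1);    // width of one quantisation band
        for (size_t y = 0; y < height; y++) {
            uint8_t *row = pixels + y * bytesPerRow;
            for (size_t x = 0; x < width; x++) {
                for (size_t c = 0; c < 3; c++) {      // leave alpha untouched
                    row[x * 4 + c] = (uint8_t)(roundf(row[x * 4 + c] / step) * step);
                }
            }
        }

        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        return pixels;   // caller frees; dimensions come from the CGImage
    }

On a multi-megapixel image those nested loops run tens of millions of times, which is what motivates moving the work to the GPU.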

I'd like to create a sort of "pixel shader", using OpenGL ES or some other rendering technology, to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
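For concreteness, the posterization itself is only a few lines of OpenGL ES 2.0 GLSL once the boilerplate is in place. A hypothetical fragment ("pixel") shader, shown here as a C string so it can be passed to glShaderSource, might look like this; the uniform and varying names assume a conventional pass-through vertex shader drawing a full-screen textured quad:

    // Hypothetical OpenGL ES 2.0 fragment ("pixel") shader that posterises a
    // texture. uTexture, uLevels and vTexCoord are assumed names supplied by
    // a conventional textured-quad setup on the CPU side.
    static const char *kPosterizeFragmentShader =
        "precision mediump float;                                            \n"
        "varying vec2 vTexCoord;                                             \n"
        "uniform sampler2D uTexture;                                         \n"
        "uniform float uLevels;        // e.g. 4.0 for four bands            \n"
        "void main()                                                         \n"
        "{                                                                   \n"
        "    vec4 color = texture2D(uTexture, vTexCoord);                    \n"
        "    vec3 quantised =                                                \n"
        "        floor(color.rgb * (uLevels - 1.0) + 0.5) / (uLevels - 1.0); \n"
        "    gl_FragColor = vec4(quantised, color.a);                        \n"
        "}                                                                   \n";

The boilerplate is everything around it: compiling and linking the shaders, uploading the image as a texture, drawing the quad into a framebuffer, and reading the pixels back, which is exactly what the framework suggested in the answer below takes care of.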

Can anyone point me in the correct direction?

2012-04-04 00:07
by Georges Oates Larsen



I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL, so there's no real reason to learn it from the ground up.

I will add one consideration: under iOS 4 I found it often faster to do this kind of work on the CPU (using GCD to distribute it amongst cores) than on the GPU, in any case where I needed to read the results back at the end for serial access. That's because OpenGL is generally designed so that you upload an image and it converts it into whatever internal format it wants; if you then want to read it back, it converts it into the format you expect to receive and copies it to where you want it. So what you save on the GPU you pay for again in the GL driver having to shunt and rearrange memory. As of iOS 5, Apple have introduced a mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably no longer a concern.
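To illustrate the CPU-with-GCD route, the usual pattern is to fan the rows of the bitmap out across cores with dispatch_apply. A minimal sketch, in which posterizeRow() is a hypothetical stand-in for whatever per-pixel work is being done:

    // Sketch of spreading per-row pixel work across cores with GCD.
    // posterizeRow() is a hypothetical per-row routine; pixels/bytesPerRow
    // would come from a bitmap context like the one sketched in the question.
    #include <dispatch/dispatch.h>
    #include <stdint.h>

    static void posterizeRow(uint8_t *row, size_t width, size_t levels);

    static void posterizeConcurrently(uint8_t *pixels, size_t width, size_t height,
                                      size_t bytesPerRow, size_t levels)
    {
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // dispatch_apply returns only once every iteration has run, so the
        // buffer is safe to read afterwards; rows never overlap, so no locking.
        dispatch_apply(height, queue, ^(size_t y) {
            posterizeRow(pixels + y * bytesPerRow, width, levels);
        });
    }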

2012-04-04 01:13
by Tommy
At some point, I need to benchmark where the processing time crossover is on this as a function of input data sizes, particularly if you factor in Accelerate as a means of parallelizing some operations on the CPU. I think I see an overall win on iOS 4.0 in processing 640x480 frames coming from a CGImage or UIImage on the GPU vs. optimized CPU-bound functions, but I need to generate some hard numbers for this. Converting to and from Core Graphics types alone is far slower than the texture uploads, so that's more of a bottleneck for me - Brad Larson 2012-04-05 17:40
My CPU routines never got as far as Core Graphics; they were for live camera input, so they operated directly on the CVPixelBuffers returned by the camera. I had a quick Q&A with Alan Schaffer at the last WWDC on Tour and he seemed to concede that, while iOS 5's facial detection required GPU cooperation, no public API would be suitable for reimplementing it due to the back-and-forth costs. At the time I wasn't aware of the new Core Video mechanism that effectively gives you direct texture memory access, so it wasn't discussed - Tommy 2012-04-05 18:59
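For completeness, the Core Video mechanism referred to in these comments is the iOS 5 CVOpenGLESTextureCache API, which backs an OpenGL ES texture directly with a CVPixelBuffer's memory. A rough sketch, omitting error handling, cache reuse, and flushing:

    // Rough sketch of the iOS 5 Core Video texture cache mentioned above: a GL
    // texture that shares memory with a CVPixelBuffer, avoiding the separate
    // upload/readback copies. Error handling and cache flushing are omitted,
    // and the EAGLContext is assumed to exist already.
    #include <CoreVideo/CoreVideo.h>
    #include <CoreVideo/CVOpenGLESTextureCache.h>
    #include <OpenGLES/ES2/gl.h>
    #include <OpenGLES/ES2/glext.h>

    static CVOpenGLESTextureRef textureFromPixelBuffer(CVEAGLContext eaglContext,
                                                       CVPixelBufferRef pixelBuffer)
    {
        CVOpenGLESTextureCacheRef cache = NULL;
        CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext,
                                     NULL, &cache);

        CVOpenGLESTextureRef texture = NULL;
        CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, NULL,
            GL_TEXTURE_2D, GL_RGBA,
            (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
            (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
            GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

        glBindTexture(CVOpenGLESTextureGetTarget(texture),
                      CVOpenGLESTextureGetName(texture));
        return texture;   // caller releases the texture and, in real code, the cache
    }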
Yeah, working directly with camera frames in OpenGL ES can be far faster than anything CPU-bound. For a simple sepia tone adjustment on a 640x480 image, I could run that from capture to display in 6 ms on an iPhone 4 (2.5 ms with the iOS 5.0 texture caches), where nothing on the CPU could come close. As far as facial detection, this entirely on-GPU OpenGL ES 2.0 implementation is very interesting: http://fileadmin.cs.lth.se/graphics/theses/projects/facerecognition/ . At some point, I'd like to give this a try - Brad Larson 2012-04-05 19:34