My question is the same as the one mentioned here. I'm also using two images in my app, and all I need is to erase the top image by touch, and then un-erase (if required) the erased part, again by touch. I'm using the following code to erase the top image. There is also a problem with this approach: the images are big, so I'm using Aspect Fit content mode to display them properly. When I touch the screen, the erasing happens in a corner, not at the touched place. I think the touch point calculation needs some fix. Any help will be appreciated.
The second problem is: how do I un-erase the erased part by touch?
UIGraphicsBeginImageContext(self.imgTop.image.size);
[self.imgTop.image drawInRect:CGRectMake(0, 0, self.imgTop.image.size.width, self.imgTop.image.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), pinSize);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 0, 0, 0, 1.0);
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeCopy);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
self.imgTop.contentMode = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Your code is quite ambiguous: you're creating a context with imgTop drawn inside, then blending the black color with kCGBlendModeCopy? This would cause the black color to be copied onto imgTop. And I assume you wanted to set the layer's contents property, rather than assigning the resulting image to contentMode?
Anyway, this class does what you need. There are only a few interesting methods (they're at the top); the others are just properties or initialization routines.
@interface EraseImageView : UIView {
    CGContextRef context;
    CGRect contextBounds;
}
@property (nonatomic, retain) UIImage *backgroundImage;
@property (nonatomic, retain) UIImage *foregroundImage;
@property (nonatomic, assign) CGFloat touchWidth;
@property (nonatomic, assign) BOOL touchRevealsImage;
- (void)resetDrawing;
@end
@interface EraseImageView ()
- (void)createBitmapContext;
- (void)drawImageScaled:(UIImage *)image;
@end
@implementation EraseImageView
@synthesize touchRevealsImage=_touchRevealsImage, backgroundImage=_backgroundImage, foregroundImage=_foregroundImage, touchWidth=_touchWidth;
#pragma mark - Main methods -
- (void)createBitmapContext
{
    // create a grayscale colorspace
    CGColorSpaceRef grayscale = CGColorSpaceCreateDeviceGray();

    /* TODO: instead of saving the bounds at the moment of creation,
       override setFrame:, create a new context with the right size,
       draw the previous one into the new one, and replace the old
       context with the new one. */
    contextBounds = self.bounds;

    // create a new 8-bit grayscale bitmap with no alpha (the mask)
    context = CGBitmapContextCreate(NULL,
                                    (size_t)contextBounds.size.width,
                                    (size_t)contextBounds.size.height,
                                    8,
                                    (size_t)contextBounds.size.width,
                                    grayscale,
                                    kCGImageAlphaNone);

    // make it white (touchRevealsImage == NO)
    CGFloat white[] = {1., 1.};
    CGContextSetFillColor(context, white);
    CGContextFillRect(context, contextBounds);

    // set up drawing for that context
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineJoin(context, kCGLineJoinRound);

    CGColorSpaceRelease(grayscale);
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    // the new line segment that will be drawn
    CGPoint points[] = {
        [touch previousLocationInView:self],
        [touch locationInView:self]
    };

    // set up width and color
    CGContextSetLineWidth(context, self.touchWidth);
    CGFloat color[] = {(self.touchRevealsImage ? 1. : 0.), 1.};
    CGContextSetStrokeColor(context, color);

    // stroke
    CGContextStrokeLineSegments(context, points, 2);

    [self setNeedsDisplay];
}
- (void)drawRect:(CGRect)rect
{
    if (self.foregroundImage == nil || self.backgroundImage == nil) return;

    // draw the background image
    [self drawImageScaled:self.backgroundImage];

    // create an image mask from the context
    CGImageRef mask = CGBitmapContextCreateImage(context);

    // set the current clipping mask to the image
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, contextBounds, mask);

    // now draw the foreground image (clipped by the mask)
    [self drawImageScaled:self.foregroundImage];

    CGContextRestoreGState(ctx);
    CGImageRelease(mask);
}
- (void)resetDrawing
{
    // fill with black or white
    CGFloat color[] = {(self.touchRevealsImage ? 0. : 1.), 1.};
    CGContextSetFillColor(context, color);
    CGContextFillRect(context, contextBounds);

    [self setNeedsDisplay];
}
#pragma mark - Helper methods -
- (void)drawImageScaled:(UIImage *)image
{
    // just draws the image scaled down (aspect fit) and centered
    CGFloat selfRatio = self.frame.size.width / self.frame.size.height;
    CGFloat imgRatio = image.size.width / image.size.height;

    CGRect rect = {0., 0., 0., 0.};
    if (selfRatio > imgRatio) {
        // view is wider than the image
        rect.size.height = self.frame.size.height;
        rect.size.width = imgRatio * rect.size.height;
    } else {
        // image is wider than the view
        rect.size.width = self.frame.size.width;
        rect.size.height = rect.size.width / imgRatio;
    }
    rect.origin.x = .5 * (self.frame.size.width - rect.size.width);
    rect.origin.y = .5 * (self.frame.size.height - rect.size.height);

    [image drawInRect:rect];
}
#pragma mark - Initialization and properties -
- (id)initWithCoder:(NSCoder *)aDecoder
{
    if ((self = [super initWithCoder:aDecoder])) {
        [self createBitmapContext];
        _touchWidth = 10.;
    }
    return self;
}
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        [self createBitmapContext];
        _touchWidth = 10.;
    }
    return self;
}
- (void)dealloc
{
    CGContextRelease(context);
    [super dealloc];
}
- (void)setBackgroundImage:(UIImage *)value
{
    if (value != _backgroundImage) {
        [_backgroundImage release];
        _backgroundImage = [value retain];
        [self setNeedsDisplay];
    }
}
- (void)setForegroundImage:(UIImage *)value
{
    if (value != _foregroundImage) {
        [_foregroundImage release];
        _foregroundImage = [value retain];
        [self setNeedsDisplay];
    }
}
- (void)setTouchRevealsImage:(BOOL)value
{
    if (value != _touchRevealsImage) {
        _touchRevealsImage = value;
        [self setNeedsDisplay];
    }
}
@end
Some notes:
This class retains the two images you need. It has a touchRevealsImage property to switch between drawing and erasing mode, and you can set the width of the line.
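As a quick sketch of how the class could be wired up (untested; the view-controller property, image filenames, and button action here are hypothetical placeholders):

```objc
// Hypothetical setup in a view controller (manual retain/release, as in the class above).
- (void)viewDidLoad
{
    [super viewDidLoad];

    EraseImageView *eraseView = [[EraseImageView alloc] initWithFrame:self.view.bounds];
    eraseView.backgroundImage = [UIImage imageNamed:@"bottom.png"]; // placeholder name
    eraseView.foregroundImage = [UIImage imageNamed:@"top.png"];    // placeholder name
    eraseView.touchWidth = 20.;
    eraseView.touchRevealsImage = NO;  // NO = touches erase the top image
    [self.view addSubview:eraseView];
    self.eraseView = eraseView;        // assumed retained property
    [eraseView release];
}

// Toggle between erasing and un-erasing, e.g. from a button.
- (IBAction)toggleMode:(id)sender
{
    self.eraseView.touchRevealsImage = !self.eraseView.touchRevealsImage;
}
```

With touchRevealsImage set to NO, strokes paint black into the mask and hide the foreground; flipping it to YES paints white and restores (un-erases) what was hidden, which answers the second question directly.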
At initialization, it creates a CGBitmapContextRef (grayscale, 8 bpp, no alpha) of the same size as the view. This context is used to store a mask that will be applied to the foreground image.
Every time you move a finger on the screen, a line is drawn on the CGBitmapContextRef using Core Graphics: white to reveal the image, black to hide it. This way we're storing a black-and-white drawing.
The drawRect: routine simply draws the background, then creates a CGImageRef from the CGBitmapContextRef and applies it to the current context as a mask, and finally draws the foreground image. To draw images it uses - (void)drawImageScaled:(UIImage *)image, which just draws the image scaled and centered.
If you're planning to resize the view, you should implement a method to copy or recreate the mask at the new size, overriding - (void)setFrame:(CGRect)frame.
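A minimal sketch of that override might look like this (untested; it assumes the old drawing should simply be redrawn, scaled, into the new mask):

```objc
// Hypothetical setFrame: override: rebuilds the mask at the new size
// and copies the old drawing into it, scaled to the new bounds.
- (void)setFrame:(CGRect)frame
{
    [super setFrame:frame];
    if (CGSizeEqualToSize(frame.size, contextBounds.size)) return;

    CGContextRef oldContext = context;

    // replaces the 'context' and 'contextBounds' ivars using the new bounds
    [self createBitmapContext];

    if (oldContext) {
        // draw the old mask into the new context, stretched to the new bounds
        CGImageRef oldMask = CGBitmapContextCreateImage(oldContext);
        CGContextDrawImage(context, contextBounds, oldMask);
        CGImageRelease(oldMask);
        CGContextRelease(oldContext);
    }
    [self setNeedsDisplay];
}
```

Note that stretching the mask non-uniformly will distort existing strokes; depending on your needs you may prefer to call resetDrawing instead of copying the old mask.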
The - (void)resetDrawing method simply clears the mask.
Even though the bitmap context doesn't have an alpha channel, the grayscale color space used does: that's why every time a color is set, I had to specify two components.