UIGraphicsBeginImageContext() creates a bitmap-backed graphics context (under the hood it calls CGBitmapContextCreate).
The "restriction" happens because UIViews are rendered using CALayers, CALayers are rendered as textured polygons on the GPU, and the GPU has a maximum texture size (this appears to be 2044 pixels on the iPhone 2G/3G, and presumably the iPod Touch 1G/2G). The limit was increased on the 3GS and later devices, presumably because pictures from the camera are bigger.
That said, you ought to be able to manipulate individual camera-sized images without a problem; a crash suggests that you're using a lot of memory elsewhere (possibly a memory leak?).
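If you need a bitmap of a view, a common pattern is to render the view's layer into such a context yourself. Here is a minimal sketch (using UIGraphicsBeginImageContextWithOptions, available since iOS 4, so the bitmap matches the screen scale):

```objc
// Render a view's layer into a bitmap-backed graphics context.
// Pass 0.0 as the scale to use the device's screen scale;
// plain UIGraphicsBeginImageContext() always uses a scale of 1.0.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Note that the texture-size limit above applies to what the GPU has to display; a CPU-side bitmap context created this way can be larger than the maximum texture size, though memory pressure becomes the practical limit.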
ACB quoted the UIGestureRecognizer reference. To make it a little more concrete, suppose you have a view with a pan gesture recognizer attached, and you have these methods in your view controller:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesBegan");
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesMoved");
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesEnded");
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"touchesCancelled");
}

- (IBAction)panGestureRecognizerDidUpdate:(UIPanGestureRecognizer *)sender {
    NSLog(@"panGesture");
}
And of course the pan gesture recognizer is configured to send the panGestureRecognizerDidUpdate: message.
Now suppose you touch the view, move your finger enough for the pan gesture to be recognized, and then lift your finger. What does the app print?
If the gesture recognizer has cancelsTouchesInView set to YES, the app will log these messages:
touchesBegan
touchesMoved
touchesCancelled
panGesture
panGesture
(etc.)
You might get more than one touchesMoved before the cancel.
So, if you set cancelsTouchesInView to YES (the default), the system will cancel the touch before it sends the first message from the gesture recognizer, and you won't get any more touch-related messages for that touch.
If the gesture recognizer has cancelsTouchesInView set to NO, the app will log these messages:
touchesBegan
touchesMoved
panGesture
touchesMoved
panGesture
touchesMoved
panGesture
(etc.)
panGesture
touchesEnded
So, if you set cancelsTouchesInView to NO, the system will continue sending touch-related messages for the gesture touch, interleaved with the gesture recognizer's messages. The touch will end normally instead of being cancelled (unless the system cancels the touch for some other reason, like the home button being pressed during the touch).
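For completeness, the recognizer setup assumed above might look like this (a sketch, e.g. in viewDidLoad, with the action matching the method in the example):

```objc
// Attach a pan gesture recognizer that sends panGestureRecognizerDidUpdate:.
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc]
    initWithTarget:self
            action:@selector(panGestureRecognizerDidUpdate:)];
pan.cancelsTouchesInView = NO;  // YES is the default
[self.view addGestureRecognizer:pan];
```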
Best Answer
No, that's not necessarily the case. If you have some original "digital asset", rather than creating a UIImage and then using one of those two functions to create the NSData that you'll upload, you will often just load the NSData from the original asset and bypass the round-trip through a UIImage altogether. If you do this, you don't risk any loss of data that converting to a UIImage, and then back again, can cause.

There are some additional considerations, though:
Meta data:

These UIImageXXXRepresentation functions strip the image of its meta data. Sometimes that's a good thing (e.g. you don't want to upload photos of your children or expensive gadgets that include the GPS locations where malcontents could identify where the shot was taken). In other cases, you don't want the meta data to be thrown away (e.g. the date of the original shot, which camera was used, etc.).

You should make an explicit decision as to whether you want the meta data stripped or not. If not, don't round-trip your image through a UIImage, but rather use the original asset.

Image quality loss and/or file size considerations:
I'm particularly not crazy about UIImageJPEGRepresentation because it uses lossy compression. Thus, if you use a compressionQuality value smaller than 1.0, you can lose some image quality (modest quality loss for values close to 1.0, more significant quality loss with lower compressionQuality values). And if you use a compressionQuality of 1.0, you mitigate much of the JPEG quality loss, but the resulting NSData can often be bigger than the original asset (at least if the original was, itself, a compressed JPEG or PNG), resulting in slower uploads.

UIImagePNGRepresentation doesn't introduce compression-based data loss, but depending upon the image, you may still lose data (e.g. if the original file was a 48-bit TIFF or used a colorspace other than sRGB).

It's a question of whether you are OK with some image quality loss and/or a larger file size during the upload process.
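To see the trade-off for yourself, you can compare the payload sizes directly. A sketch, assuming path points at a compressed image (e.g. a JPEG) on disk:

```objc
NSData *original = [NSData dataWithContentsOfFile:path];   // path is hypothetical
UIImage *image   = [UIImage imageWithData:original];
NSData *jpegMax  = UIImageJPEGRepresentation(image, 1.0);  // minimal JPEG loss, often larger than original
NSData *jpegLow  = UIImageJPEGRepresentation(image, 0.5);  // smaller, but visibly lossy
NSData *png      = UIImagePNGRepresentation(image);        // losslessly re-encodes the decoded bitmap
NSLog(@"original=%lu jpeg(1.0)=%lu jpeg(0.5)=%lu png=%lu",
      (unsigned long)original.length, (unsigned long)jpegMax.length,
      (unsigned long)jpegLow.length, (unsigned long)png.length);
```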
Image size:

Sometimes you don't want to upload the full-resolution image. For example, you might be using a web service that wants images no bigger than 800px per side. Or if you're uploading a thumbnail, they might want something even smaller (e.g. 32px x 32px). By resizing images, you can make the upload much smaller and thus much faster (though with obvious quality loss). If you resize an image in code, then creating a PNG or JPEG from the result using these UIImageXXXRepresentation functions would be quite common.

In short, if I'm trying to minimize the data/quality loss, I would upload the original asset if it's in a format that the server accepts, and I'd use UIImagePNGRepresentation (or UIImageJPEGRepresentation with a quality setting of 1.0) if the original asset was not in a format accepted by the server. But the choice of using these UIImageXXXRepresentation functions is a question of your business requirements and what the server accepts.
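If you do need to resize before uploading, a common sketch looks like this (using the 800px-per-side limit from the example above; path and the 0.8 quality are illustrative choices):

```objc
UIImage *image = [UIImage imageWithContentsOfFile:path];   // path is hypothetical
CGFloat maxSide = 800.0;
// Scale down (never up) so the longer side is at most maxSide.
CGFloat scale = MIN(1.0, maxSide / MAX(image.size.width, image.size.height));
CGSize targetSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0);
[image drawInRect:CGRectMake(0.0, 0.0, targetSize.width, targetSize.height)];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

NSData *upload = UIImageJPEGRepresentation(resized, 0.8);  // 0.8 is an arbitrary quality choice
```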