vImage comprises seven headers:

| Header | Description |
|--------------|-------------|
| Alpha.h | Alpha compositing functions. |
| Conversion.h | Converting between image formats (e.g. Planar8 to PlanarF, ARGB8888 to Planar8). |
| Convolution.h | Image convolution routines (e.g. blurring and edge detection). |
| Geometry.h | Geometric transformations (e.g. rotate, scale, shear, affine warp). |
| Histogram.h | Functions for calculating image histograms and image normalization. |
| Morphology.h | Image morphology procedures (e.g. feature detection, dilation, erosion). |
| Transform.h | Image transformation operations (e.g. gamma correction, colorspace conversion). |
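All of these come in through the Accelerate umbrella header, so a single import is enough to use any of them:

#import <Accelerate/Accelerate.h>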
Alpha compositing is the process of combining multiple images according to their alpha components. For each pixel, the alpha, or transparency, value determines how much of the image underneath shows through.
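Before reaching for vImage, it helps to see what a premultiplied blend actually computes for each channel of each pixel. A rough per-channel sketch (the function name and variables here are illustrative, not part of vImage):

// Premultiplied "over" operator for one 8-bit channel.
// top and bottom are channel values already multiplied by their alpha;
// topAlpha is the top pixel's alpha in the range 0-255.
static uint8_t blend_premultiplied(uint8_t top, uint8_t topAlpha, uint8_t bottom)
{
    // result = top + (1 - topAlpha) * bottom, with alpha normalized to [0, 1]
    return top + (uint8_t)(((255 - topAlpha) * bottom) / 255);
}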
vImage functions are available for blending or clipping. The most common operation is compositing a top image onto a bottom image:
UIImage *topImage = ...;
UIImage *bottomImage = ...;
CGImageRef topImageRef = [topImage CGImage];
CGImageRef bottomImageRef = [bottomImage CGImage];

// Copy the top image's bitmap data and wrap it in a vImage_Buffer
CGDataProviderRef topProvider = CGImageGetDataProvider(topImageRef);
CFDataRef topBitmapData = CGDataProviderCopyData(topProvider);

size_t width = CGImageGetWidth(topImageRef);
size_t height = CGImageGetHeight(topImageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(topImageRef);
vImage_Buffer topBuffer = {
.data = (void *)CFDataGetBytePtr(topBitmapData),
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
// Do the same for the bottom image (assumed here to have the same dimensions)
CGDataProviderRef bottomProvider = CGImageGetDataProvider(bottomImageRef);
CFDataRef bottomBitmapData = CGDataProviderCopyData(bottomProvider);
vImage_Buffer bottomBuffer = {
.data = (void *)CFDataGetBytePtr(bottomBitmapData),
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
// Allocate a destination buffer to hold the blended result
void *outBytes = malloc(height * bytesPerRow);
vImage_Buffer outBuffer = {
.data = outBytes,
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
// Composite the premultiplied top buffer over the bottom buffer
vImage_Error error =
    vImagePremultipliedAlphaBlend_ARGB8888(&topBuffer,
                                           &bottomBuffer,
                                           &outBuffer,
                                           kvImageDoNotTile);
if (error)
{
NSLog(@"Error: %ld", error);
}
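From there, the blended buffer can be wrapped back up into a UIImage and the copied bitmap data released. A minimal sketch, assuming the same device RGB, premultiplied-first layout used in the examples below:

CGColorSpaceRef blendedColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef blendedContext =
    CGBitmapContextCreate(outBuffer.data,
                          outBuffer.width,
                          outBuffer.height,
                          8,
                          outBuffer.rowBytes,
                          blendedColorSpace,
                          kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CGImageRef blendedImageRef = CGBitmapContextCreateImage(blendedContext);
UIImage *blendedImage = [UIImage imageWithCGImage:blendedImageRef];

// The UIImage retains the CGImage, so the intermediate objects can be released
CGImageRelease(blendedImageRef);
CGContextRelease(blendedContext);
CGColorSpaceRelease(blendedColorSpace);
CFRelease(topBitmapData);
CFRelease(bottomBitmapData);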
Conversion.h provides functions for moving between pixel formats. One of the simpler examples is vImagePermuteChannels_ARGB8888, which reorders the channels of an ARGB image; here it is used to swap an image into ABGR order:

UIImage *image = ...;
CGImageRef imageRef = [image CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGDataProviderRef sourceImageDataProvider = CGImageGetDataProvider(imageRef);
CFDataRef sourceImageData = CGDataProviderCopyData(sourceImageDataProvider);
vImage_Buffer sourceImageBuffer = {
.data = (void *)CFDataGetBytePtr(sourceImageData),
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
uint8_t *destinationBuffer = malloc(CFDataGetLength(sourceImageData));
vImage_Buffer destinationImageBuffer = {
.data = destinationBuffer,
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
const uint8_t channels[4] = {0, 3, 2, 1}; // ARGB -> ABGR
vImagePermuteChannels_ARGB8888(&sourceImageBuffer,
                               &destinationImageBuffer,
                               channels,
                               kvImageNoFlags);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef destinationContext =
    CGBitmapContextCreateWithData(destinationBuffer,
                                  width,
                                  height,
                                  bitsPerComponent,
                                  bytesPerRow,
                                  colorSpaceRef,
                                  kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
                                  NULL,
                                  NULL);
CGImageRef permutedImageRef = CGBitmapContextCreateImage(destinationContext);
UIImage *permutedImage = [UIImage imageWithCGImage:permutedImageRef];
CGImageRelease(permutedImageRef);
CGContextRelease(destinationContext);
CGColorSpaceRelease(colorSpaceRef);
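Beyond reordering channels, Conversion.h can split an interleaved image into planar buffers: one channel per buffer, which is the layout many of the Planar8 functions expect. A quick sketch with vImageConvert_ARGB8888toPlanar8, assuming sourceImageBuffer (and its backing data) from above is still around:

vImage_Buffer planes[4]; // alpha, red, green, blue
for (int i = 0; i < 4; i++) {
    planes[i].data = malloc(width * height);
    planes[i].width = width;
    planes[i].height = height;
    planes[i].rowBytes = width; // 1 byte per pixel in a planar buffer
}

vImage_Error conversionError =
    vImageConvert_ARGB8888toPlanar8(&sourceImageBuffer,
                                    &planes[0],
                                    &planes[1],
                                    &planes[2],
                                    &planes[3],
                                    kvImageNoFlags);
if (conversionError)
{
    NSLog(@"Error: %ld", conversionError);
}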
Image convolution is the process of recomputing each pixel as a weighted combination of itself and its neighboring pixels, with the weights supplied by a kernel: a small square matrix. For blurring, the kernel values typically sum to 1 so that overall brightness is preserved; other kernels sharpen, emboss, or detect edges.
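To make that concrete, here is what a 3×3 box blur kernel looks like (a hypothetical illustration; the box convolution below doesn't require you to spell this out):

// A 3x3 box blur kernel: nine equal weights that sum to 1,
// so the output has the same overall brightness as the input.
// An edge-detection kernel, by contrast, has weights that sum to 0.
static const float boxBlurKernel[9] = {
    1.0f / 9, 1.0f / 9, 1.0f / 9,
    1.0f / 9, 1.0f / 9, 1.0f / 9,
    1.0f / 9, 1.0f / 9, 1.0f / 9,
};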
Except for specific situations where a custom kernel is required, convolution operations are often better served by the Core Image framework, which can take advantage of the GPU. For a straightforward CPU-based solution, however, vImage delivers:
Blurring an Image
UIImage *inImage = ...;
CGImageRef inImageRef = [inImage CGImage];
CGDataProviderRef inProvider = CGImageGetDataProvider(inImageRef);
CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
vImage_Buffer inBuffer = {
.data = (void *)CFDataGetBytePtr(inBitmapData),
.width = CGImageGetWidth(inImageRef),
.height = CGImageGetHeight(inImageRef),
.rowBytes = CGImageGetBytesPerRow(inImageRef),
};
// Allocate a destination buffer with the same dimensions as the source
void *outBytes = malloc(CGImageGetBytesPerRow(inImageRef) * CGImageGetHeight(inImageRef));
vImage_Buffer outBuffer = {
.data = outBytes,
.width = inBuffer.width,
.height = inBuffer.height,
.rowBytes = inBuffer.rowBytes,
};
uint32_t length = 5; // Kernel width and height; must be odd
vImage_Error error =
vImageBoxConvolve_ARGB8888(&inBuffer,
&outBuffer,
NULL,
0,
0,
length,
length,
NULL,
kvImageEdgeExtend);
if (error)
{
NSLog(@"Error: %ld", error);
}
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef c =
    CGBitmapContextCreate(outBuffer.data,
                          outBuffer.width,
                          outBuffer.height,
                          8,
                          outBuffer.rowBytes,
                          colorSpaceRef,
                          kCGImageAlphaNoneSkipLast);
CGImageRef outImageRef = CGBitmapContextCreateImage(c);
UIImage *outImage = [UIImage imageWithCGImage:outImageRef];
CGImageRelease(outImageRef);
CGContextRelease(c);
CGColorSpaceRelease(colorSpaceRef);
CFRelease(inBitmapData);
Resizing an image is another operation that is often better handled by a higher-level framework like Image I/O, or by Core Image on the GPU. That said, if you already have a vImage buffer in hand, scaling it with vImageScale_* can be more performant than converting back and forth to a CGImageRef:
Resizing an Image
// Reusing the source buffer (inBuffer) from the previous example,
// scale the image down to one fifth of its original size
double scaleFactor = 1.0 / 5.0;

void *outBytes = malloc(trunc(inBuffer.height * scaleFactor) * inBuffer.rowBytes);
vImage_Buffer outBuffer = {
.data = outBytes,
.width = trunc(inBuffer.width * scaleFactor),
.height = trunc(inBuffer.height * scaleFactor),
.rowBytes = inBuffer.rowBytes,
};
vImage_Error error =
vImageScale_ARGB8888(&inBuffer,
&outBuffer,
NULL,
kvImageHighQualityResampling);
if (error)
{
NSLog(@"Error: %ld", error);
}
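To get the scaled buffer back into UIKit, one option (available since iOS 8 / OS X 10.9 via vImage_Utilities.h) is vImageCreateCGImageFromBuffer, which builds a CGImage directly from a vImage_Buffer. A sketch, assuming the same 8-bit premultiplied ARGB layout used throughout:

CGColorSpaceRef scaledColorSpace = CGColorSpaceCreateDeviceRGB();
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .colorSpace = scaledColorSpace,
    .bitmapInfo = (CGBitmapInfo)kCGImageAlphaPremultipliedFirst,
};

vImage_Error conversionError = kvImageNoError;
CGImageRef scaledImageRef =
    vImageCreateCGImageFromBuffer(&outBuffer,
                                  &format,
                                  NULL, // no cleanup callback; the buffer outlives the call
                                  NULL,
                                  kvImageNoFlags,
                                  &conversionError);
UIImage *scaledImage = [UIImage imageWithCGImage:scaledImageRef];

CGImageRelease(scaledImageRef);
CGColorSpaceRelease(scaledColorSpace);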
Detecting if an Image Has Transparency
UIImage *image = ...;
CGImageRef imageRef = [image CGImage];

CGDataProviderRef dataProvider = CGImageGetDataProvider(imageRef);
CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);

// One 256-bin histogram per channel, in ARGB order
vImagePixelCount a[256], r[256], g[256], b[256];
vImagePixelCount *histogram[4] = {a, r, g, b};
vImage_Buffer buffer = {
.data = (void *)CFDataGetBytePtr(bitmapData),
.width = CGImageGetWidth(imageRef),
.height = CGImageGetHeight(imageRef),
.rowBytes = CGImageGetBytesPerRow(imageRef),
};
vImage_Error error = vImageHistogramCalculation_ARGB8888(&buffer, histogram, kvImageNoFlags);
if (error)
{
NSLog(@"Error: %ld", error);
}
// The image has transparency if any pixel's alpha value is less than 255,
// i.e. if any of the first 255 alpha histogram bins is non-empty
BOOL hasTransparency = NO;
for (NSUInteger i = 0; !hasTransparency && i < 255; i++)
{
    hasTransparency = histogram[0][i] > 0;
}
// The data provider came from a Get function, so it is not released here
CFRelease(bitmapData);
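Histogram.h also handles the normalization side of its job description. vImageEqualization_ARGB8888, for example, redistributes each channel's values across the full 0–255 range. A minimal sketch, assuming sourceBuffer is an ARGB8888 vImage_Buffer set up the same way as the one above (and whose backing data hasn't been released yet):

void *equalizedBytes = malloc(sourceBuffer.height * sourceBuffer.rowBytes);
vImage_Buffer equalizedBuffer = {
    .data = equalizedBytes,
    .width = sourceBuffer.width,
    .height = sourceBuffer.height,
    .rowBytes = sourceBuffer.rowBytes,
};

// Equalize the histogram of each channel independently
vImage_Error equalizationError =
    vImageEqualization_ARGB8888(&sourceBuffer, &equalizedBuffer, kvImageNoFlags);
if (equalizationError)
{
    NSLog(@"Error: %ld", equalizationError);
}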
Dilating an Image
size_t width = image.size.width;
size_t height = image.size.height;
size_t bitsPerComponent = 8;
size_t bytesPerRow = CGImageGetBytesPerRow([image CGImage]);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef sourceContext =
CGBitmapContextCreate(NULL,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(sourceContext, CGRectMake(0.0f, 0.0f, width, height), [image CGImage]);
void *sourceData = CGBitmapContextGetData(sourceContext);
vImage_Buffer sourceBuffer = {
.data = sourceData,
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
size_t length = height * bytesPerRow;
void *destinationData = malloc(length);
vImage_Buffer destinationBuffer = {
.data = destinationData,
.width = width,
.height = height,
.rowBytes = bytesPerRow,
};
// A 3 x 3 structuring element for the dilation
static unsigned char kernel[9] =
{
1, 1, 1,
1, 1, 1,
1, 1, 1,
};
vImage_Error error =
    vImageDilate_ARGB8888(&sourceBuffer,
                          &destinationBuffer,
                          0,
                          0,
                          kernel,
                          3,
                          3,
                          kvImageCopyInPlace);
if (error)
{
    NSLog(@"Error: %ld", error);
}
CGContextRef destinationContext =
CGBitmapContextCreateWithData(destinationData,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst,
NULL,
NULL);
CGImageRef dilatedImageRef = CGBitmapContextCreateImage(destinationContext);
UIImage *dilatedImage = [UIImage imageWithCGImage:dilatedImageRef];
CGImageRelease(dilatedImageRef);
CGContextRelease(destinationContext);
CGContextRelease(sourceContext);
CGColorSpaceRelease(colorSpaceRef);
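Erosion is dilation's counterpart, shrinking bright regions rather than growing them, and its vImage function takes the same arguments. A sketch reusing the buffers and kernel from above:

vImage_Error erosionError =
    vImageErode_ARGB8888(&sourceBuffer,
                         &destinationBuffer,
                         0,
                         0,
                         kernel,
                         3,
                         3,
                         kvImageNoFlags);
if (erosionError)
{
    NSLog(@"Error: %ld", erosionError);
}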