PrintNumber ErrorLocation Error Correction DateAdded
1 iv First printing, April 2012 Second printing, June 2012 4/30/2012
1 p 3 This book assumes a working knowledge of C, including pointers, malloc(), and the usual hazards of low-level memory management. If you don’t have experience with C or any Clike language (C++, Java, and C#), stop right now and read a good book on C before you attempt to tackle this book. This book assumes a working knowledge of C, including pointers, malloc(), and the usual hazards of low-level memory management. If you don’t have experience with C or any C-like language (C++, Java, and C#), stop right now and read a good book on C before you attempt to tackle this book. 5/23/2012
1 p 9 The source code for the projects in this book is available as a downloadable disk image (.dmg). To get it, click on the Resources tab on the book’s catalog page:
www.informit.com/title/9780321636843
The disk image contains a README file and folders with the projects for each chapter.
The source code for the projects in this book is available on the Resources tab on the book’s catalog page:
www.informit.com/title/9780321636843
The downloads contain a README file and folders with the projects for each
chapter.
5/23/2012
1 p 21 Well, that’s pretty cool: You’ve got a nice dump of a lot of the same metadata that you’d see in an application such as iTunes. Now let’s check it out with an AAC song from the iTunes Store. Changing the command-line argument to something like ~/Music/iTunes/iTunes Music/Arcade Fire/Funeral/07 Wake Up.m4a gets you the following: Well, that’s pretty cool: You’ve got a nice dump of a lot of the same metadata that you’d see in an application such as iTunes. Now let’s check it out with an AAC song from the iTunes Store. Changing the command-line argument to something like ~/Music/iTunes/iTunes Music/Arcade Fire/Funeral/07 Wake Up.m4a gets you the following on Snow Leopard: 5/23/2012
1 p 35 9. You can now ask Core Audio to create an AudioFileID, ready for writing at the URL you’ve set up. The AudioFileCreateWithURL() function takes a URL (notice that you again use toll-free bridging to cast from a Cocoa NSURL to a Core Foundation CFURLRef), a constant to describe the AIFF file format, a pointer to the AudioStreamBasicDescription describing the audio data, behavior flags (in this case, indicating your desire to overwrite an existing file of the same name), and a pointer to populate with the created AudioFileID. 9. You can now ask Core Audio to create an AudioFileID, ready for writing at the URL you’ve set up. The AudioFileCreateWithURL() function takes a URL (notice that you again use toll-free bridging to cast from a Cocoa NSURL to a Core Foundation CFURLRef), a constant to describe the AIFF file format, a pointer to the AudioStreamBasicDescription describing the audio data, behavior flags (in this case, indicating your desire to overwrite an existing file of the same name), and a pointer to populate with the created AudioFileID. 5/23/2012
1 p 49 This tells you that AIFFs can handle only a small amount of variety in PCM formats, differing only in bit depth. The mFormatFlags are the same for every ASBD in the array. But what do they mean? The flags are a bit field, so with a value of 14, you know that the bits for 0x2, 0x4, and 0x8 are enabled (because 0x2 + 0x4 + 0x8 = 0xE, which is 14 in decimal). At this point, you need to consult the documentation for the AudioStreamBasicDescription flags or the CoreAudioTypes.h header file to figure out what those bit flags represent. Because the bits 0x2, 0x4, and 0x8 are set, this PCM format is equivalent to kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked. This tells you that AIFFs can handle only a small amount of variety in PCM formats, differing only in bit depth. The mFormatFlags are the same for every ASBD in the array. But what do they mean? The flags are a bit field, so with a value of 14, you know that the bits for 0x2, 0x4, and 0x8 are enabled (because 0x2 + 0x4 + 0x8 = 0xE, which is 14 in decimal). At this point, you need to consult the documentation for the AudioStreamBasicDescription flags or the CoreAudioTypes.h header file to figure out what those bit flags represent. Because the bits 0x2, 0x4, and 0x8 are set, this PCM format is equivalent to kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked. 5/23/2012
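The flag arithmetic this entry walks through (0x2 + 0x4 + 0x8 = 0xE = 14) can be checked directly. A minimal sketch using the bit values quoted in the text, which match the flags declared in CoreAudioTypes.h; the short enum names here are illustrative stand-ins so the sketch builds without the Core Audio headers:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Bit values quoted in the text; they match CoreAudioTypes.h. */
enum {
    kFlagIsBigEndian     = 0x2,  /* kAudioFormatFlagIsBigEndian */
    kFlagIsSignedInteger = 0x4,  /* kAudioFormatFlagIsSignedInteger */
    kFlagIsPacked        = 0x8   /* kAudioFormatFlagIsPacked */
};

/* List which of the three flags are set in an mFormatFlags value. */
static void dumpFormatFlags(uint32_t formatFlags)
{
    if (formatFlags & kFlagIsBigEndian)     puts("big-endian");
    if (formatFlags & kFlagIsSignedInteger) puts("signed integer");
    if (formatFlags & kFlagIsPacked)        puts("packed");
}
```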
1 p 103 Listing 6.3  Creating a MyAudioConverterSettings Struct and Opening a Source Audio File for Conversion
int main(int argc, const char *argv[])
{
MyAudioConverterSettings audioConverterSettings = {0};
CFURLRef inputFileURL =
CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
kInputFileLocation,
kCFURLPOSIXPathStyle,
false);
CheckResult (AudioFileOpenURL(inputFileURL,
kAudioFileReadPermission,
0,
&audioConverterSettings.inputFile),
"AudioFileOpenURL failed");
CFRelease(inputFileURL);
Listing 6.3  Creating a MyAudioConverterSettings Struct and Opening a Source Audio File for Conversion
int main(int argc, const char *argv[])
{
MyAudioConverterSettings audioConverterSettings = {0};
CFURLRef inputFileURL =
CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
kInputFileLocation,
kCFURLPOSIXPathStyle,
false);
CheckError (AudioFileOpenURL(inputFileURL,
kAudioFileReadPermission,
0,
&audioConverterSettings.inputFile),
"AudioFileOpenURL failed");
CFRelease(inputFileURL);
5/23/2012
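Many of the corrections in this chapter rename CheckResult() to CheckError(), the error-handling convenience the book builds early on. A minimal sketch of such a helper, assuming the usual behavior (log the failing operation plus a readable form of the OSStatus, then exit); the OSStatus typedef and function names here are stand-ins so the sketch builds without the Core Audio headers:

```c
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef int32_t OSStatus;  /* stand-in for the CoreServices typedef */

/* Render an OSStatus as a readable string: a four-char code when all four
   bytes are printable ASCII, otherwise a decimal integer. */
static void FormatError(OSStatus error, char str[16])
{
    uint32_t code = (uint32_t)error;
    char c[4] = { (char)(code >> 24), (char)(code >> 16),
                  (char)(code >> 8),  (char)code };
    if (isprint((unsigned char)c[0]) && isprint((unsigned char)c[1]) &&
        isprint((unsigned char)c[2]) && isprint((unsigned char)c[3]))
        snprintf(str, 16, "'%c%c%c%c'", c[0], c[1], c[2], c[3]);
    else
        snprintf(str, 16, "%d", (int)error);
}

/* On any nonzero result, report which operation failed and bail out. */
static void CheckError(OSStatus error, const char *operation)
{
    if (error == 0) return;  /* noErr */
    char str[16];
    FormatError(error, str);
    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}
```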
1 p 103 Listing 6.4  Getting ASBD from an Input Audio File
UInt32 propSize = sizeof(audioConverterSettings.inputFormat);
CheckResult (AudioFileGetProperty(audioConverterSettings.inputFile,
kAudioFilePropertyDataFormat,
&propSize,
&audioConverterSettings.inputFormat),
"Couldn't get file's data format");
Listing 6.4  Getting ASBD from an Input Audio File
UInt32 propSize = sizeof(audioConverterSettings.inputFormat);
CheckError (AudioFileGetProperty(audioConverterSettings.inputFile,
kAudioFilePropertyDataFormat,
&propSize,
&audioConverterSettings.inputFormat),
"Couldn't get file's data format");
5/23/2012
1 p 103 Listing 6.5  Getting Packet Count and Maximum Packet Size Properties from
Input Audio File
// get the total number of packets in the file
propSize = sizeof(audioConverterSettings.inputFilePacketCount);
CheckResult (AudioFileGetProperty(audioConverterSettings.inputFile,
kAudioFilePropertyAudioDataPacketCount,
&propSize,
&audioConverterSettings.inputFilePacketCount),
"couldn't get file's packet count");
Listing 6.5  Getting Packet Count and Maximum Packet Size Properties from
Input Audio File
// get the total number of packets in the file
propSize = sizeof(audioConverterSettings.inputFilePacketCount);
CheckError (AudioFileGetProperty(audioConverterSettings.inputFile,
kAudioFilePropertyAudioDataPacketCount,
&propSize,
&audioConverterSettings.inputFilePacketCount),
"couldn't get file's packet count");
5/23/2012
1 p 104 Listing 6.5  Continued
// get size of the largest possible packet
propSize = sizeof(audioConverterSettings.inputFilePacketMaxSize);
CheckResult(AudioFileGetProperty(audioConverterSettings.inputFile,
kAudioFilePropertyMaximumPacketSize,
&propSize,
&audioConverterSettings.inputFilePacketMaxSize),
"couldn't get file's max packet size");
Listing 6.5  Continued
// get size of the largest possible packet
propSize = sizeof(audioConverterSettings.inputFilePacketMaxSize);
CheckError(AudioFileGetProperty(audioConverterSettings.inputFile,
kAudioFilePropertyMaximumPacketSize,
&propSize,
&audioConverterSettings.inputFilePacketMaxSize),
"couldn't get file's max packet size");
5/23/2012
1 p 104 Listing 6.6  Defining Output ASBD and Creating an Output Audio File
audioConverterSettings.outputFormat.mSampleRate = 44100.0;
audioConverterSettings.outputFormat.mFormatID = kAudioFormatLinearPCM;
audioConverterSettings.outputFormat.mFormatFlags =
kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagIsPacked;
audioConverterSettings.outputFormat.mBytesPerPacket = 4;
audioConverterSettings.outputFormat.mFramesPerPacket = 1;
audioConverterSettings.outputFormat.mBytesPerFrame = 4;
audioConverterSettings.outputFormat.mChannelsPerFrame = 2;
audioConverterSettings.outputFormat.mBitsPerChannel = 16;

CFURLRef outputFileURL =
CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
CFSTR("output.aif"),
kCFURLPOSIXPathStyle,
false);
CheckResult (AudioFileCreateWithURL(outputFileURL,
kAudioFileAIFFType,
&audioConverterSettings.outputFormat,
kAudioFileFlags_EraseFile,
&audioConverterSettings.outputFile),
"AudioFileCreateWithURL failed");
CFRelease(outputFileURL);
Listing 6.6  Defining Output ASBD and Creating an Output Audio File
audioConverterSettings.outputFormat.mSampleRate = 44100.0;
audioConverterSettings.outputFormat.mFormatID = kAudioFormatLinearPCM;
audioConverterSettings.outputFormat.mFormatFlags =
kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagIsPacked;
audioConverterSettings.outputFormat.mBytesPerPacket = 4;
audioConverterSettings.outputFormat.mFramesPerPacket = 1;
audioConverterSettings.outputFormat.mBytesPerFrame = 4;
audioConverterSettings.outputFormat.mChannelsPerFrame = 2;
audioConverterSettings.outputFormat.mBitsPerChannel = 16;

CFURLRef outputFileURL =
CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
CFSTR("output.aif"),
kCFURLPOSIXPathStyle,
false);
CheckError (AudioFileCreateWithURL(outputFileURL,
kAudioFileAIFFType,
&audioConverterSettings.outputFormat,
kAudioFileFlags_EraseFile,
&audioConverterSettings.outputFile),
"AudioFileCreateWithURL failed");
CFRelease(outputFileURL);
5/23/2012
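The byte-count fields in Listing 6.6's output ASBD are not independent: for packed, interleaved PCM, bytes per frame follow from the channel count and bit depth, and bytes per packet from the frames-per-packet count. A small consistency check under that assumption; the struct here is a pared-down stand-in for AudioStreamBasicDescription:

```c
#include <stdint.h>

/* Pared-down stand-in for the integer fields of AudioStreamBasicDescription. */
typedef struct {
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
} PCMFormat;

/* For packed, interleaved PCM: bytes/frame = channels * (bits / 8),
   and bytes/packet = frames/packet * bytes/frame. */
static int pcmFieldsConsistent(const PCMFormat *f)
{
    uint32_t bytesPerFrame = f->mChannelsPerFrame * (f->mBitsPerChannel / 8);
    return f->mBytesPerFrame == bytesPerFrame &&
           f->mBytesPerPacket == f->mFramesPerPacket * bytesPerFrame;
}
```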
1 p 105 Listing 6.8  Creating an Audio Converter
void Convert(MyAudioConverterSettings *mySettings)
{
// Create the audioConverter object
AudioConverterRef audioConverter;
CheckResult (AudioConverterNew(&mySettings->inputFormat,
&mySettings->outputFormat,
&audioConverter),
"AudioConveterNew failed");
Next, you have some math to do: You have to figure out how big of a packet descriptions array you need to allocate. You had a similar task in Chapter 5; again, you have to juggle multiple contingencies here: whether the format is variable bit rate, whether the buffer is big enough to hold at least one packet, and so on. You address the hard case—determining the sizes for the variable bit rate—first, in Listing 6.9.
Listing 6.8  Creating an Audio Converter
void Convert(MyAudioConverterSettings *mySettings)
{
// Create the audioConverter object
AudioConverterRef audioConverter;
CheckError (AudioConverterNew(&mySettings->inputFormat,
&mySettings->outputFormat,
&audioConverter),
"AudioConveterNew failed");
Next, you have some math to do: You have to figure out how big of a packet descriptions array you need to allocate. You had a similar task in Chapter 5; again, you have to juggle multiple contingencies here: whether the format is variable bit rate, whether the buffer is big enough to hold at least one packet, and so on. You address the hard case—determining the sizes for a variable bit rate—first, in Listing 6.9.
5/23/2012
1 p 106 Listing 6.9  Determining the Size of a Packet Buffers Array and Packets-per-Buffer Count for Variable Bit Rate Data
UInt32 packetsPerBuffer = 0;
UInt32 outputBufferSize = 32 * 1024; // 32 KB is a good starting point
UInt32 sizePerPacket = mySettings->inputFormat.mBytesPerPacket;
if (sizePerPacket == 0)
{
UInt32 size = sizeof(sizePerPacket);
CheckResult(AudioConverterGetProperty(audioConverter,
kAudioConverterPropertyMaximumOutputPacketSize,
&size,
&sizePerPacket),
Listing 6.9  Determining the Size of a Packet Buffers Array and Packets-per-Buffer Count for Variable Bit Rate Data
UInt32 packetsPerBuffer = 0;
UInt32 outputBufferSize = 32 * 1024; // 32 KB is a good starting point
UInt32 sizePerPacket = mySettings->inputFormat.mBytesPerPacket;
if (sizePerPacket == 0)
{
UInt32 size = sizeof(sizePerPacket);
CheckError(AudioConverterGetProperty(audioConverter,
kAudioConverterPropertyMaximumOutputPacketSize,
&size,
&sizePerPacket),
5/23/2012
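The contingency Listing 6.9 handles (an mBytesPerPacket of 0 means variable bit rate, so the converter's maximum output packet size must be queried instead) feeds a simple sizing computation: how many packets fit in the 32 KB buffer, growing the buffer if even one packet won't fit. A sketch of that arithmetic; the function name is illustrative:

```c
#include <stdint.h>

/* Given a target buffer size and a per-packet size (mBytesPerPacket for
   CBR formats, the converter's maximum output packet size for VBR), return
   how many packets fit. If a single packet exceeds the target, grow the
   buffer so it holds at least one packet. */
static uint32_t packetsPerBuffer(uint32_t *bufferSize, uint32_t sizePerPacket)
{
    if (sizePerPacket > *bufferSize)
        *bufferSize = sizePerPacket;  /* must hold at least one packet */
    return *bufferSize / sizePerPacket;
}
```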
1 p 114 Listing 6.24  Opening an Extended Audio File for Input
int main(int argc, const char *argv[])
{
MyAudioConverterSettings audioConverterSettings = {0};

// Open the input with ExtAudioFile
CFURLRef inputFileURL =
CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
kInputFileLocation,
kCFURLPOSIXPathStyle,
false);
CheckResult(ExtAudioFileOpenURL(inputFileURL,
&audioConverterSettings.inputFile),
"ExtAudioFileOpenURL failed");
Listing 6.24  Opening an Extended Audio File for Input
int main(int argc, const char *argv[])
{
MyAudioConverterSettings audioConverterSettings = {0};

// Open the input with ExtAudioFile
CFURLRef inputFileURL =
CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
kInputFileLocation,
kCFURLPOSIXPathStyle,
false);
CheckError(ExtAudioFileOpenURL(inputFileURL,
&audioConverterSettings.inputFile),
"ExtAudioFileOpenURL failed");
5/23/2012
1 p 114 CheckResult (AudioFileCreateWithURL(outputFileURL,
kAudioFileAIFFType,
&audioConverterSettings.outputFormat,
CheckError (AudioFileCreateWithURL(outputFileURL,
kAudioFileAIFFType,
&audioConverterSettings.outputFormat,
5/23/2012
1 p 115 Listing 6.26  Setting the Client Data Format Property on an Extended Audio File
CheckResult(ExtAudioFileSetProperty(audioConverterSettings.inputFile,
kExtAudioFileProperty_ClientDataFormat,
sizeof (AudioStreamBasicDescription),
&audioConverterSettings.outputFormat),
"Couldn't set client data format on input ext file");
Listing 6.26  Setting the Client Data Format Property on an Extended Audio File
CheckError(ExtAudioFileSetProperty(audioConverterSettings.inputFile,
kExtAudioFileProperty_ClientDataFormat,
sizeof (AudioStreamBasicDescription),
&audioConverterSettings.outputFormat),
"Couldn't set client data format on input ext file");
5/23/2012
1 p 117 Listing 6.31  Reading and Converting with ExtAudioFileRead()
UInt32 frameCount = packetsPerBuffer;
CheckResult(ExtAudioFileRead(mySettings->inputFile,
&frameCount,
&convertedData),
"Couldn't read from input file");
Listing 6.31  Reading and Converting with ExtAudioFileRead()
UInt32 frameCount = packetsPerBuffer;
CheckError(ExtAudioFileRead(mySettings->inputFile,
&frameCount,
&convertedData),
"Couldn't read from input file");
5/23/2012
1 p 117 Listing 6.33  Writing Converted Audio Data to an Output File
CheckResult (AudioFileWritePackets(mySettings->outputFile,
FALSE,
frameCount,
NULL,
outputFilePacketPosition /
mySettings->outputFormat.mBytesPerPacket,
&frameCount,
convertedData.mBuffers[0].mData),
"Couldn't write packets to file");
Listing 6.33  Writing Converted Audio Data to an Output File
CheckError (AudioFileWritePackets(mySettings->outputFile,
FALSE,
frameCount,
NULL,
outputFilePacketPosition /
mySettings->outputFormat.mBytesPerPacket,
&frameCount,
convertedData.mBuffers[0].mData),
"Couldn't write packets to file");
5/23/2012
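Listing 6.33's fifth argument divides a running byte offset by mBytesPerPacket to get the starting packet index, which works because the output here is constant-bit-rate PCM (4 bytes per packet). The conversion, as a sketch:

```c
#include <stdint.h>

/* For constant-size packets, a running byte offset converts to a starting
   packet index by plain division; after each write, the caller advances
   the offset by the number of bytes just written. */
static int64_t startingPacket(int64_t byteOffset, uint32_t bytesPerPacket)
{
    return byteOffset / bytesPerPacket;
}
```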
1 p 126 Table 7.1  Audio Unit Subtypes for Generator Units (Type kAudioUnitType_
Generator)
Subtype Description
kAudioUnitSubType_ScheduledSoundPlayer Schedules audio to be played a specified time.
kAudioUnitSubType_AudioFilePlayer Plays audio from a file.
kAudioUnitSubType_NetReceive Receives network audio from a corresponding kAudioUnitSubType_NetSend unit on another host or in another application.
Table 7.1  Audio Unit Subtypes for Generator Units (Type kAudioUnitType_
Generator)
Subtype Description
kAudioUnitSubType_ScheduledSoundPlayer Schedules audio to be played at a specified time.
kAudioUnitSubType_AudioFilePlayer Plays audio from a file.
kAudioUnitSubType_NetReceive Receives network audio from a corresponding kAudioUnitSubType_NetSend unit on another host or in another application.
5/23/2012
1 p 139 Listing 7.14  Scheduling an AudioFileID with the AUFilePlayer
double PrepareFileAU(MyAUGraphPlayer *player)
Listing 7.14  Scheduling an AudioFileID with the AUFilePlayer
Float64 PrepareFileAU(MyAUGraphPlayer *player)
5/23/2012
1 p 153 Because you’re going to work directly with the audio unit, you start the output unit directly via AudioOutputUnitStart() instead of starting an AUGraph. You might notice that the cleanup functions are analogous to how you cleaned up a graph: Instead of using AUGraphStop(), AUGraphUninitialize(), and AUGraphClose(),you perform equivalent actions directly on the unit with AudioOutputUnitStop(), AudioUnitUnitialize(), and AudioComponentInstanceDispose(). Because you’re going to work directly with the audio unit, you start the output unit directly via AudioOutputUnitStart() instead of starting an AUGraph. You might notice that the cleanup functions are analogous to how you cleaned up a graph: Instead of using AUGraphStop(), AUGraphUninitialize(), and AUGraphClose(), you perform equivalent actions directly on the unit with AudioOutputUnitStop(), AudioUnitUninitialize(), and AudioComponentInstanceDispose(). 5/23/2012
1 p 154 Getting the Audio Unit itself requires use of some more Audio Component Manager functions. These calls are based on the legacy Component Manager API, which was originally designed to provide a means of discovering and using shared resources. You provide a description of the component you want and then iterate over matches (of which there could be zero, one, or many) until you find the component you want. You perform this iteration with the AudioComponentFindNext() function, which uses the odd semantic of having you pass in your last match (NULL on your first call), along with your component description. Listing 7.31 shows how to use it to get a component for the default output unit described.
Getting the Audio Unit itself requires use of some more Audio Component Manager functions. As mentioned earlier, these calls are based on the legacy Component Manager API, which was originally designed to provide a means of discovering and using shared resources. You provide a description of the component you want and then iterate over matches (of which there could be zero, one, or many) until you find the component you want. You perform this iteration with the AudioComponentFindNext() function, which uses the odd semantic of having you pass in your last match (NULL on your first call), along with your component description. Listing 7.31 shows how to use it to get a component for the default output unit described.
5/23/2012
1 p 167 1 The CARingBuffer included with the Core Audio SDK on Mac OS X 10.5 and 10.6 is buggy and can be fixed with an updated version of the class. See Apple Technical Q&A 1665, “CoreAudio PublicUtility—Installing the CARingBuffer Update” for an explanation and a link to the corrected code. Lion-based versions of Xcode provide the correct version of CARingBuffer, but starting in Xcode 4.3, Xcode does not install the Core Audio PublicUtility folder at all. Instead, Apple includes it in the “Audio Tools for Xcode” package, available via Xcode’s “More Developer Tools…” menu item. The items in this package can be installed anywhere you like. In the book’s downloadable code, we expect PublicUtility to be at the old location.” 1 The CARingBuffer included with the Core Audio SDK on Mac OS X 10.5 and 10.6 is buggy and can be fixed with an updated version of the class. See Apple Technical Q&A 1665, “CoreAudio PublicUtility—Installing the CARingBuffer Update” for an explanation and a link to the corrected code. Lion-based versions of Xcode provide the correct version of CARingBuffer, but starting in Xcode 4.3, Xcode does not install the Core Audio PublicUtility folder at all. Instead, Apple includes it in the “Audio Tools for Xcode” package, available via Xcode’s “More Developer Tools…” menu item. The items in this package can be installed anywhere you like. In the book’s downloadable code, we expect PublicUtility to be at the old location. 5/23/2012
1 p 215 So that you don’t spend too much breath on Audio Toolbox stuff, Listing 9.30 presents the entire function. Refers you back to the first example, or Chapter 6, if the setup or use of the ExtAudioFile throws you. You can #define STREAM_PATH to be any file playable by Core Audio; the online sample code uses a long jingle track from the iLife collection:
#define STREAM_PATH CFSTR ("/Library/Audio/Apple Loops/Apple/iLife Sound Effects/Jingles/Kickflip Long.caf")
But we’re not above a funny musical reference every now and then, either; for example, see Listing 9.30.
So that you don’t spend too much breath on Audio Toolbox stuff, Listing 9.30 presents the entire function. Refer back to the first example, or Chapter 6, if the setup or use of the ExtAudioFile throws you. You can #define STREAM_PATH to be any file playable by Core Audio; the online sample code uses a long jingle track from the iLife collection:
#define STREAM_PATH CFSTR ("/Library/Audio/Apple Loops/Apple/iLife Sound Effects/Jingles/Kickflip Long.caf")
But we’re not above a funny musical reference every now and then, either; for example, see the top of Listing 9.30.
5/23/2012
1 p 224 This also means there is no sharing of resources between applications, which has profound implications for Audio Units: You can’t make a third-party plug-in that other apps can see, so there isn’t a aftermarket for Audio Units as there is on OSX. This also means there is no sharing of resources between applications, which has profound implications for Audio Units: You can’t make a third-party plug-in that other apps can see, so there isn’t an aftermarket for Audio Units as there is on OS X. 5/23/2012
1 p 225 • kAudioSessionProperty_AudioRoute
The current output (and possibly input) route as a read-only CFString ("Headphone", "Speaker", "HeadsetInOut")
• kAudioSessionProperty_AudioRoute
The current output (and possibly input) route as a read-only CFString ("Headphone", "Speaker", "HeadsetInOut")
5/23/2012
1 p 232 On Mac OS X, kAudioFormatFlagsCanonical and kAudioFormat
FlagsAudioUnitCanonical both use floating point samples, but floating point is a significant expense on the low-power chips of ARM devices.
On Mac OS X, kAudioFormatFlagsCanonical and kAudioFormat
FlagsAudioUnitCanonical both use floating point samples, but floating point is a significant expense on the low-power chips of early ARM devices.
5/23/2012
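The expense noted here is why iOS's canonical audio-unit sample format was 8.24 fixed point rather than float: a 32-bit signed integer with 24 fractional bits, selected via kAudioFormatFlagsAudioUnitCanonical. A sketch of the float-to-fixed conversion under that assumption; the function name is illustrative:

```c
#include <stdint.h>

/* Convert a float sample in [-1.0, 1.0] to 8.24 fixed point:
   a 32-bit signed integer with 24 fractional bits. */
static int32_t floatToFixed824(float sample)
{
    return (int32_t)(sample * (float)(1 << 24));
}
```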
1 p 244 Notice that, this time, you set kAudioSessionCategory_PlayAndRecord as the category for the audio session. This is crucial because you don’t get access to capture hardware unless you specifically ask for with a suitable category. Notice that, this time, you set kAudioSessionCategory_PlayAndRecord as the category for the audio session. This is crucial because you don’t get access to capture hardware unless you specifically ask for it with a suitable category. 5/23/2012
1 p 245 Now you need an instance of the I/O unit. iOS uses the Audio Component Manager API, described in Chapter 7, and has never included the legacy Component Manager; you need to worry about only the modern versions of these calls. Listing 10.22 shows how we get the Remote IO unit. Now you need an instance of the I/O unit. iOS uses the Audio Component Manager API, described in Chapter 7, and has never included the legacy Component Manager; you need to worry about only the modern versions of these calls. Listing 10.22 shows how we get the Remote IO unit. 5/23/2012
1 p 248 Next, you set up your render callback. Recall from Chapters 7 and 8 that this function is called every time the RemoteIO unit needs to pull a buffer full of samples. As you set the callback in Listing 10.25, you provide a single user data pointer to this function, and that’s what the EffectState struct is for.
Next, you set up your render callback. Recall from Chapters 7 and 8 that this function is called every time the RemoteIO unit needs to pull a buffer full of samples. As you set the callback in Listing 10.25, you provide a single user data pointer to this function, and that’s what the EffectState struct is for.
5/23/2012
1 p 248 That’s all the setup needed. Listing 10.26 starts the RemoteIO unit and lets applicationDidFinishLaunching:withOptions: finish its usual setup. That’s all the setup needed. Listing 10.26 starts the RemoteIO unit and lets applicationDidFinishLaunching:withOptions: finish its usual setup. 5/23/2012
1 p 249 As in the previous example, the case you care about is the end of an interruption, which happens, for example, when the user declines an incoming phone call. When this happens, try to reset the audio session active and restart the RemoteIO unit. As in the previous example, the case you care about is the end of an interruption, which happens, for example, when the user declines an incoming phone call. When this happens, try to set the audio session active and restart the RemoteIO unit. 5/23/2012
1 p 250 by a single audio unit. When RemoteIO needs to play samples, you can just pull from RemoteIO’s own bus 1 capture buffers. by a single audio unit. When RemoteIO needs to play samples, you can just pull from RemoteIO’s own bus 1 capture buffers. 5/23/2012
1 p 250 Next, in Listing 10.29, you pull captured samples from the RemoteIO unit’s bus 1 output and put them into the ioData parameter that the RemoteIO unit passed in and expects you to fill. Next, in Listing 10.29, you pull captured samples from the RemoteIO unit’s bus 1 output and put them into the ioData parameter that the RemoteIO unit passed in and expects you to fill. 5/23/2012
1 p 272 Listing 11.17  Creating a MIDINetworkHost
-(void) connectToHost {
MIDINetworkHost *host = [MIDINetworkHost hostWithName:@"MyMIDIWifi"
address:DESTINATION_ADDRESS
port:5004];
if(!host)
return;
Listing 11.17  Creating a MIDINetworkHost
-(void) connectToHost {
MIDINetworkHost *host = [MIDINetworkHost hostWithName:@"MyMIDIWifi"
address:DESTINATION_ADDRESS
port:5004];
if(!host)
return;
5/23/2012
1 p 276 When the app is ready, run it on your device. You should see it appear in the MIDI Network Setup window (see Figure 11.6), where the iPhone Squall automatically appears in the directory and as one of Session 1’s participants. When the app is ready, run it on your device. You should see it appear in the MIDI Network Setup window (see Figure 11.6, where the iPhone “Squall” automatically appears in the directory and as one of Session 1’s participants). 5/23/2012