Aros/Developer/AHIDrivers

Retargetable Audio Devices
For sound cards other than the Amiga(TM) Paula chip, a system called AHI was developed. AHI uses ahi.device plus loadable drivers to support different sound cards, which are selected in the AHI preferences (in the Prefs drawer). It can be programmed in a similar way to the old Amiga audio.device. More information is included with the AHI developer files, which you can download from the AHI homepage or Aminet.

Wikipedia page

Amiga Sourceforge DevHelp

Devices

Units 0 - 3 can be shared by as many programs as you have defined channels for them. The Music unit exclusively blocks the hardware it is assigned to, so that no other program can play sound through that hardware at the same time. That is why the .audio driver was invented: it is a virtual piece of hardware which sends its sound data to the unit it is set up for. This way, although the Music unit blocks .audio exclusively, other programs can still send sound to units 0 - 3. Normally all programs use unit 0; only very few programs use the Music unit.

AHI offers two programming models: a device driver (high-level) interface and a library (low-level) interface. This can be confusing at first, but the distinction is simple: the device model just lets you send sound streams, while the library model deals with preloaded samples, as used by trackers.

Library Approach
The library (low-level) approach uses AHI functions such as AHI_SetSound, AHI_SetVol and so on. In practice this method has one big problem: you get no mixing functions - your program locks ahi.device, and while it is running no other AHI program will work. The only advantage of low-level coding, as mentioned in the documentation, is "low overhead and much more advanced control over the playing sounds".

Here you get exclusive access to the audio hardware, and can do almost whatever you want, including monitoring.

The disadvantage is obviously that the audio hardware is blocked for all other programs. Most drivers do not handle this situation gracefully: if another program tries to access the hardware, it usually trashes everything and you need to restart your program or re-allocate the audio hardware.

From AHI 6 on there is a non-blocking AHI mode called "device mode", but it does not allow recording. It is playback only and has poor timing - good enough for playing something back, but too bad for real-time response.

To use the library model, you would open ahi.device and extract the library base from the AHI device structure.

Device Approach
With this approach you simply use ahi.device as a standard Amiga device and use CMD_WRITE to send raw data to AHI. With high-level AHI coding you are allowed to mix sound, and there are no 'locks' on ahi.device any more. For example, for MP3 playing or for mod players, you just unpack the data with the CPU and send the unpacked raw data with CMD_WRITE.

The device interface is much easier to program and is suitable for system noises or MP3 players. It has a fixed latency of 20 ms, which suffices in most (non-musical) situations. It does not block the hardware, so it is the first choice when it comes to quickly playing some audio.

It also supports the CMD_READ command, but:

 * As soon as you read, it blocks AHI exclusively.
 * It can sometimes give the odd click while recording via the device interface.

All you do is use the CMD_WRITE command with the sample information set in the AHI IORequest structure; to play more samples at the same time you need to copy the IORequest and use the copy for the other samples, and so on. Essentially the structure is sent as a message to the AHI daemon, which is standard for Exec devices - that is why it needs a copy. Otherwise Exec would try to link a message into the list that is already in there at the same address, and crash!
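A hedged sketch of that cloning step (assuming ahi.device is open, `mp` is the message port and `ioreq` is an already initialized request; `sample2` and `sample2_length` are illustrative names, not from the original code):

```c
/* Sketch: clone an in-use AHIRequest so a second sample can be queued
 * at the same time. Assumes ahi.device is open and 'ioreq' is set up. */
struct AHIRequest *ioreq2 = (struct AHIRequest *)
    CreateIORequest(mp, sizeof(struct AHIRequest));
if (ioreq2) {
    CopyMem(ioreq, ioreq2, sizeof(struct AHIRequest)); /* copy the opened request */
    ioreq2->ahir_Std.io_Data   = sample2;              /* second sample's data */
    ioreq2->ahir_Std.io_Length = sample2_length;
    SendIO((struct IORequest *)ioreq2);                /* mixed with the first */
}
```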

I would start with the device API, not least because it's very simple. When you have loaded/generated sample data, opened the AHI device and allocated IORequest(s), you can use the Exec library functions (DoIO, SendIO, BeginIO...) to play the samples. However, there may be a limited amount of AHI channels, so IIRC in that case lower priority sounds will be queued and played later. You could create your own mixer routine which basically "streams" data using double-buffered IO requests (there is an example in the AHI SDK about double buffering).

Do you really need to record via ahi.device? If it is just the monitor feature, you can use AHI's internal monitor functionality (which has the lowest possible latency and may use the hardware's ability to monitor), or you can read, manipulate and copy your data to the output buffer using the library interface. The latency will usually be 20 ms, depending on the driver; the application has no control over this.

You could also use datatypes.library to play samples. It is hard to say whether its timing is very accurate, but it is at least very simple to use.

CreateMsgPort()
CreateIORequest()
OpenDevice("ahi.device", ...)

loop {
    depack some music data
    fill AHIdatas
    SendIO((struct IORequest *)AHIdatas);
}

Then, when I need a sound effect, I just do the same (it will be played on a second channel - the only way to make both work at the same time through AHI):

CreateMsgPort()
CreateIORequest()
OpenDevice("ahi.device", ...)

fill AHIdatas

DoIO()/SendIO()

Find the default audio ID, e.g. for unit 0 or the default unit. Then call AHI_AllocAudioA and pass it the ID (or AHI_DEFAULT_ID) and an AHIA_Channels tag with the minimum number of channels you need. Then check whether it allocated the channels. If so, you know it has enough channels and you can call AHI_FreeAudio. If not, that should mean it does not have enough channels, provided you passed all the required tags.
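A minimal sketch of that probe (assuming the low-level interface is already available; `MIN_CHANNELS` is an illustrative constant of your own):

```c
/* Sketch: probe whether the default audio mode offers enough channels. */
struct AHIAudioCtrl *actrl = AHI_AllocAudio(
    AHIA_AudioID,  AHI_DEFAULT_ID,
    AHIA_Channels, MIN_CHANNELS,
    TAG_DONE);
if (actrl) {
    /* Enough channels are available - release them again. */
    AHI_FreeAudio(actrl);
} else {
    /* Allocation failed - most likely not enough channels. */
}
```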

AHI device interface plays streams, not samples. AHI mixes as many streams together as you have channels set in the prefs. If you try to play more streams than channels are available, the additional streams are muted.

If you need to synchronise two samples (perhaps for stereo), you can issue a CMD_STOP, do your CMD_WRITEs, then issue a CMD_START to start playback. What you have to watch out for is that this affects all AHI applications, not just your own.
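A hedged sketch of that sequence (assuming `ctrlreq`, `left` and `right` are initialized AHIRequests on an open ahi.device; the names are illustrative, and note again that the stop/start really is global):

```c
/* Sketch: start two CMD_WRITEs in sync. */
ctrlreq->ahir_Std.io_Command = CMD_STOP;   /* freeze playback (all AHI apps!) */
DoIO((struct IORequest *)ctrlreq);

SendIO((struct IORequest *)left);          /* queue both channels */
SendIO((struct IORequest *)right);

ctrlreq->ahir_Std.io_Command = CMD_START;  /* both start on the same tick */
DoIO((struct IORequest *)ctrlreq);
```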

This brings me to another point: are your sounds mono or stereo? As you will have read, the proper way for stereo is to tell AHI to centre the pan and to give it a stereo sample. I do not know whether it returns an error if it cannot do this; it may accept the write but play a muted channel, as you apparently found out.

Another thing about multiple CMD_WRITEs from different AHI requests: AHI treats each instance separately and mixes the sound together on the same output. Provided the hardware supports it, the high-level API can only offer panning; AFAIK it does not let you specify a direct track.

http://utilitybase.com/forum/index.php?action=vthread&forum=201&topic=1565&page=-1

If you want to play multiple samples with only one channel through the device API, you have to create one stream from the samples yourself.
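The core of such a pre-mixer is just summing the sample streams with clipping. A minimal, hedged sketch in plain C (16-bit signed samples; the helper name is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Mix two 16-bit signed sample streams into one, clipping the sum to
 * avoid wrap-around distortion. Purely illustrative helper. */
static void mix_streams(const int16_t *a, const int16_t *b,
                        int16_t *out, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        int32_t sum = (int32_t)a[i] + (int32_t)b[i];
        if (sum > 32767)  sum = 32767;    /* clip positive overflow */
        if (sum < -32768) sum = -32768;   /* clip negative overflow */
        out[i] = (int16_t)sum;
    }
}
```

The mixed buffer can then be sent with a single CMD_WRITE.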

The AHI device API is reached through OpenDevice, after which you use the commands CMD_READ, CMD_WRITE, CMD_START and CMD_STOP.


Setting up
This will create a new message port, then create some IORequest structures, and finally open the AHI device to write to.

Playing a Sound
The AHIRequest structure is similar to the audio.device structures. p1 (the io_Data pointer) points to the actual raw sound data, io_Length is the size of the data buffer, ahir_Frequency is the rate to replay at, e.g. 8000 Hz, ahir_Type is the type of sound data, e.g. AHIST_M8S, and then there are the ahir_Volume and the stereo ahir_Position. SendIO will start playing the sound, and you can use WaitIO to wait until the buffer has been played before starting on the next block of data.

Freeing Audio

 * Call AHI_ControlAudio with AHIC_Play set to FALSE, to make sure nothing is being played.
 * Unload the sounds using AHI_UnloadSound, to make sure all sounds get unloaded.
 * Then call AHI_FreeAudio.

Closing down
Once you have finished with the AHI device, you need to close it down, e.g.
 * Call CloseDevice
 * then DeleteIORequest
 * and last DeleteMsgPort
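As a sketch, in the same style as the setup code (assuming `AHIio`, `AHImp` and `AHIDevice` as used elsewhere on this page; the NULL checks matter because setup may have failed half-way):

```c
/* Sketch: tear down in reverse order of setup. */
if (AHIDevice == 0)                             /* OpenDevice succeeded */
    CloseDevice((struct IORequest *)AHIio);
if (AHIio)
    DeleteIORequest((struct IORequest *)AHIio);
if (AHImp)
    DeleteMsgPort(AHImp);
```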

Updating Sound often
Take a look at the simpleplay example.

If you want to 'update' your sound on a regular basis, there is already functionality available.

In AHI_AllocAudio you can provide a player function using the AHIA_PlayerFunc tag.

AHIA_PlayerFunc If you are going to play a musical score, you should use this "interrupt" source instead of VBLANK or CIA timers in order to get the best result with all audio drivers. If you cannot use this, you must not use any "non-realtime" modes (see AHI_GetAudioAttrsA in the autodocs, the AHIDB_Realtime tag).

AHIA_PlayerFreq If non-zero, it enables timing and specifies how many times per second PlayerFunc will be called. This must be specified if AHIA_PlayerFunc is! It is suggested that you keep the frequency below 100–200 Hz. Since the frequency is a fixpoint number AHIA_PlayerFreq should be less than 13107200 (that's 200 Hz).
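The fixed-point conversion is just the frequency in Hz times 65536. A hedged sketch in plain C (the helper name is made up):

```c
#include <stdint.h>

/* AHIA_PlayerFreq is a 16.16 fixed-point value: multiply the desired
 * player frequency in Hz by 65536, i.e. shift left by 16 bits. */
static uint32_t hz_to_fixed(uint32_t hz)
{
    return hz << 16;
}
/* e.g. 50 Hz -> 3276800; the 200 Hz limit is 200 << 16 = 13107200 */
```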

That way it is possible, for example, to write some kind of replayer that decides which sounds need to be stopped, slid, have their volume turned up or down, etc.

Just make the main loop wait for the player to be 'done'.

You could do that by messaging, but also, for example, by using a signal. To stop the player you could use a boolean (set by pressing a button or whatever you want) that the player checks; it then signals the main loop to quit.

Please take a look at the PlaySineEverywhere.c example in the AHI developer archive.

Misc
There are a number of things that are called "latency." The thing that concerns me most is the time between when audio (like a microphone) hits the input and when it comes out the monitor output. You can measure this by putting something with a short rise time (a cross stick sound is good) into one channel, and connect the output of that channel to the input of another channel. Record a few seconds on both channels. Stop the recording, zoom in on the waveform of the two channels, and measure the time difference between them. That's the input/output latency.

Latency when playing samples is trickier because it depends on the program that's supporting the VST instrument. If you have a MIDI keyboard with sounds you could choose a similar sound on the keyboard and from the VST library, connect the analog output of the sample playback channel to one input, connect the synth output to another input, play your sound, record it to two tracks, and look at the time difference between the tracks. That's not totally accurate but it will get you a ballpark measurement.

If you do the following literally (in your code):

filebuffer = Open("e.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length1 = Read(filebuffer, p1, BUFFERSIZE);

filebuffer = Open("a.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length2 = Read(filebuffer, p2, BUFFERSIZE);

filebuffer = Open("d.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length3 = Read(filebuffer, p3, BUFFERSIZE);

filebuffer = Open("g.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length4 = Read(filebuffer, p4, BUFFERSIZE);

filebuffer = Open("b.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length5 = Read(filebuffer, p5, BUFFERSIZE);

filebuffer = Open("ec.raw", MODE_OLDFILE);
if (filebuffer == NULL) printf("\nfilebuffer NULL");
else length6 = Read(filebuffer, p6, BUFFERSIZE);

Then your variable "filebuffer" (which is actually the file handle returned by Open, not a buffer) gets overwritten before the handle is closed.

red: So I kind of expected something like:

filebuffer = Open("b.raw", MODE_OLDFILE);
if (filebuffer == NULL) {
    printf("\nfilebuffer NULL");
} else {
    length5 = Read(filebuffer, p5, BUFFERSIZE);
    if (Close(filebuffer)) {
        printf("\nfile b.raw closed successfully");
    } else {
        printf("\nfile b.raw did not close properly, but we cannot use the filehandle any more because it is no longer valid");
    }
}

You have to unload/free every channel and sound you allocated, whether used or not.

For example, something like this will loop over all allocated channels:

for (chan_no = 0; chan_no < num_of_channels; chan_no++) {
    if (channel[chan_no])
        free(channel[chan_no]);
}

Maybe if (channel[chan_no] != NULL) to be explicit.

To make certain, you can set every sound bank pointer to NULL before exit.

Examples
Another example.

Double-buffering is required though.

struct MsgPort    *AHIPort   = NULL;
struct AHIRequest *AHIReq    = NULL;
BYTE               AHIDevice = -1;
UBYTE              unit      = AHI_DEFAULT_UNIT;

static int  write_ahi_output(char *output_data, int output_size);
static void close_ahi_output(void);

static int open_ahi_output(void)
{
    if ((AHIPort = CreateMsgPort())) {
        if ((AHIReq = (struct AHIRequest *)
                 CreateIORequest(AHIPort, sizeof(struct AHIRequest)))) {
            AHIReq->ahir_Version = 4;
            if (!(AHIDevice = OpenDevice(AHINAME, unit,
                                         (struct IORequest *)AHIReq, 0))) {
                send_output = write_ahi_output;
                close_output = close_ahi_output;
                return 0;
            }
            DeleteIORequest((struct IORequest *)AHIReq);
            AHIReq = NULL;
        }
        DeleteMsgPort(AHIPort);
        AHIPort = NULL;
    }
    return -1;
}

static int write_ahi_output(char *output_data, int output_size)
{
    /* Wait until the previous buffer has finished playing */
    if (!CheckIO((struct IORequest *)AHIReq)) {
        WaitIO((struct IORequest *)AHIReq);
    }

    AHIReq->ahir_Std.io_Command = CMD_WRITE;
    AHIReq->ahir_Std.io_Flags   = 0;
    AHIReq->ahir_Std.io_Data    = output_data;
    AHIReq->ahir_Std.io_Length  = output_size;
    AHIReq->ahir_Std.io_Offset  = 0;
    AHIReq->ahir_Frequency      = rate;
    AHIReq->ahir_Type           = AHIST_S16S;
    AHIReq->ahir_Volume         = 0x10000;
    AHIReq->ahir_Position       = 0x8000;
    AHIReq->ahir_Link           = NULL;
    SendIO((struct IORequest *)AHIReq);
    return 0;
}

static void close_ahi_output(void)
{
    if (AHIReq && !CheckIO((struct IORequest *)AHIReq)) {
        AbortIO((struct IORequest *)AHIReq);
        WaitIO((struct IORequest *)AHIReq);
    }

    if (AHIReq) {
        CloseDevice((struct IORequest *)AHIReq);
        AHIDevice = -1;
        DeleteIORequest((struct IORequest *)AHIReq);
        AHIReq = NULL;
    }

    if (AHIPort) {
        DeleteMsgPort(AHIPort);
        AHIPort = NULL;
    }
}

High level ahi for sound playback - The idea is to create several i/o requests and then when you want to play a sound you pick one that is free and then simply start CMD_WRITE to it with BeginIO and then mark the i/o request as in use (ch->busy field in above code). What the SoundIO function does is check for replies from ahi.device that some i/o request has finished and then simply marks them as not in use any more. If no i/o request is free the PlaySnd function simply interrupts the one that has been playing for longest with AbortIO/WaitIO and then reuses that one.

char *snd_buffer[5];
int sound_file_size[5];

int number;

struct Process *sound_player;
int sound_player_done = 0;

void load_sound(char *name, int number)
{
    FILE *fp_filename;

    if ((fp_filename = fopen(name, "rb")) == NULL) {
        printf("can't open sound file\n");
        exit(0);
    }

    fseek(fp_filename, 0, SEEK_END);
    sound_file_size[number] = ftell(fp_filename);
    fseek(fp_filename, 0, SEEK_SET);

    snd_buffer[number] = (char *)malloc(sound_file_size[number]);

    fread(snd_buffer[number], sound_file_size[number], 1, fp_filename);

    fclose(fp_filename);
}

void play_sound_routine(void)
{
    struct MsgPort    *AHImp_sound     = NULL;
    struct AHIRequest *AHIios_sound[2] = {NULL, NULL};
    struct AHIRequest *AHIio_sound     = NULL;
    BYTE               AHIDevice_sound = -1;

    /* open/set up AHI */
    if ((AHImp_sound = CreateMsgPort()) != NULL) {
        if ((AHIio_sound = (struct AHIRequest *)
                 CreateIORequest(AHImp_sound, sizeof(struct AHIRequest))) != NULL) {
            AHIio_sound->ahir_Version = 4;
            AHIDevice_sound = OpenDevice(AHINAME, 0,
                                         (struct IORequest *)AHIio_sound, 0);
        }
    }

    if (AHIDevice_sound) {
        Printf("Unable to open %s/0 version 4\n", AHINAME);
        goto sound_panic;
    }

    AHIios_sound[0] = AHIio_sound;
    SetIoErr(0);

    AHIios_sound[0]->ahir_Std.io_Message.mn_Node.ln_Pri = 127;
    AHIios_sound[0]->ahir_Std.io_Command = CMD_WRITE;
    AHIios_sound[0]->ahir_Std.io_Data    = snd_buffer[number];
    AHIios_sound[0]->ahir_Std.io_Length  = sound_file_size[number];
    AHIios_sound[0]->ahir_Std.io_Offset  = 0;
    AHIios_sound[0]->ahir_Frequency      = 8000;      /* or e.g. 44100 */
    AHIios_sound[0]->ahir_Type           = AHIST_M8S; /* or e.g. AHIST_M16S */
    AHIios_sound[0]->ahir_Volume         = 0x10000;   /* Full volume */
    AHIios_sound[0]->ahir_Position       = 0x8000;    /* Centered */
    AHIios_sound[0]->ahir_Link           = NULL;

    DoIO((struct IORequest *)AHIios_sound[0]);

sound_panic:
    if (!AHIDevice_sound)
        CloseDevice((struct IORequest *)AHIio_sound);
    if (AHIio_sound)
        DeleteIORequest((struct IORequest *)AHIio_sound);
    if (AHImp_sound)
        DeleteMsgPort(AHImp_sound);
    sound_player_done = 1;
}

void stop_sound(void)
{
    Signal(&sound_player->pr_Task, SIGBREAKF_CTRL_C);
    while (sound_player_done != 1) {}
    sound_player_done = 0;
}

void play_sound(int num)
{
    number = num;

    sound_player = CreateNewProcTags(
        NP_Entry,    &play_sound_routine,
        NP_Priority, 1,
        NP_Name,     "Ahi raw-sound-player Process",
#ifdef __MORPHOS__
        NP_CodeType, CODETYPE_PPC,
#endif
        TAG_DONE);

    Delay(10); /* small delay to let sounds finish */
}

Low level for music playback

These steps will allow you to use the low-level AHI functions:

 * Create a message port and an AHIRequest with the appropriate functions from exec.library.
 * Open the device with OpenDevice, giving AHI_NO_UNIT as the unit.
 * Get an interface to the library with GetInterface, giving the io_Device field of the IORequest as the first parameter.

struct AHIIFace *IAHI;
struct Library *AHIBase;
struct AHIRequest *ahi_request;
struct MsgPort *mp;

if ((mp = IExec->CreateMsgPort())) {
    if ((ahi_request = (struct AHIRequest *)
             IExec->CreateIORequest(mp, sizeof(struct AHIRequest)))) {
        ahi_request->ahir_Version = 4;
        if (IExec->OpenDevice("ahi.device", AHI_NO_UNIT,
                              (struct IORequest *)ahi_request, 0) == 0) {
            AHIBase = (struct Library *)ahi_request->ahir_Std.io_Device;
            if ((IAHI = (struct AHIIFace *)
                     IExec->GetInterface(AHIBase, "main", 1, NULL))) {
                /* Interface obtained, we can now use the AHI functions */
                /* ... */
                /* Once we are done, drop the interface and free resources */
                IExec->DropInterface((struct Interface *)IAHI);
            }
            IExec->CloseDevice((struct IORequest *)ahi_request);
        }
        IExec->DeleteIORequest((struct IORequest *)ahi_request);
    }
    IExec->DeleteMsgPort(mp);
}

Once you have the AHI interface, its functions can be used. To start playing sounds you need to allocate the audio hardware (optionally asking the user for an audio mode and frequency), then load the samples for use with AHI. You do this with AHI_AllocAudio, AHI_ControlAudio and AHI_LoadSound.

struct AHIAudioCtrl *ahi_ctrl;

if ((ahi_ctrl = IAHI->AHI_AllocAudio(
         AHIA_AudioID,  AHI_DEFAULT_ID,
         AHIA_MixFreq,  AHI_DEFAULT_FREQ,
         AHIA_Channels, NUMBER_OF_CHANNELS, /* the desired number of channels */
         AHIA_Sounds,   NUMBER_OF_SOUNDS,   /* maximum number of sounds used */
         TAG_DONE))) {
    IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, TRUE, TAG_DONE);

    int i;
    for (i = 0; i < NUMBER_OF_SOUNDS; i++) {
        /* These variables need to be initialized */
        uint32 type;
        APTR samplearray;
        uint32 length;
        struct AHISampleInfo sample;

        sample.ahisi_Type    = type;        /* e.g. AHIST_M8S for 8-bit mono sound */
        sample.ahisi_Address = samplearray; /* must point to the sample data */
        sample.ahisi_Length  = length / IAHI->AHI_SampleFrameSize(type);

        if (IAHI->AHI_LoadSound(i + 1, AHIST_SAMPLE, &sample, ahi_ctrl) != 0) {
            /* error while loading sound, clean up */
        }
    }

    /* everything OK, play the sounds */
    /* ... */

    /* then unload the sounds and free the audio hardware */
    for (i = 0; i < NUMBER_OF_SOUNDS; i++)
        IAHI->AHI_UnloadSound(i + 1, ahi_ctrl);

    IAHI->AHI_ControlAudio(ahi_ctrl, AHIC_Play, FALSE, TAG_DONE);
    IAHI->AHI_FreeAudio(ahi_ctrl);
}

Use AHI_SetVol to set the volume, AHI_SetFreq to set the frequency and AHI_SetSound to play the sounds.


 * 1) include 
 * 2) include 
 * 3) include 
 * 4) include 
 * 5) include 

struct UserArgs {
    STRPTR file;
    LONG  *freq;
};

CONST TEXT Version[] = "$VER: ShellPlayer 1.0 (4.4.06)";

STATIC struct Library *PtPlayBase;
STATIC struct Task *maintask;
STATIC APTR modptr;
STATIC LONG frequency;
STATIC VOLATILE int player_done = 0;

STATIC VOID AbortAHI(struct MsgPort *port, struct IORequest *r1, struct IORequest *r2)
{
    if (!CheckIO(r1)) {
        AbortIO(r1);
        WaitIO(r1);
    }

    if (!CheckIO(r2)) {
        AbortIO(r2);
        WaitIO(r2);
    }

    GetMsg(port);
    GetMsg(port);
}

STATIC VOID StartAHI(struct AHIRequest *r1, struct AHIRequest *r2, WORD *buf1, WORD *buf2)
{
    PtRender(modptr, (BYTE *)(buf1), (BYTE *)(buf1+1), 4, frequency, 1, 16, 2);
    PtRender(modptr, (BYTE *)(buf2), (BYTE *)(buf2+1), 4, frequency, 1, 16, 2);

    r1->ahir_Std.io_Command = CMD_WRITE;
    r1->ahir_Std.io_Offset  = 0;
    r1->ahir_Std.io_Data    = buf1;
    r1->ahir_Std.io_Length  = frequency*2*2;
    r2->ahir_Std.io_Command = CMD_WRITE;
    r2->ahir_Std.io_Offset  = 0;
    r2->ahir_Std.io_Data    = buf2;
    r2->ahir_Std.io_Length  = frequency*2*2;

    r1->ahir_Link = NULL;
    r2->ahir_Link = r1;

    SendIO((struct IORequest *)r1);
    SendIO((struct IORequest *)r2);
}

STATIC VOID PlayerRoutine(void)
{
    struct AHIRequest req1, req2;
    struct MsgPort *port;
    WORD *buf1, *buf2;

    buf1 = AllocVec(frequency*2*2, MEMF_ANY);
    buf2 = AllocVec(frequency*2*2, MEMF_ANY);

    if (buf1 && buf2) {
        port = CreateMsgPort();

        if (port) {
            req1.ahir_Std.io_Message.mn_Node.ln_Pri = 0;
            req1.ahir_Std.io_Message.mn_ReplyPort = port;
            req1.ahir_Std.io_Message.mn_Length = sizeof(req1);
            req1.ahir_Version = 2;

            if (OpenDevice("ahi.device", 0, (struct IORequest *)&req1, 0) == 0) {
                req1.ahir_Type      = AHIST_S16S;
                req1.ahir_Frequency = frequency;
                req1.ahir_Volume    = 0x10000;
                req1.ahir_Position  = 0x8000;

                CopyMem(&req1, &req2, sizeof(struct AHIRequest));

                StartAHI(&req1, &req2, buf1, buf2);

                for (;;) {
                    struct AHIRequest *io;
                    ULONG sigs;

                    sigs = Wait(SIGBREAKF_CTRL_C | 1 << port->mp_SigBit);

                    if (sigs & SIGBREAKF_CTRL_C)
                        break;

                    if ((io = (struct AHIRequest *)GetMsg(port))) {
                        if (GetMsg(port)) {
                            /* Both IO requests finished, restart */
                            StartAHI(&req1, &req2, buf1, buf2);
                        }
                        else {
                            APTR link;
                            WORD *buf;

                            if (io == &req1) {
                                link = &req2;
                                buf = buf1;
                            }
                            else {
                                link = &req1;
                                buf = buf2;
                            }

                            PtRender(modptr, (BYTE *)buf, (BYTE *)(buf+1), 4, frequency, 1, 16, 2);

                            io->ahir_Std.io_Command = CMD_WRITE;
                            io->ahir_Std.io_Offset  = 0;
                            io->ahir_Std.io_Length  = frequency*2*2;
                            io->ahir_Std.io_Data    = buf;
                            io->ahir_Link = link;

                            SendIO((struct IORequest *)io);
                        }
                    }
                }

                AbortAHI(port, (struct IORequest *)&req1, (struct IORequest *)&req2);
                CloseDevice((struct IORequest *)&req1);
            }

            DeleteMsgPort(port);
        }
    }

    FreeVec(buf1);
    FreeVec(buf2);

    Forbid();
    player_done = 1;
    Signal(maintask, SIGBREAKF_CTRL_C);
}

int main(void)
{
    struct RDArgs *args;
    struct UserArgs params;
    int rc = RETURN_FAIL;

    maintask = FindTask(NULL);

    args = ReadArgs("FILE/A,FREQ/K/N", (IPTR *)&params, NULL);

    if (args) {
        PtPlayBase = OpenLibrary("ptplay.library", 0);

        if (PtPlayBase) {
            BPTR fh;

            if (params.freq) {
                frequency = *params.freq;
            }

            if (frequency < 4000 || frequency > 96000)
                frequency = 48000;

            fh = Open(params.file, MODE_OLDFILE);

            if (fh) {
                struct FileInfoBlock fib;
                APTR buf;

                ExamineFH(fh, &fib);

                buf = AllocVec(fib.fib_Size, MEMF_ANY);

                if (buf) {
                    Read(fh, buf, fib.fib_Size);
                }

                Close(fh);

                if (buf) {
                    ULONG type;

                    type = PtTest(params.file, buf, 1200);

                    modptr = PtInit(buf, fib.fib_Size, frequency, type);

                    if (modptr) {
                        struct Process *player;

                        player = CreateNewProcTags(
                            NP_Entry,    &PlayerRoutine,
                            NP_Priority, 1,
                            NP_Name,     "Player Process",
#ifdef __MORPHOS__
                            NP_CodeType, CODETYPE_PPC,
#endif
                            TAG_DONE);

                        if (player) {
                            rc = RETURN_OK;
                            Printf("Now playing \033[1m%s\033[22m at %ld Hz... Press CTRL-C to abort.\n",
                                   params.file, frequency);

                            do {
                                Wait(SIGBREAKF_CTRL_C);

                                Forbid();
                                if (!player_done) {
                                    Signal(&player->pr_Task, SIGBREAKF_CTRL_C);
                                }
                                Permit();
                            }
                            while (!player_done);
                        }

                        PtCleanup(modptr);
                    }
                    else {
                        PutStr("Unknown file!\n");
                    }
                }
                else {
                    PutStr("Not enough memory!\n");
                }
            }
            else {
                PutStr("Could not open file!\n");
            }

            CloseLibrary(PtPlayBase);
        }

        FreeArgs(args);
    }

    if (rc == RETURN_FAIL)
        PrintFault(IoErr(), NULL);

    return rc;
}

Other Examples
Master volume utility - anyone can make such a relatively simple utility. It is a matter of calling AHI_SetEffect with a master volume structure. You can make a window with a slider and call said function easily.

You write to AHI device, and AHI will write to a sound card, the native hardware, or even to a file. These options are user-configurable. AHI also performs the software mixing duties so that more than one sound can be played simultaneously.

AHI provides four 'units' for audio. This makes it possible to have a program play on the native hardware, and another play on a sound card by attaching the appropriate AHI driver to a unit number. For the software developer, AHI provides two ways to play audio. One is the AUDIO: DOS device. AHI can create a volume called AUDIO: that works like an AmigaDOS volume. You can read and write data directly to it and it plays through the speakers. This is the easiest way to write PCM, but it's not the best.

First of all, if a user takes the AUDIO: entry out of the mountlist, your program won't work and you get bombarded with stupid support questions like 'I've got AHI, why doesn't it work?'. The better option is to send IORequests to AHI. This allows you to control volume and balance settings while the program runs (with AUDIO: you set these when you open the file and you can't change them without closing and re-opening AUDIO:) and you can use a neat trick called double-buffering to improve efficiency. Double-buffering allows you to fill one audio buffer while another one is playing. This kind of asynchronous operation can prevent 'choppy' audio on slower systems.

We initialize AHI and then prepare and send an AHIRequest to ahi.device.

It's very important to calculate the number of bytes you want AHI to read from the buffer. You can cause a nasty crash indeed if it's incorrect! To do this, multiply the PCM count by the number of channels by the number of AHI buffers.
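As a sanity check in plain C (a hedged sketch; the helper name is made up, and it assumes a PCM sample count per buffer, a channel count, and a sample width in bytes):

```c
#include <stdint.h>

/* Illustrative only: the byte count AHI should read from a PCM buffer. */
static uint32_t buffer_bytes(uint32_t pcm_count, uint32_t channels,
                             uint32_t bytes_per_sample)
{
    return pcm_count * channels * bytes_per_sample;
}
/* e.g. 8000 frames of 16-bit stereo: buffer_bytes(8000, 2, 2) == 32000 */
```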

A quick note about volume and position: AHI uses a fairly arcane datatype called Fixed. A Fixed number consists of 32 bits: a sign bit, a 15-bit integer part, and a 16-bit fractional part. When I construct the AHI request, I multiply this number by 0x00010000 to convert it to the fixed value. If I use this code as part of a DOS background process, I can change the volume and balance on the fly so the next sample that's queued will be played louder or quieter. It's also possible to interrupt AHI so the change takes effect immediately, but I won't go into that.
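For example, converting an ordinary floating-point volume to Fixed (a hedged sketch in plain C; Fixed here is represented as a plain 32-bit integer):

```c
#include <stdint.h>

/* AHI's Fixed type: a sign bit, a 15-bit integer part and a 16-bit
 * fractional part. Converting a value is a multiplication by 0x10000. */
static int32_t to_fixed(double v)
{
    return (int32_t)(v * 0x10000);
}
/* to_fixed(1.0) == 0x10000 (full volume), to_fixed(0.5) == 0x8000 */
```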

Once the request is sent, we put the requisite bits in to check for CTRL-C and any AHI interrupt messages. Then it's time to swap buffers around.

Hooks
An old idea which is best avoided if possible. The hook function should be used to play/control the sample(s). It is called at the frequency it was initialized with (in your case 100 Hz).

So in your 'normal' code you would flip a switch somewhere, telling the hookfunction to start playing a sample (or do with it whatever you want).

In the hookfunction you then start playing the sample and/or apply effects with the ahi ctrl function (and others).

One example could be a module player (as in the good old .mod file format), where data is processed for each channel and effects are applied, etc.

In your case it would be a bit simpler than a mod player: you want to start playing a note and stop it at will, for instance when a counter reaches a certain value.

The (probable) reason your number is not printing is that this routine is called _a lot_ every second.

The gist is that you have to find a mechanism (whichever suits your purpose best) that uses your mouse clicks (or key presses) to 'feed' the player (hook function), and something that 'tells' the player to do something else with the playing sample (stop it, apply an effect, etc.).

You can use the data property of the hookfunc to 'give' / push a structure to your 'replay' routine so that you can for example tell the player that a certain sample started being played. The player can then decide (if counter reached a value for example) to actually stop the sample from playing and setting/changing the status in that structure so that the main program knows the sample can be 'played'/triggered again.
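A hedged sketch of such a hook function (the calling convention follows the AHIA_PlayerFunc autodoc; `PlayerData`, its fields and the tick count are purely illustrative):

```c
/* Illustrative shared state between the main program and the hook. */
struct PlayerData {
    volatile int trigger;   /* set by the main program on a mouse click */
    int counter;            /* counts hook calls while the note plays */
};

/* Called by AHI AHIA_PlayerFreq times per second. */
static ULONG PlayerFunc(struct Hook *hook, struct AHIAudioCtrl *actrl, APTR ignored)
{
    struct PlayerData *pd = hook->h_Data;

    if (pd->trigger) {
        pd->trigger = 0;
        pd->counter = 0;
        AHI_SetSound(0, 1, 0, 0, actrl, AHISF_IMM);  /* start sound 1 on channel 0 */
    }
    if (++pd->counter == 50) {
        AHI_SetSound(0, AHI_NOSOUND, 0, 0, actrl, AHISF_IMM);  /* stop after 50 ticks */
    }
    return 0;
}
```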

AmiArcadia also uses AHI and has C source, as does ScummVM AGA from Aminet - all the source code is there. To create the AHI callback hook you will also need to include the SDI header files.

AHI_AllocAudioA
audioctrl = AHI_AllocAudioA( tags );

struct AHIAudioCtrl *AHI_AllocAudioA( struct TagItem * );

audioctrl = AHI_AllocAudio( tag1, ... );

struct AHIAudioCtrl *AHI_AllocAudio( Tag, ... );

AHI_AllocAudioRequestA
requester = AHI_AllocAudioRequestA( tags );

struct AHIAudioModeRequester *AHI_AllocAudioRequestA(struct TagItem * );

requester = AHI_AllocAudioRequest( tag1, ... );

struct AHIAudioModeRequester *AHI_AllocAudioRequest( Tag, ... );

AHI_AudioRequestA
success = AHI_AudioRequestA( requester, tags );

BOOL AHI_AudioRequestA( struct AHIAudioModeRequester *, struct TagItem * );

result = AHI_AudioRequest( requester, tag1, ... );

BOOL AHI_AudioRequest( struct AHIAudioModeRequester *, Tag, ... );

AHI_BestAudioIDA
ID = AHI_BestAudioIDA( tags );

ULONG AHI_BestAudioIDA( struct TagItem * );

ID = AHI_BestAudioID( tag1, ... );

ULONG AHI_BestAudioID( Tag, ... );

AHI_ControlAudioA
error = AHI_ControlAudioA( audioctrl, tags );

ULONG AHI_ControlAudioA( struct AHIAudioCtrl *, struct TagItem * );

error = AHI_ControlAudio( AudioCtrl, tag1, ...);

ULONG AHI_ControlAudio( struct AHIAudioCtrl *, Tag, ... );

AHI_FreeAudio
AHI_FreeAudio( audioctrl );

void AHI_FreeAudio( struct AHIAudioCtrl * );

AHI_FreeAudioRequest
AHI_FreeAudioRequest( requester );

void AHI_FreeAudioRequest( struct AHIAudioModeRequester * );

AHI_GetAudioAttrsA
success = AHI_GetAudioAttrsA( ID, [audioctrl], tags );

BOOL AHI_GetAudioAttrsA( ULONG, struct AHIAudioCtrl *, struct TagItem * );

success = AHI_GetAudioAttrs( ID, [audioctrl], attr1, &result1, ...);

BOOL AHI_GetAudioAttrs( ULONG, struct AHIAudioCtrl *, Tag, ... );

AHI_LoadSound
error = AHI_LoadSound( sound, type, info, audioctrl );

ULONG AHI_LoadSound( UWORD, ULONG, IPTR, struct AHIAudioCtrl * );

AHI_NextAudioID
next_ID = AHI_NextAudioID( last_ID );

ULONG AHI_NextAudioID( ULONG );

AHI_PlayA
AHI_PlayA( audioctrl, tags );

void AHI_PlayA( struct AHIAudioCtrl *, struct TagItem * );

AHI_Play( AudioCtrl, tag1, ...);

void AHI_Play( struct AHIAudioCtrl *, Tag, ... );

AHI_SampleFrameSize
size = AHI_SampleFrameSize( sampletype );

ULONG AHI_SampleFrameSize( ULONG );

AHI_SetEffect
error = AHI_SetEffect( effect, audioctrl );

ULONG AHI_SetEffect( IPTR, struct AHIAudioCtrl * );

AHI_SetFreq
AHI_SetFreq( channel, freq, audioctrl, flags );

void AHI_SetFreq( UWORD, ULONG, struct AHIAudioCtrl *, ULONG );

AHI_SetSound
AHI_SetSound( channel, sound, offset, length, audioctrl, flags );

void AHI_SetSound( UWORD, UWORD, ULONG, LONG, struct AHIAudioCtrl *, ULONG );

AHI_SetVol
AHI_SetVol( channel, volume, pan, audioctrl, flags );

void AHI_SetVol( UWORD, Fixed, sposition, struct AHIAudioCtrl *, ULONG );

AHI_UnloadSound
AHI_UnloadSound( sound, audioctrl );

void AHI_UnloadSound( UWORD, struct AHIAudioCtrl * );