Mail Archives: djgpp/1999/11/02/09:26:02

Date: Tue, 2 Nov 1999 14:14:34 +0200 (IST)
From: Eli Zaretskii <eliz AT is DOT elta DOT co DOT il>
X-Sender: eliz AT is
To: Rob Kramer <robk AT cyberway DOT com DOT sg>
cc: djgpp AT delorie DOT com
Subject: Re: DMA despair.
In-Reply-To: <199911020932.RAA25646@westgate.cyberway.com.sg>
Message-ID: <Pine.SUN.3.91.991102140455.21122A-100000@is>
MIME-Version: 1.0
Reply-To: djgpp AT delorie DOT com
X-Mailing-List: djgpp AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com

On Tue, 2 Nov 1999, Rob Kramer wrote:

> Size is 256k (262144), and the allocation works OK.

To make sure this is *really* OK, invoke system("mem /c") from your 
program before and after the allocation, and make sure the reported DOS 
(conventional) memory changes accordingly.
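A minimal sketch of that check, assuming the buffer comes from `__dpmi_allocate_dos_memory` (the 256K size is taken from your message; this is illustration, not your actual code):

```c
/* Sketch: verify a DOS-memory allocation really shows up in
   conventional memory.  DJGPP-specific; assumes the buffer is
   allocated with __dpmi_allocate_dos_memory. */
#include <stdlib.h>
#include <dpmi.h>

int main(void)
{
    int selector;
    int segment;

    system("mem /c");   /* note free conventional memory before */

    /* Size is passed in 16-byte paragraphs: 262144 / 16 = 16384. */
    segment = __dpmi_allocate_dos_memory(262144 / 16, &selector);
    if (segment == -1)
        return 1;       /* allocation failed */

    system("mem /c");   /* should now report about 256K less */

    __dpmi_free_dos_memory(selector);
    return 0;
}
```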

> The DMA controller 
> only needs to know about the buffer's physical address (bit 0 & 1 hardwired 
> to zero), the buffer size, whether it should be cyclic or not, and what 
> stepsize I would like to use.

I'm confused.  How does the controller know that the data is there?  In 
other words, how do you tell it that it can begin working on the data?

> Each 'step' report should result in a PCI INTA# 
> interrupt. The chip is ignoring my data, so I haven't had a single interrupt so 
> far.

Perhaps the problem is with receiving the interrupts, rather than with 
your buffer setup.  What interrupt (which IRQ) does the board trigger?  
(I'm not sure I understand what you mean by ``PCI INTA# interrupt''.)
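In case the IRQ delivery itself is the problem: the usual DJGPP skeleton for hooking a hardware interrupt looks roughly like the sketch below.  The IRQ number 10 and vector 0x72 are examples only (IRQs 8-15 map to vectors 0x70-0x77), and a real handler must also lock its code and data against paging:

```c
/* Sketch: hooking a hardware interrupt in DJGPP.  IRQ 10 / vector
   0x72 are placeholders; a production handler must also lock the
   handler's code and data (e.g. with __dpmi_lock_linear_region)
   and unmask the IRQ at the PIC. */
#include <go32.h>
#include <dpmi.h>
#include <pc.h>

static volatile int irq_count;
static _go32_dpmi_seginfo old_handler, new_handler;

static void irq_handler(void)
{
    irq_count++;
    outportb(0xA0, 0x20);   /* EOI to slave PIC (IRQs 8-15) */
    outportb(0x20, 0x20);   /* EOI to master PIC */
}

void install_handler(void)
{
    int vector = 0x72;      /* IRQ 10 maps to vector 0x72 */

    _go32_dpmi_get_protected_mode_interrupt_vector(vector, &old_handler);
    new_handler.pm_offset = (unsigned long) irq_handler;
    new_handler.pm_selector = _go32_my_cs();
    _go32_dpmi_allocate_iret_wrapper(&new_handler);
    _go32_dpmi_set_protected_mode_interrupt_vector(vector, &new_handler);
}
```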

> My buffer is allocated starting at DOS segment 0x320c. So I initially fill the 
> buffer at 0x320c0 (linear) and pass that address to the DMA controller.

Are you sure the call to dosmemput is okay?  How exactly is that coded?
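For comparison, the usual way to code it in DJGPP is something like this (`dos_segment` and `mpeg_data` are placeholder names, not names from your program; `dos_segment` would be the real-mode segment the allocation returned, 0x320c in your case):

```c
/* Sketch: copying data into conventional memory with dosmemput.
   dos_segment and mpeg_data are hypothetical names; the third
   argument to dosmemput is the *linear* address, i.e. segment * 16. */
#include <stddef.h>
#include <sys/movedata.h>

void fill_dma_buffer(int dos_segment, const char *mpeg_data, size_t size)
{
    dosmemput(mpeg_data, size, dos_segment * 16);
}
```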

> I 
> never get to refill the buffer since there's no interrupt. The controller seems 
> to be cycling through the buffer over and over, but it doesn't do anything 
> with it. I made a little real-mode program that dumps data to the screen, and 
> if I run that after killing the MPEG application, it shows correct MPEG 
> header data at 0x3000:20c0. (It's not overwritten yet, running on vanilla 
> DOS).

I'm not sure I understand the meaning of this.  Are you telling me that a 
real-mode program that uses essentially the same code does work, while 
the DJGPP version doesn't?

Or am I to understand that the first buffer seems to be processed and the 
results put at 0x3000:20c0, but there's no interrupt for your application 
to know that it can fetch the processed data?

By ``MPEG application'' do you mean your DJGPP program that doesn't seem 
to work, or some other program?
