Mail Archives: geda-user/2013/09/01/17:22:50
Guys -
On Sun, Sep 01, 2013 at 12:47:54PM -0600, John Doty wrote:
> On Aug 31, 2013, at 10:38 PM, Larry Doolittle wrote:
> > I'm very interested in being
> > able to write generic numeric code, have it simulate (at first)
> > at "infinite" precision, then establish real-life bounds and precision
> > needs based on SNR goals, resulting in concrete scaled-fixed-point
> > variables. That is well beyond existing language capabilities.
> Well, you can't really simulate at infinite precision. However, you *can* do algebraic circuit analysis at infinite precision. That's what gnetlist -g mathematica is for.
Double-precision floating-point counts as "infinite" precision in
my world. But I want to see simulations at that abstraction level
work (e.g., pass regression tests) before I (or Free Software under
my control) decide that variable x can be represented as an 18-bit
word with the binary point between bits 14 and 15, without degrading
the S/N of result y by more than 0.2 dB.
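To make that concrete, here is the kind of regression check I have
in mind, as a rough Octave sketch; the word split, test vector, and
filter are all made up for illustration:

  W = 18; F = 14;  % word width and fractional bits; F follows the binary point
  q = @(v) max(min(round(v*2^F), 2^(W-1)-1), -2^(W-1)) / 2^F;  % round, saturate

  x = 0.9*sin(2*pi*0.013*(0:9999)');   % stand-in test vector
  b = ones(1, 8)/8;                    % stand-in DSP block (boxcar filter)
  y_ref = filter(b, 1, x);             % double-precision ("infinite") reference
  y_fix = filter(b, 1, q(x));          % same block with x quantized
  err_dB = 10*log10(sum((y_fix - y_ref).^2) / sum(y_ref.^2))

The last line reports how much noise quantizing x adds to y, relative
to the signal; whether that fits in a 0.2 dB S/N budget depends on
where it lands relative to the system's noise floor.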
Without any such fancy software to help, I do what I think is
typical in industry: prototype my DSP in Octave, then transcribe it
to Verilog, estimating the required scaling and precision by hand,
and start debugging all over again.
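For what it's worth, the hand estimate usually amounts to something
like this (again a made-up Octave sketch, not code from any real
design): run the prototype, look at the range of an intermediate,
and split the word between integer and fractional bits:

  W = 18;                                   % target word width
  v = 3.2*randn(1e5, 1);                    % stand-in intermediate from the prototype
  int_bits  = ceil(log2(max(abs(v)))) + 1;  % magnitude bits needed, plus sign
  frac_bits = W - int_bits;
  printf("suggest s%d with %d integer and %d fractional bits\n", W, int_bits, frac_bits);

Getting the precision side right still takes a noise check like the
sketch above, which is exactly the part I'd like the tools to
automate.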
- Larry