X-Authentication-Warning: delorie.com: mail set sender to geda-user-bounces using -f
X-Recipient: geda-user AT delorie DOT com
Date: Thu, 5 Feb 2015 08:08:11 +0100 (CET)
X-X-Sender: igor2 AT igor2priv
To: geda-user AT delorie DOT com
X-Debug: to=geda-user AT delorie DOT com from="gedau AT igor2 DOT repo DOT hu"
From: gedau AT igor2 DOT repo DOT hu
Subject: [geda-user] FOSDEM/edacore - preferences on file format, language
Message-ID:
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; format=flowed; charset=US-ASCII
Reply-To: geda-user AT delorie DOT com

Hi all,

I tried to search the web for edacore but couldn't find much detail, so I am not sure whether it would be a code library, a symbol/footprint library, or just a set of conventions.

I don't think the ideal solution is to have a single code base. There are very different needs, which can be satisfied by very different code. Having standardized interchange formats and reference libraries for them is an excellent idea, though.

I'd like to join the general buzz of the mailing list by sharing my own personal preferences on this topic.

1. File format: please keep it as simple as possible. Assume different tools written in exotic languages will try to use it. Don't assume that using some generic container format (XML, JSON, etc.) will magically solve all issues; domain-specific details are very important. If the decision is made to use such a format, please try to choose the simplest one, and don't assume all languages will have ready-to-use libs that fully understand the corner cases and exotic features of the format. Keep data as data: don't use script/code to describe footprints, as that makes an importer/exporter many times more expensive to write from scratch. Obviously use text (easier version controlling, easier to write tools/scripts for, and if the files get big they are easy to compress).

2. Database, lib, grouping: please don't be too specific on this!
Having a standard index "file" format and saying "a package may contain footprints, symbols and this index 'file' describing them" is totally OK. But specifying exactly how:

- these files or packages are versioned, and/or
- they are transmitted over the network, and/or
- they are collected and organized in large libs

is imho a bad idea, because preferences on these vary a lot and specific solutions don't scale. For example, I prefer having plain files in directories (using my file system "as a database"), and I would not use any tool that wanted to store my symbols in SQL. However, I do realize that if I had millions of symbols and footprints, plain files might not work very efficiently. What works at small scale doesn't always work at large scale, but spaceships designed for the big case won't scale down to the small cases either.

3. Symbols, footprints... and glue! I have my own set of patches/addons for gnetlist to provide the glue layer, and it works great in my practice. My three groups are:

- generic symbols that don't know anything about footprints (e.g. a single-channel opamp); ideally they don't know pin numbers or slots either
- generic footprints that don't know anything about symbols (e.g. so8)
- glue: a small file that connects a symbol to a footprint, mapping the pins as needed

On the schematics I use the symbol and specify a glue instead of a footprint. No more heavy symbols encoding slots and pinouts, and it is easy to switch between breadboard and pcb. Which leads to...

4. In a symbol put on schematics, leave room for a "flavor" or "target". By that I mean properties with different values for different uses, e.g. two glue files (or footprints), one for the "breadboard" flow and one for the "pcb" flow. Ideally any property could have multiple values, each tagged with a target. This also allows attributes that are considered only in a simulation flow, or only when printing the schematics.

5. Design for filters: there will be scripts that are interested only in a small subset of the info.
Ideally, extracting that subset should be easy and should not depend on understanding too much of the irrelevant parts. Irrelevant parts should be easy to store and reproduce verbatim, without having to parse them. This lets tools for simple tasks stay simple filters that read the file, parse/manipulate the relevant parts, and copy the rest blindly. This assumes there's some sort of high-level "frame" of the language and a low-level "details" part that encodes the actual content; a filter needs to understand the frame, but doesn't need to understand all sorts of details, only the type it is interested in.

Regards,

Igor2
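P.S. To make point 5 concrete, here is a minimal sketch of such a filter. It assumes a purely hypothetical line-based format (invented here for illustration, not any real gEDA format) where the "frame" is simply: one record per line, record type first. The filter parses only the record type it cares about and copies every other line through verbatim:

```python
# Hypothetical line-based format: each line is "type rest-of-record".
# The filter understands only this "frame"; records it doesn't care
# about are reproduced verbatim, never parsed.

def filter_stream(lines, interesting_type, transform):
    """Yield output lines: transform records of interesting_type, pass the rest."""
    for line in lines:
        fields = line.split()
        if fields and fields[0] == interesting_type:
            yield transform(line)
        else:
            yield line  # irrelevant part: copied blindly

# Example task: uppercase the names of all "footprint" records.
def rename(line):
    typ, name, *rest = line.split()
    return " ".join([typ, name.upper()] + rest)

if __name__ == "__main__":
    sample = [
        "symbol opamp1 generic",
        "footprint so8 smd",
        "glue opamp1 so8 1:out 2:in- 3:in+",
    ]
    for out in filter_stream(sample, "footprint", rename):
        print(out)
```

The point is that the "symbol" and "glue" lines pass through untouched; the tool never needs a full parser for them, only for the one record type it manipulates.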