OK, my argument for DNS is this:
1. It's proven technology for name spaces - even for user names (see Athena)
2. It replicates automatically. There isn't a big "write a replicating
   database" problem
3. Everyone has their machine already configured to query it
4. Everyone already has C library support for querying it
5. It scales - compare the size of the entire .com space with your
   CD collection. Think how many queries/second the entire DNS
   infrastructure is handling
6. It scales - you can add 10 or 20 or more servers and you can delegate
hash spaces easily
7. It's fault tolerant. The entire system was designed to keep working when
   one server is dead.
8. It's firewall-friendly on the whole.
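
To make points 4 and 6 concrete, here is one plausible way a client could map a CDDB-style disc ID onto a DNS name - a sketch of my own, not something from this mail. The zone name "cd.example.org" and the label layout are assumptions:

```python
def discid_to_dns_name(disc_id: str, zone: str = "cd.example.org") -> str:
    """Map an 8-hex-digit disc ID to a DNS name.

    The first byte of the ID becomes its own label, so the 256 possible
    prefixes ("00"-"ff") can each be delegated to a different server --
    the "delegate hash spaces" idea from point 6.
    """
    disc_id = disc_id.lower()
    if len(disc_id) != 8 or any(c not in "0123456789abcdef" for c in disc_id):
        raise ValueError("disc ID must be 8 hex digits")
    prefix = disc_id[:2]          # delegation label, e.g. "94"
    return f"{disc_id}.{prefix}.{zone}"

print(discid_to_dns_name("940A1B0C"))   # 940a1b0c.94.cd.example.org
```

A client would then fetch, say, a TXT record for that name through the ordinary resolver (point 4) - res_query() from the C library, or `dig TXT 940a1b0c.94.cd.example.org` from the shell.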
So I believe DNS solves the main problem, which is lookups for most users.
Anyone with a bit of Perl can write an html->dns proxy service so you can
handle the hard cases.
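
The proxy idea above might look something like this. Alan suggested Perl; this sketch is Python, the zone name is a placeholder of mine, and the DNS lookup itself is passed in as a callable so the shape is clear without tying it to one resolver library:

```python
def render_lookup(disc_id: str, resolve_txt) -> str:
    """Return an HTML fragment for one disc-ID lookup.

    `resolve_txt` is any callable taking a DNS name and returning the
    TXT record data, or None if the name does not exist (in practice a
    thin wrapper around a resolver library or a `dig` subprocess).
    """
    name = f"{disc_id.lower()}.cd.example.org"   # placeholder zone
    data = resolve_txt(name)
    if data is None:
        return f"<p>No entry for {disc_id}</p>"
    return f"<p><b>{disc_id}</b>: {data}</p>"

# Usage with a fake resolver standing in for a real DNS query:
fake = {"940a1b0c.cd.example.org": "Some Artist / Some Album"}
print(render_lookup("940A1B0C", fake.get))
```

Clients stuck behind a web-only firewall hit the proxy over HTTP; everyone else queries DNS directly.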
> I will confess I don't know of a way of updating my zone files from a client
> remotely without using something other than DNS such as an http query.
DDNS (dynamic DNS update, RFC 2136). But I don't believe that is the right model.
> You'll have to enlighten me there... (This would be when people add
> tracks/cds etc)...
I am assuming one or more sites would collate new entries into some other
database or archive, and this would be used to write 256 (or, later, 65536)
sparse name tables (each section handles just its own fraction of the
database). DNS will then mirror the new tables around the world and slowly
expire caches etc.
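
The collation step could be sketched as below - my reading of the idea, not a spec from the mail. The master archive is split into per-prefix tables keyed on the first byte of the disc ID (256 tables; keying on the first two bytes gives the 65536 case); each table becomes one zone that secondaries mirror by normal zone transfer:

```python
from collections import defaultdict

def partition(entries, prefix_len=2):
    """Group (disc_id, title) entries into sparse tables keyed by the
    first `prefix_len` hex digits: 2 digits -> 256 tables, 4 -> 65536."""
    tables = defaultdict(list)
    for disc_id, title in entries:
        tables[disc_id.lower()[:prefix_len]].append((disc_id, title))
    return tables

entries = [("940a1b0c", "Album A"), ("94ffee00", "Album B"),
           ("0311aa55", "Album C")]
tables = partition(entries)
print(sorted(tables))        # ['03', '94']
print(len(tables["94"]))     # 2
```

Only the tables whose prefixes actually changed need rewriting, so a new batch of submissions touches a handful of zones rather than the whole database.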
Alan