[csw-maintainers] confused newbie

Philip Brown phil at bolthole.com
Thu Mar 12 17:38:52 CET 2009


On Thu, Mar 12, 2009 at 03:35:41PM +0100, Dagobert Michelsen wrote:
> On 12.03.2009 at 10:26, Wojtek Meler wrote:
> > There is also stated:
> >
> > "Don't forget the "other" architecture(s) !
> > Now you get to do it ALL OVER AGAIN, on the other build machine!"
> >
> > It forced me to build the package on all systems, which breaks the rule
> > "single package for all solaris revs of a cpu". I think that the technote
> > at http://www.opencsw.org/standards/core_principles should be mentioned
> > there.
> 
> I also asked this sometime ago and there was no conclusion. I see two
> directions to go:
> 
> (1) Build one package for sparc and i386 and dispatch everything else
>      inside the package for NFS-sharing

Yikes...
that statement is a bit ambiguous, particularly for a new maintainer.

For new maintainers' purposes, let us state explicitly that the usual
expected standard is:

"build one package for sparc, [AND ONE PACKAGE FOR] i386"

:-)



> Personally, I think NFS-sharing is bad, because for many packages it is
> difficult to cleanly roll out.

"many"? I dont think your "many" is a description backed with actual
numbers ;-)

> This is at least true for all packages starting daemons, and worse for
> packages using SMF.

well, actually, if we transition daemons to use cswclassutils, it becomes
a lot easier, provided we can then agree on a central location for start
scripts, housed in /opt/csw/[init.d?]

At that point, it will then be relatively trivial to run an initialization
script on each machine that looks in there and "does the right thing"
for the machine it is being run on.
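
To make that concrete, here is a rough sketch of what such a per-machine
init wrapper might look like. The /opt/csw/init.d path and the per-host
enabled-daemons list are just illustrative assumptions on my part, not
anything we have agreed on:

  #!/bin/sh
  # sketch only: walk a (possibly NFS-shared) /opt/csw/init.d and start
  # just the daemons this particular host has opted into.  Both paths
  # below are hypothetical, not an agreed-on OpenCSW layout.

  INITDIR=/opt/csw/init.d                 # shared start scripts
  ENABLED=/etc/opt/csw/enabled-daemons    # local, per-machine list

  [ -d "$INITDIR" ] || exit 0
  [ -f "$ENABLED" ] || exit 0

  for script in "$INITDIR"/*; do
      name=`basename "$script"`
      # start the daemon only if this host lists it as enabled
      if grep "^$name\$" "$ENABLED" >/dev/null 2>&1; then
          "$script" start
      fi
  done

cswclassutils would presumably still register the scripts at package
install time; the wrapper above only covers the per-host decision of
which daemons actually get started.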


That being said, I don't think running daemons over NFS is a good idea
at all! :-D But it can be done, and is done, at some sites.

[sadly, where I work is one example of this. They have one master
 site-local init script that is rdisted out to all machines, and
 does some magical detection at boot time to determine which daemons
 to start]



