Discussion:
[multiarch] Proposal for *-dev packages
Goswin von Brederlow
2004-01-14 18:26:53 UTC
Hi,

You all have seen the other thread about multiarch? This one is a
different part of the puzzle.

The proposal is to make all *-dev packages "Architecture: all". This
should be a strong "should" or "must" directive, and a "must" for
build-essential, for sarge+1.


Why should *-dev be "Architecture: all"?
----------------------------------------

When installing packages for multiple archs, some packages will contain
the same files, which is of course a problem. Also, when compiling
packages, the -dev packages for each library have to be installed. Now we
have *-dev packages that are "Architecture: all" and others that are
"Architecture: <arch>".

The latter are a problem for multiarch systems. To compile 32-bit
programs the 32-bit flavour must be installed, and to compile 64-bit
programs the 64-bit flavour. Overlaps in the files make it complicated
to install both. Both having the same name would be a problem in dpkg,
both having different names would mean changing basically every source
package in Debian.

It would be best if all *-dev packages could be "Architecture:
all". That way there would be only one package that works for
everyone.


Let's look at why some *-dev packages are not "Architecture: all". If I
forgot some reasons (as I certainly have), feel free to add them,
especially if you also add a solution to avoid them.


Why are some *-dev packages "Architecture: <arch>"?
---------------------------------------------------

1. static libraries

Most sources and people don't need the static libraries. The static
libraries could be split into -static packages and sources that do
need them can depend specifically on them.

2. different header files per architecture

The differences in the headers can be merged by using preprocessor
conditionals.
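
For illustration, a minimal sketch of such a merge (libfoo and foo_ssize
are hypothetical names; the macros are the usual gcc predefines):

/* merged, arch-independent header for a hypothetical libfoo */
#if defined(__i386__) || defined(__powerpc__)
typedef int foo_ssize;                  /* 32-bit arches */
#elif defined(__x86_64__) || defined(__alpha__) || defined(__ia64__)
typedef long foo_ssize;                 /* 64-bit arches */
#else
#error "libfoo: architecture not yet supported"
#endif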

The only packages that have this problem on a larger scale are glibc +
linux-kernel-headers. Those two had best be an exception to the rule;
they are messy enough as it is.

3. support binaries in the -dev package

I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?

I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.

4. debug libraries

Debug libraries can go into -dbg packages, as so many already do.



Having "Architecture: all" *-dev packages would simplify the work for
multiarch support greatly, so I hope no one comes up with a stronger
reason against it than the ones above.

Let the flames burn brightly,
Goswin
Daniel Kobras
2004-01-14 19:58:11 UTC
Post by Goswin von Brederlow
2. different header files per architecture
The differences in the headers can be merged by using preprocessor
conditionals.
How to include arch-specific information that is probed at configure
time? What about porting to new archs when there's no preprocessor
conditional yet?
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
Same problem here. Those scripts may contain arch-specific data, e.g.
arch-dependent CFLAGS.
Post by Goswin von Brederlow
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
Now instead of -dev, the -helper packages conflict. So what was the
purpose again?

Daniel.
Goswin von Brederlow
2004-01-14 20:32:18 UTC
Post by Daniel Kobras
Post by Goswin von Brederlow
2. different header files per architecture
The differences in the headers can be merged by using preprocessor
conditionals.
How to include arch-specific information that is probed at configure
That would be either static information that can be gathered once and
hardcoded into the headers, or it is information particular to the
buildd, which should never make it into the -dev package.
Post by Daniel Kobras
time? What about porting to new archs when there's no preprocessor
conditional yet?
Add one to gcc first thing in the morning. Or set -D__NEW__ in the
compile flags.
Post by Daniel Kobras
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
Same problem here. Those scripts may contain arch-specific data, e.g.
arch-dependent CFLAGS.
There are only so many archs; they can easily all be included and
the right one chosen at runtime.
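
A rough sketch of that selection idea in C (all names hypothetical; the
real *-config helpers are mostly shell scripts, but the logic would be
the same): ship one table with every arch's flags and print the entry the
caller asks for.

#include <stdio.h>
#include <string.h>

/* per-arch link flags, all shipped in the one helper */
static const struct { const char *arch; const char *libs; } flags[] = {
    { "i386",  "-L/usr/lib -lfoo"   },  /* 32-bit flavour */
    { "amd64", "-L/usr/lib64 -lfoo" },  /* 64-bit flavour */
};

int main(int argc, char **argv)
{
    /* target arch requested by the caller, e.g. "libfoo-config amd64" */
    const char *target = (argc > 1) ? argv[1] : "i386";
    unsigned i;

    for (i = 0; i < sizeof flags / sizeof flags[0]; i++) {
        if (strcmp(flags[i].arch, target) == 0) {
            puts(flags[i].libs);
            return 0;
        }
    }
    fprintf(stderr, "unknown arch: %s\n", target);
    return 1;
}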
Post by Daniel Kobras
Post by Goswin von Brederlow
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
Now instead of -dev, the -helper packages conflict. So what was the
purpose again?
The -helper package should either work for both archs, or both should
be installable at the same time (/usr/lib/package/helper and
/usr/lib64/package/helper work fine). I don't expect this to be more
than a handful of cases, if any.

Regards,
Goswin
Daniel Jacobowitz
2004-01-14 20:52:46 UTC
Post by Goswin von Brederlow
Post by Daniel Kobras
Post by Goswin von Brederlow
2. different header files per architecture
The differences in the headers can be merged by using preprocessor
conditionals.
How to include arch-specific information that is probed at configure
That would be either static information that can be gathered once and
hardcoded into the headers, or it is information particular to the
buildd, which should never make it into the -dev package.
So you've just created:
- Another possible bug, since everyone installing from source builds
the headers on the target system. Who knows whether configure-time
information makes it into a header or not? There's a good chance the
Debian maintainer won't even notice.
- Another set of new packages, since almost every -dev contains
static libraries nowadays.

Solve the basic problem first and then deal with this sort of detail.
I personally think you can't do this without impairing the robustness
of package builds something terrible.
--
Daniel Jacobowitz
MontaVista Software Debian GNU/Linux Developer
Daniel Kobras
2004-01-14 21:29:16 UTC
Post by Goswin von Brederlow
Post by Daniel Kobras
How to include arch-specific information that is probed at configure
That would be either static information that can be gathered once and
hardcoded into the headers, or it is information particular to the
buildd, which should never make it into the -dev package.
So before I upload a new library package I log into 11+ architectures,
run configure, compare headers and patch in the diffs wrapped in #ifdef
<ARCH_FOO>. Sure, I can do that as long as you don't mind me cursing you
to hell and back along the way. 'find /usr/include -name "*config.h"'
should get you a rough idea of the magnitude of this problem, by the way.
Post by Goswin von Brederlow
Post by Daniel Kobras
time? What about porting to new archs when there's no preprocessor
conditional yet?
Add one to gcc first thing in the morning. Or set -D__NEW__ in the
compile flags.
With __NEW__ invoking what?

#ifdef __NEW__
#error Architecture not yet supported
#endif

?
Post by Goswin von Brederlow
Post by Daniel Kobras
Same problem here. Those scripts may contain arch-specific data, e.g.
arch-dependent CFLAGS.
There are only so many archs; they can easily all be included and
the right one chosen at runtime.
And how to determine what needs to go into mips CFLAGS when I'm
packaging on x86? Not to mention Hurd or the BSDs. Those scripts are
generated from configure for a reason. Furthermore, I don't assume such
changes stand a chance of ever being merged upstream. I for one don't
find the idea of dragging such cruft along in diff.gz forever appealing.

Daniel.
Goswin von Brederlow
2004-01-14 22:38:59 UTC
Post by Daniel Kobras
Post by Goswin von Brederlow
Post by Daniel Kobras
How to include arch-specific information that is probed at configure
That would be either static information that can be gathered once and
hardcoded into the headers, or it is information particular to the
buildd, which should never make it into the -dev package.
So before I upload a new library package I log into 11+ architectures,
run configure, compare headers and patch in the diffs wrapped in #ifdef
<ARCH_FOO>. Sure, I can do that as long as you don't mind me cursing you
to hell and back along the way. 'find /usr/include -name "*config.h"'
should get you a rough idea of the magnitude of this problem, by the way.
Rename libfoo-dev to libfoo-arch-dev and have libfoo-dev depend on
the right set of arch-specific debs. Is that better?
Post by Daniel Kobras
Post by Goswin von Brederlow
Post by Daniel Kobras
time? What about porting to new archs when there's no preprocessor
conditional yet?
Add one to gcc first thing in the morning. Or set -D__NEW__ in the
compile flags.
With __NEW__ invoking what?
#ifdef __NEW__
#error Architecture not yet supported
#endif
?
Post by Goswin von Brederlow
Post by Daniel Kobras
Same problem here. Those scripts may contain arch-specific data, e.g.
arch-dependent CFLAGS.
There are only so many archs; they can easily all be included and
the right one chosen at runtime.
And how to determine what needs to go into mips CFLAGS when I'm
packaging on x86? Not to mention Hurd or the BSDs. Those scripts are
generated from configure for a reason. Furthermore, I don't assume such
changes stand a chance of ever being merged upstream. I for one don't
find the idea of dragging such cruft along in diff.gz forever appealing.
Daniel.
If the information comes from configure, get it from configure. Don't
stick it into the headers at compile time; that's just stupid.

Regards,
Goswin
Daniel Kobras
2004-01-14 23:24:40 UTC
Post by Goswin von Brederlow
Rename libfoo-dev to libfoo-arch-dev and have libfoo-dev depend on
the right set of arch-specific debs. Is that better?
Yes, but wasn't your aim to make libfoo-arch1-dev and libfoo-arch2-dev
installable in parallel?
Post by Goswin von Brederlow
If the information comes from configure, get it from configure. Don't
stick it into the headers at compile time; that's just stupid.
I don't see why it would make sense to re-run configure on a user's
system when the information is already present at compile/configure
time. And I don't see the stupidity in

foo.h:
/* Want to be portable to systems w/o int64_t */
typedef long foo_int64;

foo.pc:
Libs: -L${libdir} -lpthread

You're trying to shoehorn data into binary-all that is inherently
architecture dependent.

Daniel.
Goswin von Brederlow
2004-01-15 09:50:39 UTC
Post by Daniel Kobras
Post by Goswin von Brederlow
Rename libfoo-dev to libfoo-arch-dev and have libfoo-dev depend on
the right set of arch-specific debs. Is that better?
Yes, but wasn't your aim to make libfoo-arch1-dev and libfoo-arch2-dev
installable in parallel?
Post by Goswin von Brederlow
If the information comes from configure, get it from configure. Don't
stick it into the headers at compile time; that's just stupid.
I don't see why it would make sense to re-run configure on a user's
system when the information is already present at compile/configure
time. And I don't see the stupidity in
/* Want to be portable to systems w/o int64_t */
typedef long foo_int64;
That's just plain wrong on 32-bit systems. You need an #ifdef construct
there already; you have to check the size of long at some point.

But keeping it your way, the right[TM] way for such compatibility
would be to test for stdint.h in the configure script and then

#ifndef HAVE_STDINT
typedef long int64_t;
#endif
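
For what it's worth, that size check can be done with the preprocessor
alone, without any configure output (a minimal sketch; foo_int64 is the
hypothetical type from the example, and only <limits.h> is assumed):

#include <limits.h>

#if LONG_MAX > 2147483647L              /* long is wider than 32 bits */
typedef long foo_int64;
#else
typedef long long foo_int64;            /* fall back to a 64-bit type */
#endif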
Post by Daniel Kobras
Libs: -L${libdir} -lpthread
You're trying to shoehorn data into binary-all that is inherently
architecture dependent.
foo.pc files apparently live in /usr/lib/, which means they go to
/usr/lib64/ for 64-bit multiarch. But even so, there is no
architecture-dependent information in your example. That foo.pc file
could be copied or linked to both places in the arch: all -dev
package.

If it really is architecture-dependent, move it into the -helper,
-static, -whatever package you choose to use for the
architecture-dependent stuff. And yes, that means you have to create
that one new package, which you have to create anyway for multiarch.

Regards,
Goswin
Daniel Kobras
2004-01-15 10:19:44 UTC
Post by Goswin von Brederlow
Post by Daniel Kobras
I don't see why it would make sense to re-run configure on a user's
system when the information is already present at compile/configure
time. And I don't see the stupidity in
/* Want to be portable to systems w/o int64_t */
typedef long foo_int64;
That's just plain wrong on 32-bit systems. You need an #ifdef construct
there already; you have to check the size of long at some point.
That's precisely the point. I'm talking about configure here, so foo.h
would be auto-generated from

foo.h.in:
typedef @FOO_INT64_T@ foo_int64;

Thought that was obvious...
Post by Goswin von Brederlow
Post by Daniel Kobras
Libs: -L${libdir} -lpthread
You're trying to shoehorn data into binary-all that is inherently
architecture dependent.
foo.pc files apparently live in /usr/lib/, which means they go to
/usr/lib64/ for 64-bit multiarch. But even so, there is no
architecture-dependent information in your example.
In fact it is, and that's why I chose this example. Looks innocent, but
there are at least 12 different ways you have to link with pthreads on
different architectures. Furthermore, just because you link with libfoo
on x86 doesn't necessarily mean you have to link with it on ppc. Might
be an optimized x86-only lib, for instance.
Post by Goswin von Brederlow
That foo.pc file could be copied or linked to both places in the
arch: all -dev package.
But one does not know the contents of foo.pc for architecture bar before
the package has been built on that arch.

Daniel.
Goswin von Brederlow
2004-01-15 14:48:38 UTC
Post by Daniel Kobras
Post by Goswin von Brederlow
Post by Daniel Kobras
I don't see why it would make sense to re-run configure on a user's
system when the information is already present at compile/configure
time. And I don't see the stupidity in
/* Want to be portable to systems w/o int64_t */
typedef long foo_int64;
That's just plain wrong on 32-bit systems. You need an #ifdef construct
there already; you have to check the size of long at some point.
That's precisely the point. I'm talking about configure here, so foo.h
would be auto-generated from

foo.h.in:
typedef @FOO_INT64_T@ foo_int64;

Thought that was obvious...
Which should result in int64_t on every Debian arch. No problem there.
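
In other words, the substituted header would come out identical
everywhere (a minimal sketch; foo_int64 as in the example, assuming C99
<stdint.h>, which every Debian arch provides):

#include <stdint.h>

/* the same on every arch, so the header could be Architecture: all */
typedef int64_t foo_int64;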
Post by Daniel Kobras
Post by Goswin von Brederlow
Post by Daniel Kobras
Libs: -L${libdir} -lpthread
You're trying to shoehorn data into binary-all that is inherently
architecture dependent.
foo.pc files apparently live in /usr/lib/, which means they go to
/usr/lib64/ for 64-bit multiarch. But even so, there is no
architecture-dependent information in your example.
In fact it is, and that's why I chose this example. Looks innocent, but
there are at least 12 different ways you have to link with pthreads on
different architectures. Furthermore, just because you link with libfoo
on x86 doesn't necessarily mean you have to link with it on ppc. Might
be an optimized x86-only lib, for instance.
Post by Goswin von Brederlow
That foo.pc file could be copied or linked to both places in the
arch: all -dev package.
But one does not know the contents of foo.pc for architecture bar before
the package has been built on that arch.
Daniel.
Which is fine. Just keep it out of the arch: all -dev package. I don't
see a problem with splitting them out.

Regards,
Goswin
Stephen Frost
2004-01-14 20:20:00 UTC
Post by Daniel Kobras
Post by Goswin von Brederlow
2. different header files per architecture
The differences in the headers can be merged by using preprocessor
conditionals.
How to include arch-specific information that is probed at configure
time? What about porting to new archs when there's no preprocessor
conditional yet?
I wouldn't worry too much about porting to a new arch prior to there
being a preprocessor conditional. If that's the case, then the toolchain
isn't ready yet and we shouldn't be trying to distribute packages on
those archs yet anyway. As for arch-specific information, I'm not sure
what you mean there; if it's arch-specific in a header file, put
preprocessor conditionals around it.
Post by Daniel Kobras
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
Same problem here. Those scripts may contain arch-specific data, e.g.
arch-dependent CFLAGS.
So you check what the architecture is in the script, that's not terribly
difficult.
Post by Daniel Kobras
Post by Goswin von Brederlow
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
Now instead of -dev, the -helper packages conflict. So what was the
purpose again?
The purpose is to avoid having to go through every source package and
change its Build-Depends line, if at all possible.

Stephen
Daniel Kobras
2004-01-15 09:18:51 UTC
Post by Stephen Frost
I wouldn't worry too much about porting to a new arch prior to there
being a preprocessor conditional. If that's the case, then the toolchain
isn't ready yet and we shouldn't be trying to distribute packages on
those archs yet anyway. As for arch-specific information, I'm not sure
what you mean there; if it's arch-specific in a header file, put
preprocessor conditionals around it.
The preprocessor conditional is not the problem. The content to put within
conditionals is. Putting arch-specific data in a binary-all header means
I need to know the results of, say, configure on any arch the header
will be installed on when I'm initially putting together the package.
Post by Stephen Frost
So you check what the architecture is in the script, that's not terribly
difficult.
Again the problem is not with the check but with the desired output.

Regards,

Daniel.
Goswin von Brederlow
2004-01-15 14:45:22 UTC
Post by Daniel Kobras
Post by Stephen Frost
I wouldn't worry too much about porting to a new arch prior to there
being a preprocessor conditional. If that's the case, then the toolchain
isn't ready yet and we shouldn't be trying to distribute packages on
those archs yet anyway. As for arch-specific information, I'm not sure
what you mean there; if it's arch-specific in a header file, put
preprocessor conditionals around it.
The preprocessor conditional is not the problem. The content to put within
conditionals is. Putting arch-specific data in a binary-all header means
I need to know the results of, say, configure on any arch the header
will be installed on when I'm initially putting together the package.
You already do; otherwise you couldn't put the result into your
configure script to be tested in the first place.
Post by Daniel Kobras
Post by Stephen Frost
So you check what the architecture is in the script, that's not terribly
difficult.
Again the problem is not with the check but with the desired output.
Regards,
Daniel.
Do you have an example of this problem?

Regards,
Goswin
Goswin von Brederlow
2004-01-15 10:00:32 UTC
Post by Stephen Frost
Post by Daniel Kobras
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
Same problem here. Those scripts may contain arch-specific data, e.g.
arch-dependent CFLAGS.
So you check what the architecture is in the script, that's not terribly
difficult.
Post by Daniel Kobras
Post by Goswin von Brederlow
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
Now instead of -dev, the -helper packages conflict. So what was the
purpose again?
The purpose is to avoid having to go through every source package and
change its Build-Depends line, if at all possible.
Stephen
And the -helper packages can have different names without much trouble
and don't need to conflict or be multiarch-capable. It's easy enough to
check for the target CPU at runtime and behave accordingly.

Regards,
Goswin
Isaac Clerencia
2004-01-14 20:28:11 UTC
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
I have 46 *-config files installed, and all of them are scripts.
They are also gradually being replaced by pkg-config (.pc) files[0]

[0] http://www.freedesktop.org/software/pkgconfig/
--
Isaac Clerencia | Using Debian GNU/Linux | JID: ***@jabber.org
----------------------------------------------------------------
Alternativas libres :: http://alts.homelinux.net
Mi bitacora :: http://isaac.is-a-geek.net/blog
----------------------------------------------------------------
Please encrypt your messages when e-mailing me, GPG ID: 54E672DE
Thomas Viehmann
2004-01-14 19:26:47 UTC
Post by Goswin von Brederlow
You all have seen the other thread about multiarch? This one is a
different part of the puzzle.
Why should *-dev be "Architecture: all"?
Why is adding hundreds or thousands of packages better than one header
field ("can be installed in parallel with itself")?
Why can't Provides/Conflicts be used to do this?

Your point would get stronger if you'd discuss why these couldn't be
used to solve the problem.

Regards

T.
--
Thomas Viehmann, <http://beamnet.de/tv/>
Goswin von Brederlow
2004-01-15 09:57:52 UTC
Post by Thomas Viehmann
Post by Goswin von Brederlow
You all have seen the other thread about multiarch? This one is a
different part of the puzzle.
Why should *-dev be "Architecture: all"?
Why is adding hundreds or thousands of packages better than one header
field ("can be installed in parallel with itself")?
It is not. It is far inferior. But allowing "Abi: ..." and having
multiple packages with the same name installed apparently isn't liked
at all.
Post by Thomas Viehmann
Why can't Provides/Conflicts be used to do this?
Your point would get stronger if you'd discuss why these couldn't be
used to solve the problem.
Say we have the following:

Package: libfoo-dev
Version: 1.2.3

Package: lib64foo-dev
Version: 1.2.3
Provides: libfoo-dev, libfoo-dev (= 1.2.3)

Source: bla
Build-Depends: libfoo-dev (>= 1.2.3)

Now the user does:

apt-get install libfoo-dev
(some months pass)
apt-get build-dep bla
apt-get -b source bla

FTBFS: the wrong package is installed; lib64foo-dev is missing.

Provides don't prevent the wrong package from being installed.

And Conflicts means that you can't have one user wanting 32-bit programs
and another wanting 64-bit programs on the same system.

Regards,
Goswin
Scott James Remnant
2004-01-14 20:18:56 UTC
Post by Goswin von Brederlow
The proposal is to make all *-dev packages "Architecture: all". This
should be a strong "should" or "must" directive, and a "must" for
build-essential, for sarge+1.
Yet again you've forgotten the .la and .pc files contained in a large
majority of the -dev packages, which are not intended to be
architecture-independent.
Post by Goswin von Brederlow
Having "Architecture: all" *-dev packages would simplify the work for
multiarch support greatly, so I hope no one comes up with a stronger
reason against it than the ones above.
Those reasons are strong enough. I thought you *didn't* want to change
every package in the archive; now suddenly you do... make up your mind,
eh? :p

Scott
--
Have you ever, ever felt like this?
Had strange things happen? Are you going round the twist?
Sven Luther
2004-01-14 22:41:22 UTC
Post by Goswin von Brederlow
Hi,
You all have seen the other thread about multiarch? This one is a
different part of the puzzle.
The proposal is to make all *-dev packages "Architecture: all". This
should be a strong "should" or "must" directive, and a "must" for
build-essential, for sarge+1.
Notice that this is only OK for C header files. I believe some -dev
packages also contain static object files for static linking, and I also
have some OCaml -dev packages, which contain object files too.

Friendly,

Sven Luther
Anthony DeRobertis
2004-01-15 01:16:52 UTC
Post by Goswin von Brederlow
When installing packages for multiple archs, some packages will contain
the same files, which is of course a problem.
... Now that is a PITA ...
Post by Goswin von Brederlow
The latter [Architecture: <arch>] are a problem for multiarch systems.
To compile 32-bit programs the 32-bit flavour must be installed, and to
compile 64-bit programs the 64-bit flavour. Overlaps in the files make
it complicated to install both.
But you don't need to install both. You install the i386 one when doing
the 32-bit build, and the amd64 one when doing the 64-bit build.
Post by Goswin von Brederlow
Both having the same name would be a problem in dpkg,
both having different names would mean changing basically every source
package in Debian.
I thought that having, e.g., coreutils:amd64 and coreutils:i386
installed at the same time was impossible (without --force, of
course[0]). So having the same name wouldn't be a problem for dpkg. In
order to install libfoo-dev:amd64, you'd have to remove
libfoo-dev:i386.
Post by Goswin von Brederlow
all". That way there would be only one package that works for
everyone.
Yes, it would. It'd even have the beneficial effect of reducing archive
size, etc.

But, alas, I think it is not feasible to require it for all packages. A
"should", maybe; a "must", no.
Post by Goswin von Brederlow
Why are some *-dev packages "Architecture: <arch>"?
---------------------------------------------------
1. static libraries
Most sources and people don't need the static libraries. The static
libraries could be split into -static packages and sources that do
need them can depend specifically on them.
I disagree strongly. Many people do need static libraries. Currently,
it is a "must" in policy for -dev packages with libraries to provide
them.

I object strongly to going from "must" to "must not" in a single
release. That is contrary to our normal policy procedure of gradual
change to allow people to adjust. I mean, it took us how many years to
move from /usr/doc to /usr/share/doc?
Post by Goswin von Brederlow
2. different header files per architecture
The differences in the headers can be merged by using preprocessor
conditionals.
In C[++], yes. In assembly, not sure. In many other languages, no.
Though, thankfully, this does not affect many packages. However, when
it does, merging through preprocessor conditionals (if such are even
available) is no trivial undertaking, and I don't think it's something
we can reasonably require of packagers.
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
Compiled programs aplenty. A quick one that comes to mind is dpkg-dev.
libc has a few as well.

Even the scripts are architecture-dependent. Consider:

***@bohr:anthony$ gtk-config --libs gtk
-L/usr/lib -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodule -lglib -ldl
-lXi -lXext -lX11 -lm
***@bohr:anthony$ gtk-config --cflags gtk
-I/usr/include/gtk-1.2 -I/usr/include/glib-1.2 -I/usr/lib/glib/include
Post by Goswin von Brederlow
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
How does this help? So instead of having the problem with libx-dev
being both i386 and amd64, you have the problem with libx-helper being
i386 and amd64. I don't see how that is any different, other than being
one more package to make dselect and aptitude just a tad bit less
usable.
Post by Goswin von Brederlow
Having "Architecture: all" *-dev packages would simplify the work for
multiarch support greatly, so I hope no one comes up with a stronger
reason against it than the ones above.
I have a few more:

-dev packages include a symlink from lib.so to lib.so.SONAME. I don't
know of any guarantee that the soname is architecture-independent. I
would not be surprised to see cases where it isn't.

Actually, this is a very large problem: How is an
architecture-independent package supposed to provide a symlink from
lib.so to lib.so.SONAME when the location of lib.so and lib.so.SONAME
varies between architectures? (/lib, /lib64, another for MIPS, etc.)

-dev packages should include documentation. That would be yet another
package, -doc.

You've essentially proposed splitting -dev packages into:
-static Static libraries
-helper Helper binaries
-doc (mine) documentation
-dev header files, .so link

So, from one package, to at least two (almost everyone needs a -doc
splitoff). Up to four. So that means we'd be introducing at least

egrep 'Package: .*-dev$' /var/lib/dpkg/available | wc -l
1118

1118 new packages. A reasonable guess would, I think[1], be around two
new packages per -dev package, so ~2200 new packages.

Just what dselect and aptitude need. 20% more packages to make it even
harder for users to look through the lists.

Currently adding gasoline to the fire,
Anthony
Post by Goswin von Brederlow
Let the flames burn brightly,
Goswin
[0] Anyone who mentions a horse gets thrown in as fuel for the
flames.
[1] Meaning I'm unwilling to do actual research into 1100 packages
to find out.
Goswin von Brederlow
2004-01-15 09:29:25 UTC
Post by Anthony DeRobertis
Post by Goswin von Brederlow
When installing packages for multiple archs, some packages will contain
the same files, which is of course a problem.
... Now that is a PITA ...
Post by Goswin von Brederlow
The latter [Architecture: <arch>] are a problem for multiarch systems.
To compile 32-bit programs the 32-bit flavour must be installed, and to
compile 64-bit programs the 64-bit flavour. Overlaps in the files make
it complicated to install both.
But you don't need to install both. You install the i386 one when
doing the 32-bit build, and the amd64 one when doing the 64-bit build.
Some packages (libc) need to be installed for both 32-bit and 64-bit,
at least on the buildds (to compile gcc).

Others could do with conflicting -dev packages. But the majority of
people I talked to so far would like to have them installed in
parallel instead of having to purge and reinstall them each time they
compile for a different bit depth.

Just consider compiling a benchmark that tests 32-bit and 64-bit
support. That would be hell.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Both having the same name would be a problem in dpkg,
both having different names would mean changing basically every source
package in Debian.
I thought that having, e.g., coreutils:amd64 and coreutils:i386
installed at the same time was impossible (without --force, of
course[0]). So having the same name wouldn't be a problem for dpkg. In
order to install libfoo-dev:amd64, you'd have to remove
libfoo-dev:i386.
Having libfoo (libfoo:i386) and lib64foo (libfoo:amd64) installed at the
same time is practically a must for essential and base packages. It's an
absolute must for glibc.

Of course this creates problems all over the place; that's what we are
trying to solve.

Renaming all dual-installable packages for one arch is one way.
Using the ABI: <abi> field to differentiate packages and allowing
packages with equal names (but different ABIs) is another.
Having *-dev binary-all is some sort of middle ground.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
all". That way there would be only one package that works for
everyone.
Yes, it would. It'd even have the beneficial effect of reducing
archive size, etc.
But, alas, I think it is not feasible to require it for all
packages. A "should", maybe; a "must", no.
Post by Goswin von Brederlow
Why are some *-dev packages "Architecture: <arch>"?
---------------------------------------------------
1. static libraries
Most sources and people don't need the static libraries. The static
libraries could be split into -static packages and sources that do
need them can depend specifically on them.
I disagree strongly. Many people do need static libraries. Currently,
it is a "must" in policy for -dev packages with libraries to provide
them.
I'm not saying they should disappear. They should only be moved into a
separate arch: <any> package on which the arch: all -dev package can
depend (if the static lib is so important).
Post by Anthony DeRobertis
I object strongly to going from "must" to "must not" in a single
release. That is contrary to our normal policy procedure of gradual
change to allow people to adjust. I mean, it took us how many years to
move from /usr/doc to /usr/share/doc?
For multiarch -dev packages to work there has to be a split one way or
the other:

1. make -dev arch: all and move any architecture-dependent files into
another package.

2. move any architecture-independent files into a -common package.

Point 2 means changing a lot of existing Build-Depends lines. It doesn't
really change the number of new packages or the work to get header
files consistent for multiarch.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
2. different header files per architecture
The differences in the headers can be merged by using preprocessor
conditionals.
In C[++], yes. In assembly, not sure. In many other languages, no.
Though, thankfully, this does not affect many packages. However, when
it does, merging through preprocessor conditionals (if such are even
available) is no trivial undertaking, and I don't think it's something
we can reasonably require of packagers.
Worst case, you ship different directories or the package gets excluded
from multiarch -dev support. It's probably acceptable for
non-build-essential packages to conflict in such cases, but it shouldn't
be the rule.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
3. support binaries in the -dev package
I'm thinking about gtk-config, sdl-config, kde-config, ...
Are all of those scripts, or are there some compiled programs in the
mix?
Compiled programs aplenty. A quick one that comes to mind is dpkg-dev.
libc has a few as well.
dpkg-dev:amd64 works to build both i386 and amd64 packages, and the
right architecture can be pulled in by build-essential. It could be
split, but it's not a necessity here.
Post by Anthony DeRobertis
-L/usr/lib -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodule -lglib
-ldl -lXi -lXext -lX11 -lm
-I/usr/include/gtk-1.2 -I/usr/include/glib-1.2 -I/usr/lib/glib/include
That has to be patched for multiarch to check which arch the target CPU
is actually going to compile for. Depending on that, the output of the
script has to change to use /lib/ or /lib64/.

As it is now, for non-multiarch, where is there any
architecture-dependent information in there?
Post by Anthony DeRobertis
Post by Goswin von Brederlow
I suggest splitting any binary programs (if there are any) out into a
-helper package and having the -dev package depend on it.
How does this help? So instead of having the problem with libx-dev
being both i386 and amd64, you have the problem with libx-helper being
i386 and amd64. I don't see how that is any different, other than
being one more package to make dselect and aptitude just a tad bit
less usable.
The existing sources depend on the one libx-dev package, which in turn
pulls in the required set of libx-helper packages relevant for the
architecture:

Depends: libx-helper [i386, m68k, alpha, ia64, mips, sparc, powerpc, s390x], lib64x-helper [amd64, mips64, sparc64, s390x, ppc64]

On non-biarch systems you get libx-helper, on multiarch systems you
get libx-helper and lib64x-helper and on pure amd64, mips64, sparc64,
s390x, ppc64 systems you only get lib64x-helper.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Having "Architecture: all" *-dev packages would simplify the work for
multiarch support greatly, so I hope no one comes up with a stronger
reason against it than the ones above.
-dev packages include a symlink from lib.so to lib.so.SONAME. I don't
know of any guarantee that the soname is architecture-independent. I
would not be surprised to see cases where it isn't.
Actually, this is a very large problem: How is an
architecture-independent package supposed to provide a symlink from
lib.so to lib.so.SONAME when the location of lib.so and lib.so.SONAME
varies between architectures? (/lib, /lib64, another for MIPS, etc.)
Postinst script. Depending on the subarchs of the host, one, two or
three links are set.
Post by Anthony DeRobertis
-dev packages should include documentation. That would be yet another
package, -doc.
You have arch-dependent docs?
Post by Anthony DeRobertis
-static Static libraries
-helper Helper binaries
-doc (mine) documentation
-dev header files, .so link
So, from one package, to at least two (almost everyone needs a -doc
splitoff). Up to four. So that means we'd be introducing at least
If you have -static, -helper and -doc, and all are arch-dependent, no one
is stopping you from putting them all into one deb. The names are just
examples.
Post by Anthony DeRobertis
egrep 'Package: .*-dev$' /var/lib/dpkg/available | wc -l
1118
1118 new packages. A reasonable guess would, I think[1], be around two
new packages per -dev package, so ~2200 new packages.
One is enough to make it work. And one (libfoo-common) is what would
be required for libfoo-dev and lib64foo-dev to work too. Nothing
gained, nothing lost. The lib64foo-dev way is actually more package
names to deal with for dselect/aptitude.
Post by Anthony DeRobertis
Just what dselect and aptitude need. 20% more packages to make it even
harder for users to look through the lists.
There also seem to be ~1300 packages that have files in some /lib/
dir. All those need to be changed to /lib64/ for amd64. That doesn't
create 1300 new packages for amd64 but renames them. But then dselect
and aptitude have to combine i386 and amd64 packages into one list.

Hey, now instead of 13K packages you have 26K packages, with ~16K
having unique names.


I don't think 13K or 26K or 1M packages makes a difference
anymore. A list of all packages is just unusable anyway. Think of
something better.

Regards,
Goswin
Tollef Fog Heen
2004-01-15 11:48:22 UTC
* Goswin von Brederlow

| Others could do with conflicting -dev packages. But the majority of
| people I talked to so far would like to have them installed in parallel
| instead of having to purge and reinstall them each time they compile
| for a different bit depth.
|
| Just consider compiling a benchmark that tests 32-bit and 64-bit
| support. That would be hell.

We aren't gentoo. Users aren't supposed to do that, but if they do,
they should use a chroot. Optimize for the common case.
--
Tollef Fog Heen ,''`.
UNIX is user friendly, it's just picky about who its friends are : :' :
`. `'
`-
Goswin von Brederlow
2004-01-15 14:40:56 UTC
Post by Tollef Fog Heen
* Goswin von Brederlow
| Others could do with conflicting -dev packages. But the majority of
| people I talked to so far would like to have them installed in parallel
| instead of having to purge and reinstall them each time they compile
| for a different bit depth.
|
| Just consider compiling a benchmark that tests 32-bit and 64-bit
| support. That would be hell.
We aren't gentoo. Users aren't supposed to do that, but if they do,
they should use a chroot. Optimize for the common case.
You couldn't use a chroot; you couldn't install both 32-bit and 64-bit
-dev packages in it.

The common case includes all the DDs that will go out and buy an amd64
system next. They should be able to compile, develop and test i386
and amd64 debs easily. Having 4 systems (2 for work, 2 for
sbuild/pbuilder for uploads) is some bloat. You just tripled the
number of chroots needed.

Regards,
Goswin
Daniel Jacobowitz
2004-01-15 15:14:51 UTC
Post by Goswin von Brederlow
Post by Tollef Fog Heen
* Goswin von Brederlow
| Others could do with conflicting -dev packages. But the majority of
| people I talked to so far would like to have them installed in parallel
| instead of having to purge and reinstall them each time they compile
| for a different bit depth.
|
| Just consider compiling a benchmark that tests 32-bit and 64-bit
| support. That would be hell.
We aren't gentoo. Users aren't supposed to do that, but if they do,
they should use a chroot. Optimize for the common case.
You couldn't use a chroot; you couldn't install both 32-bit and 64-bit
-dev packages in it.
The common case includes all the DDs that will go out and buy an amd64
system next. They should be able to compile, develop and test i386
and amd64 debs easily. Having 4 systems (2 for work, 2 for
sbuild/pbuilder for uploads) is some bloat. You just tripled the
number of chroots needed.
So instead you'd rather double the number of -dev packages in the
archive for everyone?

No.
--
Daniel Jacobowitz
MontaVista Software Debian GNU/Linux Developer
Stephen Frost
2004-01-15 15:44:26 UTC
Post by Goswin von Brederlow
You couldn't use a chroot; you couldn't install both 32-bit and 64-bit
-dev packages in it.
The point is that you could install the 32-bit -devs in a chroot and
have the 64-bit -devs on the main system.
Post by Goswin von Brederlow
The common case includes all the DDs that will go out and buy an amd64
system next. They should be able to compile, develop and test i386
and amd64 debs easily. Having 4 systems (2 for work, 2 for
sbuild/pbuilder for uploads) is some bloat. You just tripled the
number of chroots needed.
Personally I'm not likely to compile i386 debs on my amd64 system. I
expect we're going to continue to have i386 buildds, and that could be
on an amd64 system if someone sets up a chroot for it (and is confident
it won't break things). The only time I see this claim being valid is
prior to libraries being ported to amd64, and in that case those who
need it might be better off just installing i386 to begin with.

Stephen
Goswin von Brederlow
2004-01-15 16:54:29 UTC
Post by Stephen Frost
Post by Goswin von Brederlow
You couldn't use a chroot; you couldn't install both 32-bit and 64-bit
-dev packages in it.
The point is that you could install the 32-bit -devs in a chroot and
have the 64-bit -devs on the main system.
Post by Goswin von Brederlow
The common case includes all the DDs that will go out and buy an amd64
system next. They should be able to compile, develop and test i386
and amd64 debs easily. Having 4 systems (2 for work, 2 for
sbuild/pbuilder for uploads) is some bloat. You just tripled the
number of chroots needed.
Personally I'm not likely to compile i386 debs on my amd64 system. I
expect we're going to continue to have i386 buildds, and that could be
on an amd64 system if someone sets up a chroot for it (and is confident
it won't break things). The only time I see this claim being valid is
prior to libraries being ported to amd64, and in that case those who
need it might be better off just installing i386 to begin with.
Stephen
Every time the maintainer of a library makes a new version, he should
test that the library works correctly for native and multiarch if
possible. If it just means running "linux32 debuild" and "debuild",
that's a reasonable request. That's what we want, but not necessarily
need.

Regards,
Goswin
Stephen Frost
2004-01-15 17:34:02 UTC
Post by Goswin von Brederlow
Every time the maintainer of a library makes a new version, he should
test that the library works correctly for native and multiarch if
possible. If it just means running "linux32 debuild" and "debuild",
that's a reasonable request. That's what we want, but not necessarily
need.
Honestly, I tend to disagree with this. We could ask the same of all
maintainers to check on all archs they can but it gets to a point where
it's not entirely reasonable. I expect those building on amd64 to
test/upload amd64 .debs and those building on i386 to test/upload
i386 .debs. We have the buildds and we have unstable; things will get
tested on the other archs pretty quickly, and developers will quickly
pick up on problem areas and where they need to be careful, just like we
do for other archs. Probably better, since amd64 will likely be more
popular than some of our other archs.

So, to put it simply, even if we made it easy for DDs with amd64's, I
wouldn't expect it to happen and I'm not sure we should try and force it
to.

Stephen
Goswin von Brederlow
2004-01-15 18:07:07 UTC
Post by Stephen Frost
Post by Goswin von Brederlow
Every time the maintainer of a library makes a new version, he should
test that the library works correctly for native and multiarch if
possible. If it just means running "linux32 debuild" and "debuild",
that's a reasonable request. That's what we want, but not necessarily
need.
Honestly, I tend to disagree with this. We could ask the same of all
maintainers to check on all archs they can but it gets to a point where
it's not entirely reasonable. I expect those building on amd64 to
They should test it on all archs they have reasonable access to. Of
course I don't expect them to test a package with a typo in the
description on every arch, but for feature changes they should. At least
a representative set, i.e. 32- and 64-bit, little- and big-endian.

Not enough maintainers do.
Post by Stephen Frost
test/upload amd64 .debs and those building on i386 to test/upload
i386 .debs. We have the buildds and we have unstable; things will get
tested on the other archs pretty quickly, and developers will quickly
pick up on problem areas and where they need to be careful, just like we
do for other archs. Probably better, since amd64 will likely be more
popular than some of our other archs.
So, to put it simply, even if we made it easy for DDs with amd64's, I
wouldn't expect it to happen and I'm not sure we should try and force it
to.
Stephen
The easier you make it, the more maintainers will do it.

Regards,
Goswin
Stephen Frost
2004-01-15 18:13:36 UTC
Post by Goswin von Brederlow
They should test it on all archs they have reasonable access to. Of
course I don't expect them to test a package with a typo in the
description on every arch, but for feature changes they should. At least
a representative set, i.e. 32- and 64-bit, little- and big-endian.
Not enough maintainers do.
We all (at one point anyway) had reasonable access to various different
archs. Very few maintainers ever took advantage of this except after
problems were reported. I don't see why you think this is going to be
any different. Most DDs aren't going to be running out and buying an
amd64 box next week, or even in the next year, so it's unlikely that
they're going to be doing testing of the amd64 builds; they are probably
just going to have to incorporate patches sent to them by those of us
who actually have amd64 systems.
Post by Goswin von Brederlow
The easier you make it, the more maintainers will do it.
Honestly, we've really made it pretty easy to build chroots now, so
that's certainly a viable method. If DDs (as they 'should') are
already building in a chroot, then it'll be relatively simple for them to
put together one for i386 and one for amd64 on their new amd64 system.

Of course, most don't build in a chroot, even though it is pretty easy
for them to. Let's not pander to ideals that aren't actually going to
happen.

Stephen
Claus Färber
2004-01-17 11:54:00 UTC
Post by Stephen Frost
Personally I'm not likely to compile i386 debs on my amd64 system.
You are if you maintain a package that has not yet been ported.

Claus
--
http://www.faerber.muc.de
Paul Brook
2004-01-15 12:12:16 UTC
Post by Tollef Fog Heen
| Others could do with conflicting -dev packages. But the majority of
| people I talked to so far would like to have them installed in parallel
| instead of having to purge and reinstall them each time they compile
| for a different bit depth.
|
| Just consider compiling a benchmark that tests 32-bit and 64-bit
| support. That would be hell.
We aren't gentoo. Users aren't supposed to do that, but if they do,
they should use a chroot. Optimize for the common case.
I disagree.

I suspect a good number of Debian users are developers like myself. A
multiarch system is IMHO fairly useless if you can only use it to develop
software for the 'primary' subarch. This is especially true e.g. on mips,
where the preferred subarch depends on the application (n32 for speed vs.
n64 for address space).

Also, providing a multiarch gcc/libc is only of limited use if all the other
-dev packages only support a single arch. Reinstalling the -dev package to
compile for a different subarch really isn't practical.

Paul
C. Scott Ananian
2004-01-16 02:30:50 UTC
I think this discussion has veered way off track.
glibc is the bottom line. If you cannot install multiple subarchs of
glibc in parallel, then multiarch support is basically broken: you
*cannot* run i386 binaries on a system with an amd64 glibc (without a
chroot, which isn't a solution for users) and the backwards compatibility
of the platform is completely lost. Likewise, if you install an i386
glibc on a user's amd64 machine, they might as well have bought a
Pentium. --purge and --reinstall to switch subarchs is just not a valid
option for glibc, on which just about every Debian package depends.

Let's not talk as if this was a reasonable thing.

The whole point of having separate /lib, /lib64, /lib32 directories is so
that subarch packages *don't* conflict. It doesn't matter if the sonames
are the same or different, or if there are executable *-config files;
they live in separate directories. Debian has broken this to some
degree because every library package puts some files in /usr/share/doc,
and those files conflict.

I don't think *anything* which is not 'Architecture: all' should be
putting *anything* in a 'share' directory. The filesystem specification
explicitly states that this directory is for files which should be shared
between architectures. This was relevant for shared multi-arch NFS roots
long before these recent subarch discussions arose.
--scott [flame away!]

Ortega President Marxist for Dummies mail drop interception affinity group
Treasury Noriega operation Hussein supercomputer assassination direct action
( http://cscott.net/ )
Tollef Fog Heen
2004-01-16 11:01:27 UTC
* "C. Scott Ananian"

| I think this discussion has veered way off track. glibc is the
| bottom line. If you cannot install multiple subarchs of glibc in
| parallel, then multiarch support is basically broken: you *cannot*
| run i386 binaries on a system with an amd64 glibc (without a chroot,
| which isn't a solution for users) and the backwards compatibility of
| the platform is completely lost. Likewise, if you install an i386
| glibc on a user's amd64 machine, they might as well have bought a
| Pentium. --purge and --reinstall to switch subarchs is just not a
| valid option for glibc, on which just about every Debian package
| depends.

We are talking about development packages, not library packages.
--
Tollef Fog Heen ,''`.
UNIX is user friendly, it's just picky about who its friends are : :' :
`. `'
`-
Daniel Jacobowitz
2004-01-16 18:31:26 UTC
Post by C. Scott Ananian
I think this discussion has veered way off track.
glibc is the bottom line. If you cannot install multiple subarchs of
glibc in parallel, then multiarch support is basically broken: you
*cannot* run i386 binaries on a system with an amd64 glibc (without a
chroot, which isn't a solution for users) and the backwards compatibility
of the platform is completely lost. Likewise, if you install an i386
glibc on a user's amd64 machine, they might as well have bought a
Pentium. --purge and --reinstall to switch subarchs is just not a valid
option for glibc, on which just about every Debian package depends.
No, you've missed the point. Glibc already handles this issue. It's a
matter of "everything else"...
--
Daniel Jacobowitz
MontaVista Software Debian GNU/Linux Developer
Goswin von Brederlow
2004-01-16 07:50:13 UTC
Post by C. Scott Ananian
I think this discussion has veered way off track.
glibc is the bottom line. If you cannot install multiple subarchs of
glibc in parallel, then multiarch support is basically broken: you
*cannot* run i386 binaries on a system with an amd64 glibc (without a
chroot, which isn't a solution for users) and the backwards compatibility
of the platform is completely lost. Likewise, if you install an i386
glibc on a user's amd64 machine, they might as well have bought a
Pentium. --purge and --reinstall to switch subarchs is just not a valid
option for glibc, on which just about every Debian package depends.
Let's not talk as if this was a reasonable thing.
The argument was that you need the development files for both archs,
not just the library itself.

Regards,
Goswin
Tollef Fog Heen
2004-01-16 11:00:01 UTC
* Paul Brook

I have a Mail-Followup-To set. Please respect it, or at least respect
the policy on Debian lists not to Cc the one you are replying to.

| On Thursday 15 January 2004 11:48 am, Tollef Fog Heen wrote:
| > | Others could do with conflicting -dev packages. But the majority of
| > | people I talked to so far would like to have them installed in parallel
| > | instead of having to purge and reinstall them each time they compile
| > | for a different bit depth.
| > |
| > | Just consider compiling a benchmark that tests 32-bit and 64-bit
| > | support. That would be hell.
| >
| > We aren't gentoo. Users aren't supposed to do that, but if they do,
| > they should use a chroot. Optimize for the common case.
|
| I disagree.

What are you disagreeing with? That we aren't gentoo? That users
aren't supposed to do that? Or that we should optimize for the common
case?

| I suspect a good number of Debian users are developers like myself. A
| multiarch system is IMHO fairly useless if you can only use it to develop
| software for the 'primary' subarch. This is especially true e.g. on mips,
| where the preferred subarch depends on the application (n32 for speed vs.
| n64 for address space).

Why is running pdebuild32 (or whatever it'll be called) so much worse
than running debuild? Having co- (or tri-) installable -dev packages
will be very, very tricky and will require you to massively increase
the number of packages in the archive or break a number of assumptions
in a lot of places in the packaging system.

| Also, providing a multiarch gcc/libc is only of limited use if all the other
| -dev packages only support a single arch. Reinstalling the -dev package to
| compile for a different subarch really isn't practical.

That is why you compile in a chroot. AMD64 systems will have >= 256MB
RAM and many gigs of HDD space, so this shouldn't be a real problem.
--
Tollef Fog Heen ,''`.
UNIX is user friendly, it's just picky about who its friends are : :' :
`. `'
`-
Paul Brook
2004-01-16 12:43:55 UTC
Post by Tollef Fog Heen
* Paul Brook
I have a Mail-Followup-To set. Please respect it, or at least respect
the policy on Debian lists not to Cc the one you are replying to.
Sorry. My mail client (kmail) has obviously never heard of that header.

<snip>
Post by Tollef Fog Heen
| > We aren't gentoo. Users aren't supposed to do that, but if they do,
| > they should use a chroot. Optimize for the common case.
|
| I disagree.
What are you disagreeing with? That we aren't gentoo? That users
aren't supposed to do that? Or that we should optimize for the common
case?
I don't really see how gentoo is relevant. AFAIK gentoo isn't multiarch,
so any comparisons don't really seem valid.

My main disagreement was with "users aren't supposed to do that".

I'm also wary of "optimize for the common case", if it penalises a
significant, but maybe slightly less common case.
Post by Tollef Fog Heen
| I suspect a good number of Debian users are developers like myself. A
| multiarch system is IMHO fairly useless if you can only use it to develop
| software for the 'primary' subarch. This is especially true e.g. on mips,
| where the preferred subarch depends on the application (n32 for speed vs.
| n64 for address space).
Why is running pdebuild32 (or whatever it'll be called) so much worse
than running debuild? Having co- (or tri-) installable -dev packages
will be very, very tricky and will require you to massively increase
the number of packages in the archive or break a number of assumptions
in a lot of places in the packaging system.
I was making my point from the view of a general developer who uses a
Debian system, not a Debian package maintainer. I'd say building Debian
packages is a relatively uncommon case, and arguably should be done in a
clean chroot anyway.

Hopefully it should be possible to implement a proper multiarch system
without creating an excessive number of new packages. There have been a
number of suggestions on this list for how to do this. Admittedly there
are still unresolved issues with all of these, but no one said making a
proper multiarch system was going to be easy. :)
Post by Tollef Fog Heen
| Also, providing a multiarch gcc/libc is only of limited use if all the
| other -dev packages only support a single arch. Reinstalling the -dev
| package to compile for a different subarch really isn't practical.
That is why you compile in a chroot. AMD64 systems will have >= 256MB
RAM and many gigs of HDD space, so this shouldn't be a real problem.
It's not just the disk space requirements. With a chroot you effectively have
two separate (albeit closely tied) systems to maintain. As a developer this
negates many of the benefits of a multiarch system vs. separate machines.

Paul
Andy MacKay
2004-01-16 17:42:41 UTC
Permalink
Note: I'm not a developer, I'm just a long-time Debian user and
(non-Debian) software developer subscribed to this list -- if that means
you automatically ignore me, so be it.

There are two things being said here, and they aren't necessarily
contradictory. The first is the argument that being able to
simultaneously install -dev packages from multiple sub-architectures
will make things hugely complicated. The second is the argument that on
an AMD64 system, because the hardware happens to be able to run i386
binaries with no performance penalty, developers *may* want to be able
to build i386 binaries without resorting to a chroot.

Aside from their actual merit, let's say for the sake of argument that both
of these are mostly true.

An alternative no one has mentioned is creating a "pure" AMD64 Debian
arch and adding a suite of i386 cross-development packages to the
distribution. That would go some way towards solving the problem at
hand without making all the Debian developers' lives more difficult, and
give architectures besides AMD64 (IA64 is another notable one where it
might be interesting) the ability to build i386 packages without having
to resort to lots of library and compiler building and installation in
/usr/local/i386-cross (or something similar, which is what I'll probably
have to do if Debian doesn't offer a more convenient solution).

Anyway, I'll go back to lurking now -- in case no one's said it lately,
thanks for all your hard and often underappreciated work to make Debian
the pleasure it is to use and admin. Wish I had time to actually help
you all out more than just cheerleading and making suggestions from the
sidelines.

- Andy
--
Anderson MacKay <***@ghs.com>
Green Hills Software -- Hardware Target Connections
Paul Brook
2004-01-16 18:00:36 UTC
Permalink
Post by Andy MacKay
An alternative no one has mentioned is creating a "pure" AMD64 Debian
arch and adding a suite of i386 cross-development packages to the
distribution. That would go some way towards solving the problem at
hand without making all the Debian developers' lives more difficult, and
give architectures besides AMD64 (IA64 is another notable one where it
might be interesting) the ability to build i386 packages without having
to resort to lots of library and compiler building and installation in
/usr/local/i386-cross (or something similar, which is what I'll probably
have to do if Debian doesn't offer a more convenient solution).
The toolchain-source and dpkg-cross packages already do exactly what you are
suggesting.

Paul
Stephen Frost
2004-01-16 20:12:48 UTC
Permalink
Post by Andy MacKay
There are two things being said here, and they aren't necessarily
contradictory. The first is the argument that being able to
simultaneously install -dev packages from multiple sub-architectures
will make things hugely complicated. The second is the argument that on
an AMD64 system, because the hardware happens to be able to run i386
binaries with no performance penalty, developers *may* want to be able
to build i386 binaries without resorting to a chroot.
Unfortunately it's more complicated than that: there's also the issue
that people want to be able to run closed-source binaries built for i386
on amd64.

Stephen
Goswin von Brederlow
2004-01-16 18:09:46 UTC
Permalink
Post by Andy MacKay
Note: I'm not a developer, I'm just a long-time Debian user and
(non-Debian) software developer subscribed to this list -- if that means
you automatically ignore me, so be it.
There are two things being said here, and they aren't necessarily
contradictory. The first is the argument that being able to
simultaneously install -dev packages from multiple sub-architectures
will make things hugely complicated. The second is the argument that on
an AMD64 system, because the hardware happens to be able to run i386
binaries with no performance penalty, developers *may* want to be able
to build i386 binaries without resorting to a chroot.
That means you can just as well install a chroot.

The big goal is to have an amd64 system behave exactly like i386 if
the 32 bit environment is set.

linux32 $SHELL ---> You're on i386 for all compiling and packaging
intents and purposes.

That would be perfect.
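
For illustration, linux32 is a small wrapper that just flips the kernel
personality, so everything started from that shell identifies itself as
i386; a minimal sketch of the intended behaviour on an amd64 box:

  $ uname -m
  x86_64
  $ linux32 uname -m        # same kernel, 32-bit personality
  i686
  $ linux32 $SHELL          # builds started from here see an i386 box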
Post by Andy MacKay
Aside from their actual merit let's say that both of these are mostly
true, for the sake of argument.
An alternative no one has mentioned is creating a "pure" AMD64 Debian
arch and adding a suite of i386 cross-development packages to the
distribution. That would go some way towards solving the problem at
hand without making all the Debian developers' lives more difficult, and
give architectures besides AMD64 (IA64 is another notable one where it
might be interesting) the ability to build i386 packages without having
to resort to lots of library and compiler building and installation in
/usr/local/i386-cross (or something similar, which is what I'll probably
have to do if Debian doesn't offer a more convenient solution).
Cross development packages will never fully make "linux32 apt-get
build-dep foo; linux32 apt-get -b source foo" work as nicely.

And if apt-get build-dep and similar don't work, a chroot is just
better.
Post by Andy MacKay
Anyway, I'll go back to lurking now -- in case no one's said it lately,
thanks for all your hard and often underappreciated work to make Debian
the pleasure it is to use and admin. Wish I had time to actually help
you all out more than just cheerleading and making suggestions from the
sidelines.
MfG
Goswin
Theodore Ts'o
2004-01-17 17:04:03 UTC
Permalink
Post by Goswin von Brederlow
That means you can just as well install a chroot.
The big goal is to have an amd64 system behave exactly like i386 if
the 32 bit environment is set.
linux32 $SHELL ---> You're on i386 for all compiling and packaging
intents and purposes.
That would be perfect.
Folks should consider that with the 2.6 kernel, there are a few
additional tools we can bring to bear on the problem; specifically, we
can use "mount --bind" to allow part of the filesystem to be overmount
another part of the filesystem, and namespaces, so that different
programs get to see a different set of mounts.

This may allow for some additional creative solutions that people may
not have considered up until now....
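
A rough sketch of the kind of trick this enables (paths are hypothetical,
and the private mount namespace here uses util-linux's unshare(1) as a
stand-in for a CLONE_NEWNS wrapper):

  # run as root: give one build shell its own view of /usr/include and
  # /usr/lib, bind-mounted from a 32-bit tree, invisible to other processes
  unshare --mount sh -c '
      mount --bind /emul/ia32-linux/usr/include /usr/include
      mount --bind /emul/ia32-linux/usr/lib     /usr/lib
      ./configure && make'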

- Ted

Anthony DeRobertis
2004-01-15 16:20:43 UTC
Permalink
Post by Goswin von Brederlow
Post by Anthony DeRobertis
But you don't need to install both. You install the i386 one when
doing the 32-bit build, and the amd64 one when doing the 64-bit build.
Some packages (libc) need to be installed for both 32 bit and 64 bit
at least on the build (to compile gcc).
OK, so Build-Essential packages are an exception to this.
Post by Goswin von Brederlow
Others could do with conflicting -dev packages. But the majority of
people I talked to so far would like to have them installed in parallel
instead of having to purge and reinstall them each time they compile
for a different bit depth.
Remove, not purge, I'd hope.

That's why I say it is a nice "should".
Post by Goswin von Brederlow
Just consider compiling a benchmark that tests 32 bit and 64 bit
support. That would be hell.
Agreed.
Post by Goswin von Brederlow
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Why are some *-dev packages "Architecture: <arch>"?
---------------------------------------------------
1. static libraries
Most sources and people don't need the static libraries. The static
libraries could be split into -static packages and sources that do
need them can depend specifically on them.
I disagree strongly. Many people do need static libraries. Currently,
it is a "must" in policy for -dev packages with libraries to provide
them.
I'm not saying they should disappear. They should only be moved into
separate arch: <any> packages on which the -dev arch: all package can
depend (if the static lib is so important).
If there were actually a good reason to separate out the static libs,
you couldn't depend on them. The goal was to allow a libfoo-dev to be
installed that would work for both 32-bit and 64-bit builds.

If it is impossible to have static libraries for both 32-bit and 64-bit
in libfoo-dev itself, it is no more possible for libfoo-dev to depend
on them.

Then again, why are .a files a problem? Don't the i386 ones go in
.../lib, and the amd64 ones in .../lib64?
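
On a biarch toolchain the compiler driver picks the matching search
path, so a sketch like the following would work, assuming a libfoo that
ships both flavours:

  $ gcc -m64 prog.c -lfoo    # linker searches /usr/lib64 for libfoo
  $ gcc -m32 prog.c -lfoo    # linker searches /usr/lib for libfoo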
Post by Goswin von Brederlow
For multiarch -dev packages to work there has to be a split one way or
another:
1. make -dev arch: all and move any architecture-dependent files into
another package.
2. move any architecture-independent files into a -common package.
Point 2 means changing a lot of existing Build-Depends lines. It doesn't
really change the amount of new packages or the work to get header
files consistent for multiarch.
2 doesn't involve changing Build-Depends lines, AFAICT. If both
libfoo-dev:amd64 and libfoo-dev:i386 depend on libfoo-dev-common:all
then you can still install them both at once.
Post by Goswin von Brederlow
Worst case you ship different directories or the package gets excluded
from multiarch -dev support. It's probably acceptable for
non-build-essential packages to conflict in such cases but it shouldn't be
the rule.
Then there is a good reason against a "must" in policy.
Post by Goswin von Brederlow
Post by Anthony DeRobertis
-L/usr/lib -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodule -lglib
-ldl -lXi -lXext -lX11 -lm
-I/usr/include/gtk-1.2 -I/usr/include/glib-1.2 -I/usr/lib/glib/include
That has to be patched for multiarch to check the target CPU for which
arch it is actually going to compile. Depending on that, the output of
the script has to change to use /lib/ or /lib64/.
A little harder to do for the pkg-config files, as someone else pointed
out... But I suppose those will be split by /lib vs. /lib64.
Post by Goswin von Brederlow
As it is now, for non-multiarch, where is there a bit of
architecture-dependent information in there?
None in the GTK one, AFAICT. cflags could be, though.
Post by Goswin von Brederlow
The existing sources depend on the one libx-dev package, which in turn
pulls in the required set of libx-helper packages relevant for the
arch:
Depends: libx-helper [i386, m68k, alpha, ia64, mips, sparc, powerpc,
s390x], lib64x-helper [amd64, mips64, sparc64, s390x, ppc64]
On non-biarch systems you get libx-helper, on multiarch systems you
get libx-helper and lib64x-helper and on pure amd64, mips64, sparc64,
s390x, ppc64 systems you only get lib64x-helper.
What does that gain? Surely, most every package is going to be built
for i386 and amd64 separately? And you're proposing changing,
apparently, ~1000 packages.

How about instead we deal with the architecture-independent parts by
splitting that off if we absolutely have to (I have an idea how we
could avoid this...)? Then we interpret:
Build-Depends: foo

as

Build-Depends: foo:$ARCH

where $ARCH is the architecture we're building for. Packages that build
for both architectures at the same time (glibc, what else) could:

Build-Depends: foo:amd64 [amd64], foo:i386 [amd64], foo [!amd64]

This has the advantage of working properly even if foo:amd64 and
foo:i386 conflict with each other. We can finish the port and then work
on optimizing things by removing lib conflicts.
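
To make the proposed syntax concrete, a hypothetical debian/control
fragment under this scheme might read (the :arch qualifier is the
proposal above, not something dpkg understands today; names are made up):

  Source: bar
  Build-Depends: libfoo-dev:amd64 [amd64], libfoo-dev:i386 [amd64],
   libfoo-dev [!amd64], debhelper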
Post by Goswin von Brederlow
Post by Anthony DeRobertis
Actually, this is a very large problem: How is an
architecture-independent package supposed to provide a symlink from
lib.so to lib.so.SONAME when the location of lib.so and lib.so.SONAME
vary between architectures? (/lib, /lib64, another for MIPS, etc.)
Postinst script. Depending on the subarchs of the host, one, two or
three links are set.
Then the .so files don't get into dpkg's file database, which certainly
isn't a benefit.
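
For concreteness, the postinst being debated might look like this
sketch (package name and SONAME are made up):

  #!/bin/sh
  # hypothetical postinst for an Architecture: all libfoo-dev:
  # create the lib.so -> lib.so.SONAME link in every lib dir present
  set -e
  for libdir in /usr/lib /usr/lib64; do
      if [ -e "$libdir/libfoo.so.1" ]; then
          ln -sf libfoo.so.1 "$libdir/libfoo.so"
      fi
  done

which also illustrates Anthony's objection: dpkg never sees those links.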
Post by Goswin von Brederlow
Post by Anthony DeRobertis
-dev packages should include documentation. That would be yet another
package, -doc.
You have arch dependent docs?
ummm... err... oops... good point.
Post by Goswin von Brederlow
Post by Anthony DeRobertis
egrep 'Package: .*-dev$' /var/lib/dpkg/available | wc -l
1118
1118 new packages. A reasonable guess would, I think[1], be around two
new packages per dev package, so ~2200 new packages.
One is enough to make it work. And one (libfoo-common) is what would
be required for libfoo-dev and lib64foo-dev to work too. Nothing
gained, nothing lost. The lib64foo-dev way is actually more package
names to deal with for dselect/aptitude.
OK, so we're only talking 1100 new packages. Certainly better than
2200...
Post by Goswin von Brederlow
Post by Anthony DeRobertis
Just what dselect and aptitude need. 20% more packages to make it even
harder for users to look through the lists.
There also seem to be ~1300 packages that have files in some /lib/
dir. All those need to be changed to /lib64/ for amd64. That doesn't
create 1300 new packages for amd64 but renames them.
FYI, I think renaming the packages is not a good idea.
Goswin von Brederlow
2004-01-16 08:12:19 UTC
Permalink
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Post by Anthony DeRobertis
Post by Goswin von Brederlow
1. static libraries
Most sources and people don't need the static libraries. The static
libraries could be split into -static packages and sources that do
need them can depend specifically on them.
I disagree strongly. Many people do need static libraries. Currently,
it is a "must" in policy for -dev packages with libraries to provide
them.
I'm not saying they should disappear. They should only be moved into
separate arch: <any> packages on which the -dev arch: all package can
depend (if the static lib is so important).
If there were actually a good reason to separate out the static libs,
you couldn't depend on them. The goal was to allow a libfoo-dev to be
installed that would work for both 32-bit and 64-bit builds.
If it is impossible to have static libraries for both 32-bit and
64-bit in libfoo-dev itself, it is no more possible for libfoo-dev to
depend on them.
Then again, why are .a files a problem? Don't the i386 ones go in
.../lib, and the amd64 ones in .../lib64?
They are no problem due to /lib and /lib64. The problem is just that you
need to specifically depend on both of them. Limiting the number of
packages needing this change is the point. It's not like Debian is
building static KDE applications. :)
Post by Anthony DeRobertis
Post by Goswin von Brederlow
For multiarch -dev packages to work there has to be a split one way or
another:
1. make -dev arch: all and move any architecture-dependent files into
another package.
2. move any architecture-independent files into a -common package.
Point 2 means changing a lot of existing Build-Depends lines. It doesn't
really change the amount of new packages or the work to get header
files consistent for multiarch.
2 doesn't involve changing Build-Depends lines, AFAICT. If both
libfoo-dev:amd64 and libfoo-dev:i386 depend on libfoo-dev-common:all
then you can still install them both at once.
Only if the first idea of allowing multiple packages with the same
name is accepted, which had objections.

Otherwise it's libfoo-dev:i386 and lib64foo-dev:amd64, which means
changing the Build-Depends.

The idea for the Arch: all -dev packages is that since we have to
touch, patch and split the -dev packages anyway, let's split out the
binary stuff and make the -dev package arch: all.

I think it's the same amount of work to split it either way, but arch:
all -dev saves changing the Build-Depends.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Worst case you ship different directories or the package gets excluded
from multiarch -dev support. Its probably acceptable for non
build-essential packages to conflict in such cases but it shouldn't be
the rule.
Then there is a good reason against a "must" in policy.
Yes, for non build-essential packages. Certainly a must for everything
is out of the question for sarge+1.

I think that header files that are generated for each arch at compile
time are a pretty broken design. Just look at linux-kernel-headers
bugs for some problems.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Post by Anthony DeRobertis
-L/usr/lib -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodule -lglib
-ldl -lXi -lXext -lX11 -lm
-I/usr/include/gtk-1.2 -I/usr/include/glib-1.2 -I/usr/lib/glib/include
That has to be patched for multiarch to check the target CPU for which
arch it is actually going to compile. Depending on that, the output of
the script has to change to use /lib/ or /lib64/.
A little harder to do for the pkg-config files, as someone else
pointed out... But I suppose those will be split by /lib vs. /lib64.
pkg-config has to look at /usr/lib or /usr/lib64 depending on the
target host. If pkg-config itself is not adaptable, a little wrapper
script that calls /usr/lib/pkg-config/pkg-config or
/usr/lib64/pkg-config/pkg-config depending on the target is simple
enough.
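
A sketch of that wrapper, assuming the real binaries were moved into
the per-ABI directories named above (the PKG_CONFIG_TARGET variable is
invented for illustration):

  #!/bin/sh
  # dispatch to the pkg-config matching the requested target ABI
  case "${PKG_CONFIG_TARGET:-$(dpkg --print-architecture)}" in
      amd64|*64) exec /usr/lib64/pkg-config/pkg-config "$@" ;;
      *)         exec /usr/lib/pkg-config/pkg-config "$@" ;;
  esac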
Post by Anthony DeRobertis
Post by Goswin von Brederlow
As it is now, for non-multiarch, where is there a bit of
architecture-dependent information in there?
None in the GTK one, AFAICT. cflags could be, though.
CFLAGS are stored in the dpkg subarch table and should be used from
there. The subarch table includes special CPU flags for different
subarchs that pkg-config is not supposed to know or care about either.

But that's a different story.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
The existing sources depend on the one libx-dev package, which in turn
pulls in the required set of libx-helper packages relevant for the
arch:
Depends: libx-helper [i386, m68k, alpha, ia64, mips, sparc, powerpc,
s390x], lib64x-helper [amd64, mips64, sparc64, s390x, ppc64]
On non-biarch systems you get libx-helper, on multiarch systems you
get libx-helper and lib64x-helper and on pure amd64, mips64, sparc64,
s390x, ppc64 systems you only get lib64x-helper.
What does that gain? Surely, most every package is going to be built
for i386 and amd64 separately? And you're proposing changing,
apparently, ~1000 packages.
Every library package has to be changed for 64-bit multiarch
anyway. The only packages whose need for change depends on the
design are the binaries.
Post by Anthony DeRobertis
How about instead we deal with the architecture-independent parts by
splitting that off if we absolutely have to (I have an idea how we
could avoid this...)? Then we interpret:
Build-Depends: foo
as
Build-Depends: foo:$ARCH
where $ARCH is the architecture we're building for. Packages that build
for both architectures at the same time (glibc, what else) could:
Build-Depends: foo:amd64 [amd64], foo:i386 [amd64], foo [!amd64]
This has the advantage of working properly even if foo:amd64 and
foo:i386 conflict with each other. We can finish the port and then
work on optimizing things by removing lib conflicts.
That means I need a make:amd64 before anything can be built. But a
make:i386 is perfectly fine. OK, no problem with make. But there are a
lot of tools needed to compile packages, and anything that's a binary
can be used in its 32-bit version.

Requiring a 64-bit toolchain might be OK for amd64, but mips, sparc,
ppc64 and s390x certainly don't need a complete 64-bit toolchain, which
would be bigger and slower.
Post by Anthony DeRobertis
Post by Goswin von Brederlow
Post by Anthony DeRobertis
Actually, this is a very large problem: How is an
architecture-independent package supposed to provide a symlink from
lib.so to lib.so.SONAME when the location of lib.so and lib.so.SONAME
vary between architectures? (/lib, /lib64, another for MIPS, etc.)
Postinst script. Depending on the subarchs of the host, one, two or
three links are set.
Then the .so files don't get into dpkg's file database, which certainly
isn't a benefit.
Post by Goswin von Brederlow
Post by Anthony DeRobertis
-dev packages should include documentation. That would be yet another
package, -doc.
You have arch dependent docs?
ummm... err... oops... good point.
Post by Goswin von Brederlow
Post by Anthony DeRobertis
egrep 'Package: .*-dev$' /var/lib/dpkg/available | wc -l
1118
1118 new packages. A reasonable guess would, I think[1], be around two
new packages per dev package, so ~2200 new packages.
One is enough to make it work. And one (libfoo-common) is what would
be required for libfoo-dev and lib64foo-dev to work too. Nothing
gained, nothing lost. The lib64foo-dev way is actually more package
names to deal with for dselect/aptitude.
OK, so we're only talking 1100 new packages. Certainly better than
2200...
Post by Goswin von Brederlow
Post by Anthony DeRobertis
Just what dselect and aptitude need. 20% more packages to make it even
harder for users to look through the lists.
There also seem to be ~1300 packages that have files in some /lib/
dir. All those need to be changed to /lib64/ for amd64. That doesn't
create 1300 new packages for amd64 but renames them.
FYI, I think renaming the packages is not a good idea.
So do I but others seem not to agree. The "one package one name"
'rule' stands in the way of that.

MfG
Goswin
Goswin von Brederlow
2004-01-15 14:17:03 UTC
Permalink
... allowing "Abi: ..." and having
multiple packages with the same name installed apparently isn't liked
at all.
It's not likely for sarge, but if you get it working, I'd doubt that it'd be barred
forever.
Never was meant for sarge. No way.
Post by Thomas Viehmann
Why can't Provides/Conflicts be used to do this?
Your point would get stronger if you'd discuss why these couldn't be used
to solve the problem.
Package: libfoo-dev
Version: 1.2.3
Package: lib64foo-dev
Version: 1.2.3
Provides: libfoo-dev, libfoo-dev=1.2.3
Source: bla
Build-Depends: libfoo (>= 1.2.3)
Package: libfooX-dev (arch i386)
Provides: libfoo-dev
Conflicts: libfoo-dev
That makes versioned depends complicated and everything would have to
be converted to actually use such a Provides all the time. Too many
debs don't. But it's a possibility.
Package: libfooX-dev (arch amd64)
Provides: libfoo-dev
Conflicts: libfoo-dev
apt-get install libfoo-dev
(some months pass)
apt-get build-dep bla
apt-get -b source bla
So now some libfoo-dev package will be installed. I don't see how your binary-all
-dev plus arch dependencies is any better than binary-arch -dev packages.
[...]
The -dev package would install _both_ architectures' arch-dependent
packages for multiarch and just one for non-multiarch. That way
multiarch has some bloat (just on the user's harddisk) but will always
have the right version.

And people can still do non-multiarch amd64.
And Conflicts means that you can't have a user wanting 32-bit programs
and one wanting 64-bit programs on the same system.
Huh? Conflicts for -dev files doesn't mean anything for a user
wanting any programs. I can see that there is the problem of i386
-dev packages being installed but I cannot see how this is different
from installing all -dev packages with i386 arch dependencies where
you'd need the 64bit versions.
Wanting to compile, sorry.
You stated and claimed to solve the problem "multiple versions of
the same -dev packages should not be (attempted to be) installed"
and suggested a solution. I was merely pointing out that the common
"fooSOVER-dev provides/conflicts foo-dev" practice solves the very
same problem just as well. I'm not saying that there isn't any
problem, but that your suggestion doesn't solve any that cannot be
dealt with otherwise.
The "fooSOVER-dev provides/conflicts foo-dev" doesn't help getting the
right -dev package for the target arch the user chooses to be
available or even prevent i386 -dev packages from falsely providing
stuff for amd64 compiles (if on 64bit developement is supported for amd64).

But if there already is a fooSOVER-dev, a foo-dev package depending on
the right set of fooSOVER-dev packages instead of provides/conflicts
would do the trick.

MfG
Goswin
Thomas Viehmann
2004-01-15 16:51:15 UTC
Permalink
Post by Goswin von Brederlow
... allowing "Abi: ..." and having
multiple packages with the same name installed apparently isn't liked
at all.
It's not likely for sarge, but if you get it working, I'd doubt that it'd be barred
forever.
Never was meant for sarge. No way.
Oh. Sorry. I've just seen some todo list where one item was "this little
thing for sarge...".
Post by Goswin von Brederlow
The -dev package would install _both_ architectures' arch-dependent
packages for multiarch and just one for non-multiarch. That way
multiarch has some bloat (just on the user's harddisk) but will always
have the right version.
And people can still do non-multiarch amd64.
Ah. Now I'm getting closer to seeing how this should work. I just don't
know how you'd get packages to require both archs' arch-dependent stuff
without a lot of exceptions.
Post by Goswin von Brederlow
And Conflicts means that you can't have a user wanting 32-bit programs
and one wanting 64-bit programs on the same system.
Wanting to compile, sorry.
Ah. I'm not quite sure why this should be a feature common enough not
to be solved by 32-bit chroots, but if you say that this is
required, I'm almost convinced.
Post by Goswin von Brederlow
The "fooSOVER-dev provides/conflicts foo-dev" doesn't help getting the
right -dev package for the target arch the user chooses to be
available or even prevent i386 -dev packages from falsely providing
stuff for amd64 compiles (if on 64bit developement is supported for amd64).
Ah. Sorry. I mistakenly understood that the -dev packages should always
be 64-bit...

I'd still think that a better approach would eventually be accepted, but
thanks for taking the time to explain the issue.

Cheers

T.
--
Thomas Viehmann, <http://beamnet.de/tv/>
Goswin von Brederlow
2004-01-16 11:40:18 UTC
Permalink
Post by Thomas Viehmann
Post by Goswin von Brederlow
... allowing "Abi: ..." and having
multiple packages with the same name installed apparently isn't liked
at all.
It's not likely for sarge, but if you get it working, I'd doubt that it'd be barred
forever.
Never was meant for sarge. No way.
Oh. Sorry. I've just seen some todo list where one item was "this little
thing for sarge...".
Yes, keeping the "Architecture: ..." field in the status file of
dpkg. That's removing 2 lines in one file and changing a third line in
another. The patch is in the BTS now. It's just removing the lines that
currently delete the field.
Post by Thomas Viehmann
Post by Goswin von Brederlow
The -dev package would install _both_ architectures' arch-dependent
packages for multiarch and just one for non-multiarch. That way
multiarch has some bloat (just on the user's harddisk) but will always
have the right version.
And people can still do non-multiarch amd64.
Ah. Now I'm getting closer to seeing how this should work. I just don't
know how you'd get packages to require both archs' arch-dependent stuff
without a lot of exceptions.
Post by Goswin von Brederlow
And Conflicts means that you can't have a user wanting 32-bit programs
and one wanting 64-bit programs on the same system.
Wanting to compile, sorry.
Ah. I'm not quite sure why this should be a feature common enough not
to be solved by 32-bit chroots, but if you say that this is
required, I'm almost convinced.
A chroot means you need two complete toolchains that you have to store
and maintain. Two make, two automake, two autoconf, two libtool, ...

And you need a whole bunch of setuid programs that put people into the
right chroot depending on the arguments.

./configure TARGET=i386 -> run configure in the chroot
make -> run in chroot since configure did
./configure HOST=i386 TARGET=amd64 -> build in the chroot but for
outside the chroot

It just gets real messy.
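
For a sense of scale, the smallest possible such helper might look like
this (entirely hypothetical; a real one would need setuid care, a shared
/home, matching uids inside and outside, and so on):

  #!/bin/sh
  # ia32-run: run one build command inside the 32-bit chroot,
  # assuming the chroot mirrors the current directory's path
  exec chroot /var/chroot/ia32 sh -c "cd \"$PWD\" && \"\$@\"" sh "$@"

Usage would be along the lines of "ia32-run ./configure && ia32-run
make"; chroot itself needs root, hence the setuid programs above.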
Post by Thomas Viehmann
Post by Goswin von Brederlow
The "fooSOVER-dev provides/conflicts foo-dev" doesn't help getting the
right -dev package for the target arch the user chooses to be
available or even prevent i386 -dev packages from falsely providing
stuff for amd64 compiles (if on 64bit developement is supported for amd64).
Ah. Sorry. I mistakenly understood that the -dev packages should always
be 64-bit...
I'd still think that a better approach would eventually be accepted, but
thanks for taking the time to explain the issue.
Cheers
T.
If someone comes up with one. So far it's just going around in circles,
raising more and more problems with each solution.

The best one is still the <pkg>:<arch> changes to dpkg.

MfG
Goswin
Chris Cheney
2004-01-15 07:45:15 UTC
Permalink
Post by Goswin von Brederlow
Hi,
You all have seen the other thread about multiarch? This one is a
different part of the puzzle.
The proposal is to make all *-dev packages "Architecture: all". This
should be a strong should or must directive and a must for
build-essential for sarge+1.
Among all the other reasons this is a bad idea, there is one that makes
it very annoying.

libfoo1
Arch: any

libfoo1-dev
Arch: all
Depends: libfoo1 (${Source-Version})

A new version of libfoo1 is uploaded the archive, m68k (eg) hasn't built
it yet but needs to build something that depends on it.

m68k
----
libfoo1 1.0-1

libfoo1-dev 1.0-2
Depends: libfoo1 (= 1.0-2)

Therefore, libfoo1-dev isn't installable until the new libfoo1 has been
built on that arch. Right now this problem exists in the archive for the
dev packages
that essentially violate policy by being Arch: all. With all the dev
packages being Arch: all it will be much more of an issue.

A practical example is libqt3-mt-dev, since it depends on a separately
split-out headers package which is Arch: all. Since it took over a week
for it to build on m68k, all the packages uploaded since then that depend
on it have failed and had to be manually set to Dep-Wait.

So unless you come up with a way to have the old -dev packages exist on
the arches that still need them, it's a very bad idea for them to be
Arch: all.

BTW - kde-config is used at runtime and is a C++ program, not a script.

Chris
Goswin von Brederlow
2004-01-16 11:30:47 UTC
Permalink
Post by Chris Cheney
Post by Goswin von Brederlow
Hi,
You all have seen the other thread about multiarch? This one is a
different part of the puzzle.
The proposal is to make all *-dev packages "Architecture: all". This
should be a strong should or must directive and a must for
build-essential for sarge+1.
Among all the other reasons this is a bad idea, there is one that makes
it very annoying.
libfoo1
Arch: any
libfoo1-dev
Arch: all
Depends: libfoo1 (${Source-Version})
A new version of libfoo1 is uploaded to the archive; m68k (e.g.) hasn't
built it yet but needs to build something that depends on it.
This is a very strong reason. I always thought that Arch: all packages
should be handled separately per arch and checked for dependency
problems, very similar to the testing scripts. But that's a major
restructuring of the archive scripts.

One strong point for the "against" side. I think that tips the scale
on this one.

MfG
Goswin
Thomas Viehmann
2004-01-15 13:25:18 UTC
Permalink
... allowing "Abi: ..." and having
multiple packages with the same name installed apparently isn't liked
at all.
It's not likely for sarge, but if you get it working, I'd doubt that it'd be barred
forever.
Post by Thomas Viehmann
Why can't Provides/Conflicts be used to do this?
Your point would get stronger if you'd discuss why these couldn't be used
to solve the problem.
Package: libfoo-dev
Version: 1.2.3
Package: lib64foo-dev
Version: 1.2.3
Provides: libfoo-dev, libfoo-dev=1.2.3
Source: bla
Build-Depends: libfoo (>= 1.2.3)
Common practice (see libpkg-guide):
Package: libfooX-dev (arch i386)
Provides: libfoo-dev
Conflicts: libfoo-dev

Package: libfooX-dev (arch amd64)
Provides: libfoo-dev
Conflicts: libfoo-dev
apt-get install libfoo-dev
(some months pass)
apt-get build-dep bla
apt-get -b source bla
So now some libfoo-dev package will be installed. I don't see how your binary-all
-dev plus arch dependencies is any better than binary-arch -dev packages.
[...]
And Conflicts means that you can't have a user wanting 32-bit programs
and one wanting 64-bit programs on the same system.
Huh? Conflicts for -dev files doesn't mean anything for a user wanting any programs.
I can see that there is the problem of i386 -dev packages being installed but I
cannot see how this is different from installing all -dev packages with i386 arch
dependencies where you'd need the 64bit versions.

You stated and claimed to solve the problem "multiple versions of the same -dev
packages should not be (attempted to be) installed" and suggested a solution. I was
merely pointing out that the common "fooSOVER-dev provides/conflicts foo-dev"
practice solves the very same problem just as well.
I'm not saying that there isn't any problem, but that your suggestion doesn't solve
any that cannot be dealt with otherwise.

Cheers

T.

