Discussion:
Why not prelink?
Manchardley de lWmprad
2005-03-11 20:35:13 UTC
Hello, all

Q: Given the advantages of prelinking - faster startup and sometimes
better memory usage - why don't all distros do it by default? SUSE and
Ubuntu do not, in my recent experience.

I can only assume that there must be a downside to prelinking in some
circumstances. I just cannot figure out what they might be!

Any pointers to contraindications of prelinking would be welcome.

Regards
M
S.Chang
2005-03-11 23:06:16 UTC
Post by Manchardley de lWmprad
Hello, all
Q: Given the advantages of prelinking - faster startup and sometimes
better memory usage - why don't all distros do it by default? SUSE and
Ubuntu do not, in my recent experience.
I can only assume that there must be a downside to prelinking in some
circumstances. I just cannot figure out what they might be!
Any pointers to contraindications of prelinking would be welcome.
Regards
M
Broken dependencies: every time you upgrade or update the prelinked
libraries, you have to run prelink again.

Prelink only helps the load time of the application; it does not affect
the actual runtime speed. Personally, I can live without prelink on a
GHz system with plenty of RAM.

S.Chang
Nix
2005-03-14 17:22:10 UTC
Post by Manchardley de lWmprad
Hello, all
Q: Given the advantages of prelinking - faster startup and sometimes
better memory usage - why don't all distros do it by default? SUSE and
Ubuntu do not, in my recent experience.
I can only assume that there must be a downside to prelinking in some
circumstances. I just cannot figure out what they might be!
Any pointers to contraindications of prelinking would be welcome.
Broken dependencies: every time you upgrade or update the prelinked libraries, you have to run prelink again.
Prelink only helps the load time of the application
... and the amount of pages dirtied (== swap used, memory used per
process).
the actual runtime speed. Personally, I can live without prelink on a
GHz system with plenty of RAM.
You don't start OpenOffice or KDE programs much, do you :)
--
...Hires Root Beer...
What we need these days is a stable, fast, anti-aliased root beer
with dynamic shading. Not that you can let just anybody have root.
--- John M. Ford
S.Chang
2005-03-14 21:07:26 UTC
Post by Nix
Post by Manchardley de lWmprad
Hello, all
Q: Given the advantages of prelinking - faster startup and sometimes
better memory usage - why don't all distros do it by default? SUSE and
Ubuntu do not, in my recent experience.
I can only assume that there must be a downside to prelinking in some
circumstances. I just cannot figure out what they might be!
Any pointers to contraindications of prelinking would be welcome.
Broken dependencies: every time you upgrade or update the prelinked libraries, you have to run prelink again.
Prelink only helps the load time of the application
... and the amount of pages dirtied (== swap used, memory used per
process).
malloc is more effective; for 100 pounds you can get an extra 1GB of
RAM, which beats the hell out of using swap.
Post by Nix
the actual runtime speed. Personally, I can live without prelink on a
GHz system with plenty of RAM.
You don't start OpenOffice or KDE programs much, do you :)
Not really, vi or vim for small tasks, emacs to write my papers with
LaTeX, I heard with OO you can write TeX in WYSIWYG form, may try it out
later on.

TWM is my default window manager on Linux, CDE and 4DWM on Unix, I run X
over ssh a lot, so GNOME or KDE are not really for me;
and whatever Apple put in OSX as window manager for my portable needs.

S.Chang
Nix
2005-03-18 11:37:03 UTC
Post by S.Chang
Post by Nix
Post by S.Chang
Prelink only helps the load time of the application
... and the amount of pages dirtied (== swap used, memory used per
process).
malloc is more effective, for 100 pounds you can get extra 1GB of RAM,
beats the hell out of using swap.
Er, adding extra RAM will *not* speed up ld-linux.so.2 unless you're
desperately short of memory, and avoiding dirtying PLT pages will reduce
icache usage (and the icache is vastly smaller than RAM).

On modern CPUs, size *is* speed.


(Oh, and adding more RAM can slow you down: it increases the amount of
work the kernel has to do to maintain the page tables... 4% of
non-swappable RAM goes on those, don't forget)
Post by S.Chang
Post by Nix
Post by S.Chang
the actual runtime speed. Personally, I can live without prelink on a
GHz system with plenty of RAM.
You don't start OpenOffice or KDE programs much, do you :)
Not really, vi or vim for small tasks, emacs to write my papers with
LaTeX,
vi is small enough that it has a fairly low number of relocations: Emacs
is pretty much a single huge binary, so the number of relocations is
also quite low, and it's also started rarely.

Where prelink makes a difference is for things with large shared
libraries (especially large C++ shared libraries, because those have a
*vast* number of exported symbols, and the symbol names are very long
too so prelink has to do a lot more work comparing them).

Try running your favourite program with LD_DEBUG=statistics set in the
environment: watch the difference when prelinked and when not prelinked.

For some apps (e.g. KDE, OpenOffice) you can see *ten second
differences* in app startup time, and Mb of saved RAM per process.
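As a concrete sketch (any dynamically linked binary works; /bin/ls here
is just a convenient stand-in):

```shell
# Ask the glibc dynamic loader to print its own timing statistics.
# The report goes to stderr; the program's normal output is discarded.
LD_DEBUG=statistics /bin/ls > /dev/null
```

Compare the `number of relocations' and `time needed for relocation'
lines for the same binary before and after running prelink on it.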
Post by S.Chang
I heard with OO you can write TeX in WYSIWYG form, may try it
out later on.
<http://preview-latex.sf.net/> lets you write WYSIWYG stuff with Emacs
too :)
Post by S.Chang
TWM is my default window manager on Linux, CDE and 4DWM on Unix, I run
TWM? Ick bleah. I've never understood how anyone could survive without
virtual desktops.
Post by S.Chang
X over ssh a lot, so GNOME or KDE are not really for me; and whatever
So do I, but I use KDE without difficulty. (If my wm and kicker were on
a remote machine, perhaps I'd disagree, but I tend to have a desktop box
with enough RAM to run a wm, at least!)
--
...Hires Root Beer...
What we need these days is a stable, fast, anti-aliased root beer
with dynamic shading. Not that you can let just anybody have root.
--- John M. Ford
Tony Houghton
2005-03-14 19:28:42 UTC
Post by S.Chang
Post by Manchardley de lWmprad
Any pointers to contraindications of prelinking would be welcome.
Broken dependencies: every time you upgrade or update the prelinked
libraries, you have to run prelink again.
A package manager should be able to handle that, but it would probably
need to be designed with prelinking in mind from the ground up. And it
wouldn't help with libraries you've compiled yourself.

Couldn't the linking system deal with the problem itself? Eg each
library could have a hash code or something, which is also added to the
prelinked application. If the linker sees that the codes don't match, it
could fall back to dynamic linking, or redo the prelinking if it has
write permission on the binary.

I also really like the idea of having all binaries stripped for
leanness, but saving the debugging information in a separate file so
they can still be debugged.
--
The address in the Reply-To is genuine and should not be edited.
See <http://www.realh.co.uk/contact.html> for more reliable contact addresses.
S.Chang
2005-03-14 21:45:06 UTC
Post by Tony Houghton
Post by S.Chang
Post by Manchardley de lWmprad
Any pointers to contraindications of prelinking would be welcome.
Broken dependencies: every time you upgrade or update the prelinked
libraries, you have to run prelink again.
A package manager should be able to handle that, but it would probably
need to be designed with prelinking in mind from the ground up. And it
wouldn't help with libraries you've compiled yourself.
Something like MONDO can help you maintain dynamic links; something
similar should be available for prelink.
Post by Tony Houghton
Couldn't the linking system deal with the problem itself? Eg each
library could have a hash code or something, which is also added to the
prelinked application. If the linker sees that the codes don't match, it
could fall back to dynamic linking, or redo the prelinking if it has
write permission on the binary.
If you don't run prelink again after you upgrade the dependent
libraries, you lose the performance boost, just the same as going back
to dynamic linking.
I agree that on a slower system with limited RAM, prelink can be
extremely helpful, and it's available for most distros, just not used by
default.
I think it will eventually be merged with apt/emerge/yum using their
dependency tree to maintain the prelinked libraries.
Post by Tony Houghton
I also really like the idea of having all binaries stripped for
leanness, but saving the debugging information in a separate file so
they can still be debugged.
Nix
2005-03-18 12:02:20 UTC
Post by S.Chang
Post by Tony Houghton
Post by Manchardley de lWmprad
Any pointers to contraindications of prelinking would be welcome.
Broken dependencies: every time you upgrade or update the prelinked libraries, you have to run prelink again.
A package manager should be able to handle that, but it would probably
need to be designed with prelinking in mind from the ground up. And it
wouldn't help with libraries you've compiled yourself.
Something like MONDO can help you maintain dynamic links; something
similar should be available for prelink.
Mondo? I don't really see how a rescue CD relates to prelink.

(You can run prelink on a running system, you know: the only downside is
increased memory usage and disk consumption until you reboot, because
the prelinked binaries are of course put in new inodes and the old ones
unlinked.)
Post by S.Chang
I agree that on a slower system with limited RAM, prelink can be
extremely helpful, and it's available for most distros, just not used by
default.
It doesn't work on all architectures, either --- but as it has no effect
bar performance this is hardly a killer.
Post by S.Chang
I think it will eventually be merged with apt/emerge/yum using their
dependency tree to maintain the prelinked libraries.
That would be nonportable and dangerous. The dependency tree that
matters here is the dependencies as seen by ld-linux.so.2/ldd(1);
knowing that package A depends on package B would be useless to prelink,
as prelink and ld-linux.so.2 need to know exactly which libraries depend
on which libraries, and further their timestamps and checksums. The
package manager can't really help here.

(Possibly it could reduce the amount of disk searching prelink needs to
do, but you can do that already by just writing out an /etc/prelink.conf
with suitable -b options; just blacklist everything other than the
conventional binary directories and subdirectories thereof, and prelink
will speed up quite substantially. It still won't be *fast*, mind you,
but then prelink's whole job is to do the expensive stuff only once
rather than taking the hit whenever you start a binary.)
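For instance, a restrictive /etc/prelink.conf might look like this (a
sketch only: the directory list is illustrative, but the syntax -- one
directory per line, `-b' to blacklist, `-l' to process a directory
without descending into subdirectories -- is prelink's own):

```
# /etc/prelink.conf (illustrative)
-b /usr/local
-b /opt
-b /home
/bin
/sbin
/usr/bin
/usr/sbin
/lib
/usr/lib
```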
--
...Hires Root Beer...
What we need these days is a stable, fast, anti-aliased root beer
with dynamic shading. Not that you can let just anybody have root.
--- John M. Ford
Nix
2005-03-18 11:57:18 UTC
Post by Tony Houghton
Post by S.Chang
Post by Manchardley de lWmprad
Any pointers to contraindications of prelinking would be welcome.
Broken dependencies: every time you upgrade or update the prelinked
libraries, you have to run prelink again.
A package manager should be able to handle that, but it would probably
need to be designed with prelinking in mind from the ground up.
It should have knowledge of prelink's --verify, --md5 or --sha options
in any case :)
Post by Tony Houghton
And it
wouldn't help with libraries you've compiled yourself.
Well, no. (Duh. ;) )
Post by Tony Houghton
Couldn't the linking system deal with the problem itself? Eg each
library could have a hash code or something, which is also added to the
prelinked application. If the linker sees that the codes don't match, it
could fall back to dynamic linking,
Hash codes aren't good enough on their own: if there were a hash
collision, you'd get an unexplained crash.

So something smarter is already implemented. All dependent libraries are
checksummed and timestamped, and if any of them differ from the value on
the executable, prelinking is disabled:

,----[ glibc-2.3.4/elf/rtld.c:dl_main ]
|   for (; r_list < r_listend && liblist < liblistend; r_list++)
|     {
|       l = *r_list;
|
|       if (l == main_map)
|         continue;
|
|       /* If the library is not mapped where it should, fail. */
|       if (l->l_addr)
|         break;
|
|       /* Next, check if checksum matches. */
|       if (l->l_info [VALIDX(DT_CHECKSUM)] == NULL
|           || l->l_info [VALIDX(DT_CHECKSUM)]->d_un.d_val
|              != liblist->l_checksum)
|         break;
|
|       if (l->l_info [VALIDX(DT_GNU_PRELINKED)] == NULL
|           || l->l_info [VALIDX(DT_GNU_PRELINKED)]->d_un.d_val
|              != liblist->l_time_stamp)
|         break;
|
|       if (! _dl_name_match_p (strtab + liblist->l_name, l))
|         break;
|
|       ++liblist;
|     }
|
|   if (r_list == r_listend && liblist == liblistend)
|     prelinked = true;
|
|   if (__builtin_expect (GLRO(dl_debug_mask) & DL_DEBUG_LIBS, 0))
|     _dl_debug_printf ("\nprelink checking: %s\n",
|                       prelinked ? "ok" : "failed");
`----

(Some analogue of) this code has been present since the earliest days
of prelink support in glibc. prelink has always kept the checksum and
timestamp fields in dependent libraries updated.
Post by Tony Houghton
or redo the prelinking if it has
write permission on the binary.
Icck, no thanks. Prelinking needs to work across the whole system at
once to have a significant effect, and even running on *one binary*
can take >50s.

Plus, I really don't want ld-linux.so.2 fork()/exec()ing as root without
asking me. The bloody thing's complicated enough as it is. :)

(In addition, to have any effect, it'd have to do this and then
*re-relocate itself*, or re-exec() itself. Both strike me as, uh,
*unwise* tasks for a dynamic linker to try to do. Even the initial
relocation is agonising enough and a cause of enough subtle bugs
on new targets, and the limitations on what you can do before
relocation is complete are... extreme. :) )
Post by Tony Houghton
I also really like the idea of having all binaries stripped for
leanness, but saving the debugging information in a separate file so
they can still be debugged.
Already done :) see objcopy's --add-gnu-debuglink and objcopy/strip's
--only-keep-debug options.
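The usual dance looks like this (a sketch on a scratch copy of a binary;
the objcopy/strip options are the real ones, the file names are made
up):

```shell
cp /bin/ls myprog                       # scratch copy to play with

# 1. Save whatever debug info the binary carries into a separate file.
objcopy --only-keep-debug myprog myprog.debug

# 2. Strip the binary itself for leanness.
strip --strip-debug --strip-unneeded myprog

# 3. Embed a .gnu_debuglink section naming (and checksumming) the
#    debug file, so gdb can find it again automatically.
objcopy --add-gnu-debuglink=myprog.debug myprog
```

gdb then looks for myprog.debug next to the binary and under its
configured debug-file directory.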
--
...Hires Root Beer...
What we need these days is a stable, fast, anti-aliased root beer
with dynamic shading. Not that you can let just anybody have root.
--- John M. Ford
Tony Houghton
2005-03-18 15:12:36 UTC
Post by Nix
Post by Tony Houghton
Couldn't the linking system deal with the problem itself? Eg each
library could have a hash code or something, which is also added to the
prelinked application. If the linker sees that the codes don't match, it
could fall back to dynamic linking,
Hash codes aren't good enough on their own: if there were a hash
collision, you'd get an unexplained crash.
So something smarter is already implemented. All dependent libraries are
checksummed and timestamped, and if any of them differ from the value on
[...]

So it's already the situation that you can try to use prelinking, and if
bits haven't been updated properly it'll fall back to on-the-fly with no
harm done? That's all right then.
Post by Nix
Post by Tony Houghton
or redo the prelinking if it has
write permission on the binary.
Icck, no thanks. Prelinking needs to work across the whole system at
once to have a significant effect, and even running on *one binary*
can take >50s.
Why is it so slow compared to doing the linking on the fly?
Post by Nix
Plus, I really don't want ld-linux.so.2 fork()/exec()ing as root without
asking me. The bloody thing's complicated enough as it is. :)
Oh no, I wasn't suggesting that it should always be given that
permission, only that it could prelink if it already had the permission.
Post by Nix
(In addition, to have any effect, it'd have to do this and then
*re-relocate itself*, or re-exec() itself. Both strike me as, uh,
*unwise* tasks for a dynamic linker to try to do. Even the initial
relocation is agonising enough and a cause of enough subtle bugs
on new targets, and the limitations on what you can do before
relocation is complete are... extreme. :) )
Do you mean that everything depends on ld.so, so if it needed to prelink
itself it would have a bit of trouble?
Post by Nix
Post by Tony Houghton
I also really like the idea of having all binaries stripped for
leanness, but saving the debugging information in a separate file so
they can still be debugged.
Already done :) see objcopy's --add-gnu-debuglink and objcopy/strip's
--only-keep-debug options.
I know it can be done, I was just thinking it would be nice if it was
applied across the system in a standard way, eg add something like a
debug directory for every bin directory in the FHS.

On a related topic, I was playing with VDR a while back, which uses
dynamic libraries as plugins. I noticed that if I changed and recompiled
one of the plugins while it was running, it would crash. What would
cause that?
--
The address in the Reply-To is genuine and should not be edited.
See <http://www.realh.co.uk/contact.html> for more reliable contact addresses.
Darren Salt
2005-03-19 18:16:33 UTC
I demand that Tony Houghton may or may not have written...

[snip]
On a related topic, I was playing with VDR a while back, which uses dynamic
libraries as plugins. I noticed that if I changed and recompiled one of the
plugins while it was running, it would crash. What would cause that?
Open, truncate, write as opposed to unlink, open, write? Use of install or
'cp --remove-destination' (and a quick post to the VDR list) should be
adequate to fix this.
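The difference is easy to demonstrate with a hard link standing in for a
process that still has the old file mapped (a self-contained sketch;
only coreutils needed, and the file names are invented):

```shell
cd "$(mktemp -d)"

echo v1 > plugin.so
ln plugin.so inuse        # stands in for a running process's mapping
echo v2 > new.so

# Plain cp opens the destination and truncates it IN PLACE: same inode,
# so whoever still has the old file mapped sees the new bytes -- crash.
cp new.so plugin.so
cat inuse                 # prints: v2

# unlink-and-recreate (install, or cp --remove-destination) replaces
# the directory entry; the old inode survives for existing mappings.
echo v1 > plugin2.so
ln plugin2.so inuse2
cp --remove-destination new.so plugin2.so
cat inuse2                # prints: v1
```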
--
| Darren Salt | nr. Ashington, | d youmustbejoking,demon,co,uk
| Debian, | Northumberland | s zap,tartarus,org
| RISC OS | Toon Army | @
| Retrocomputing: a PC card in a Risc PC

array: n. a blast from a CRT.
Nix
2005-03-21 16:03:13 UTC
Post by Tony Houghton
Post by Nix
So something smarter is already implemented. All dependent libraries are
checksummed and timestamped, and if any of them differ from the value on
[...]
So it's already the situation that you can try to use prelinking, and if
bits haven't been updated properly it'll fall back to on-the-fly with no
harm done? That's all right then.
Exactly so. You can just run prelink out of cron and it'll just work :)
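A hypothetical cron entry (Fedora ships a similar /etc/cron.daily/prelink
script; -a/-m/-R are real prelink flags: all binaries, conserve memory,
randomise library base addresses):

```
# /etc/cron.d/prelink (illustrative)
# Re-prelink the whole system nightly.
30 4 * * * root /usr/sbin/prelink -amR > /var/log/prelink.log 2>&1
```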
Post by Tony Houghton
Post by Nix
Post by Tony Houghton
or redo the prelinking if it has
write permission on the binary.
Icck, no thanks. Prelinking needs to work across the whole system at
once to have a significant effect, and even running on *one binary*
can take >50s.
Why is it so slow compared to doing the linking on the fly?
Because prelinking has to compute addresses for shared libraries that
require no relocation for *any binary on the system* (if possible).

This means it has to determine sizes and load addresses and things for
every accessible binary and every shared library that one of those
binaries uses before it knows where to relocate an unprelinked shared
library to.
Post by Tony Houghton
Post by Nix
Plus, I really don't want ld-linux.so.2 fork()/exec()ing as root without
asking me. The bloody thing's complicated enough as it is. :)
Oh no, I wasn't suggesting that it should always be given that
permission, only that it could prelink if it already had the permission.
ld-linux.so.2 doesn't know how to prelink, though, only how to use the
few extra things that prelink adds (mostly checksum checking and biasing
stuff). Prelinking is far more complex --- `global relocation', sort of.
In some respects it does the same job as the dynamic linker: in other
ways, it does more.
Post by Tony Houghton
Post by Nix
(In addition, to have any effect, it'd have to do this and then
*re-relocate itself*, or re-exec() itself. Both strike me as, uh,
*unwise* tasks for a dynamic linker to try to do. Even the initial
relocation is agonising enough and a cause of enough subtle bugs
on new targets, and the limitations on what you can do before
relocation is complete are... extreme. :) )
Do you mean that everything depends on ld.so, so if it needed to prelink
itself it would have a bit of trouble?
Well, if you need to re-prelink the C library you're going to need
to re-prelink everything else anyway, yes --- but what I meant there was
that it'd either have to do the prelinking before doing any in-memory
relocation of itself, or it'd have to reinvoke itself to pick up
the changes --- and both are really hard.

(Among other things, the non-relocated ld.so is forbidden from calling
routines or accessing data in other *translation units*, because this
would go through the GOT or PLT, neither of which have been set up yet.
This is... rather confining.)
Post by Tony Houghton
Post by Nix
Already done :) see objcopy's --add-gnu-debuglink and objcopy/strip's
--only-keep-debug options.
I know it can be done, I was just thinking it would be nice if it was
applied across the system in a standard way, eg add something like a
debug directory for every bin directory in the FHS.
I think Fedora already does something like this (shipping debugging
info in the -devel packages or something like that).
Post by Tony Houghton
On a related topic, I was playing with VDR a while back, which uses
dynamic libraries as plugins. I noticed that if I changed and recompiled
one of the plugins while it was running, it would crash. What would
cause that?
There was at one point a misfeature in the Linux kernel such that
truncate() on a running binary yielded ETXTBSY, but doing the same on a
shared library was allowed (and crashed everything using that shared
library when the next page-in from that library took place). That
misfeature may still be there: I'm not sure, as I last verified its
presence in the 2.2.x days.

Of course you should always install things via unlink()-and-recreate
anyway.
--
...Hires Root Beer...
What we need these days is a stable, fast, anti-aliased root beer
with dynamic shading. Not that you can let just anybody have root.
--- John M. Ford
Tony Houghton
2005-03-22 20:33:08 UTC
Permalink
Post by Nix
Because prelinking has to compute addresses for shared libraries that
require no relocation for *any binary on the system* (if possible).
This means it has to determine sizes and load addresses and things for
every accessible binary and every shared library that one of those
binaries uses before it knows where to relocate an unprelinked shared
library to.
How does dynamic linking avoid the need to know about every binary a
library might be linked with? Does it reserve two different address
ranges for libraries and binaries?

Has anyone thought of making a compromise, using simple arrays for
relocation tables instead of having to look up names at load time?
--
The address in the Reply-To is genuine and should not be edited.
See <http://www.realh.co.uk/contact.html> for more reliable contact addresses.
Nix
2005-03-23 12:28:40 UTC
Post by Tony Houghton
Post by Nix
Because prelinking has to compute addresses for shared libraries that
require no relocation for *any binary on the system* (if possible).
This means it has to determine sizes and load addresses and things for
every accessible binary and every shared library that one of those
binaries uses before it knows where to relocate an unprelinked shared
library to.
How does dynamic linking avoid the need to know about every binary a
library might be linked with? Does it reserve two different address
ranges for libraries and binaries?
Well, normally when ld.so relocates a binary it only needs to know about
symbols in libraries that *this binary* (transitively) needs: it ensures
that all symbols in all those libraries are at addresses distinct from
all other such symbols.

As such, you could easily have this situation:

bin1:
        libreadline.so.4 => /lib/libreadline.so.4 (0x70048000)
        libhistory.so.4 => /lib/libhistory.so.4 (0x70090000)
        libncurses.so.5 => /lib/libncurses.so.5 (0x700a8000)
        libdl.so.2 => /lib/tls/libdl.so.2 (0x70104000)
        libc.so.6 => /lib/tls/libc.so.6 (0x70118000)
        /lib/ld-linux.so.2 (0x70000000)

bin2:
        libc.so.6 => /lib/tls/libc.so.6 (0x70048000)
        /lib/ld-linux.so.2 (0x70000000)

Note that libc.so.6 in bin2 has an address which *overlaps* the address
used for libreadline.so.4 in bin1. In non-prelinked, non-execstacked,
non-PaX Linux systems, this is how it always works: libraries are
relocated starting from the lowest available address, so they'll overlap
all over the place, and essentially every library needs relocation at
runtime to make them not overlap when starting any binary that uses that
library. This relocation is normally lazy --- i.e., it happens to each
function when that function is called --- but it still costs both at
runtime and at startup time. Plus, because relocation involves writing
to text pages (the GOT and PLT), it dirties those pages, leading to
greater memory consumption (those pages must now be swapped out, not
paged out, because they're no longer the same as they were in the binary).

You can see that this relocation processing is happening:

***@amaterasu 37 /tmp% LD_DEBUG=statistics ./bin2
22949:
22949: runtime linker statistics:
22949: total startup time in dynamic loader: 667605 clock cycles
22949: time needed for relocation: 339611 clock cycles (50.8%)
22949: number of relocations: 107
22949: number of relocations from cache: 5
22949: number of relative relocations: 3889
22949: time needed to load objects: 176055 clock cycles (26.3%)
22949:
22949: runtime linker statistics:
22949: final number of relocations: 126
22949: final number of relocations from cache: 5

Here's the same pair of binaries on a prelinked system (well, actually
this is a different architecture, but that's academic):

bin1:
        libreadline.so.4 => /lib/libreadline.so.4 (0x47d17000)
        libhistory.so.4 => /lib/libhistory.so.4 (0x47cc4000)
        libdl.so.2 => /lib/tls/libdl.so.2 (0x42019000)
        libc.so.6 => /lib/tls/libc.so.6 (0x41ed1000)
        libncurses.so.5 => /lib/libncurses.so.5 (0x433c6000)
        /lib/ld-linux.so.2 (0x41eb8000)

bin2:
        libc.so.6 => /lib/tls/libc.so.6 (0x41ed1000)
        /lib/ld-linux.so.2 (0x41eb8000)

Note that the addresses of libc.so.6 on these two independent binaries
are *identical*, and they're all nonoverlapping for both these
binaries. The same is true of the largest possible set of shared
libraries on the system, given address space constraints. Therefore,
no relocation is necessary for either of those binaries to start.

But note that allocating addresses such that every shared library has
a nonoverlapping address in every binary on the system (or as close to
that state as possible) requires a global pass over *every* binary
and *all* those binaries' dependent shared libraries: furthermore,
expanding one shared library by even a few Kb might cause it to
collide with some other library's assigned address range.

Here's proof of lack-of-relocation-processing:

***@hades 1 /home/nix% LD_DEBUG=statistics ./bin2
19661:
19661: runtime linker statistics:
19661: total startup time in dynamic loader: 1250146 clock cycles
19661: time needed for relocation: 14762 clock cycles (1.1%)
19661: number of relocations: 0
19661: number of relocations from cache: 19
19661: number of relative relocations: 0
19661: time needed to load objects: 217524 clock cycles (17.3%)
19661:
19661: runtime linker statistics:
19661: final number of relocations: 0
19661: final number of relocations from cache: 19

Note the difference in `time needed for relocation': 350,000 versus
15,000 ticks. (The increased `time needed to load objects' is because
this machine is faster than the nonprelinked one, so the disk hits take
more CPU time.)

If you turn on LD_BIND_NOW, the number of relocations without prelink
goes up to the thousands.

This was a tiny binary: for larger ones, like OpenOffice, the time
difference can be titanic. I mean, here's a nonprelinked copy of koffice
starting up and then getting ctrl-C'ed because I don't have an X
server on this console so I can't quit it normally ;)

22996:
22996: runtime linker statistics:
22996: total startup time in dynamic loader: 1687696747 clock cycles
22996: time needed for relocation: 1259756503 clock cycles (74.6%)
22996: number of relocations: 27617
22996: number of relocations from cache: 60427
22996: number of relative relocations: 64362
22996: time needed to load objects: 414493752 clock cycles (24.5%)
[...]
23001:
23001: runtime linker statistics:
23001: final number of relocations: 27092
23001: final number of relocations from cache: 58000

That's not a small cost: this was on a 500MHz UltraSPARC, so that ate at
*least* 2.5 seconds, and that was only one of about a dozen processes
that got started when kwrite started up. With prelink, all of that cost
vanishes.
Post by Tony Houghton
Has anyone thought of making a compromise, using simple arrays for
relocation tables instead of having to look up names at load time?
Windows does this (using integer `ordinals' as well as symbol names). It
speeds up some things (symbol comparison) and makes maintaining ABI
compatibility an utter nightmare. It's one to avoid. In any case, ELF
requires the symbol tables. :)

There are certain fairly simple tricks involving merging of relocation
sections that can reduce the number of symbol name comparisons required
by a large factor (usually about 30--80%): this is implemented via the
`-z combreloc' linker flag, which is on by default. Almost certainly
everything on your system is linked with this option already.

(Note that combreloc merely speeds up part of relocation processing:
prelink *eliminates* it.)
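You can check for yourself: -z combreloc sorts the relative relocations
to the front of the relocation section and records their count in a
RELCOUNT/RELACOUNT dynamic tag, which readelf displays (a sketch;
/bin/ls is just an example binary, and if the tag is absent grep simply
prints nothing):

```shell
# Binaries linked with -z combreloc carry a RELCOUNT/RELACOUNT dynamic
# tag giving the number of relative relocations sorted to the front.
readelf -d /bin/ls | grep -iE 'rela?count'
```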


A good read on prelink operation is the prelink paper prelink.pdf,
probably packaged with your copy of prelink. Section 2 discusses
relocation section merging and `-z combreloc'. (Parts of this are hard
to comprehend without a good understanding of ELF, but much can be
grasped anyway.)
--
This is like system("/usr/funky/bin/perl -e 'exec sleep 1'");
--- Peter da Silva
Tim Haynes
2005-03-16 20:07:35 UTC
Post by Manchardley de lWmprad
Hello, all
Q: Given the advantages of prelinking - faster startup and sometimes
better memory usage - why don't all distros do it by default? SUSE and
Ubuntu do not, in my recent experience.
I can only assume that there must be a downside to prelinking in some
circumstances. I just cannot figure out what they might be!
Any pointers to contraindications of prelinking would be welcome.
The results of updates to prelinking may give you minor heart attacks if
you're running aide or tripwire.

~Tim
Nix
2005-03-18 12:02:58 UTC
Post by Tim Haynes
Post by Manchardley de lWmprad
Hello, all
Q: Given the advantages of prelinking - faster startup and sometimes
better memory usage - why don't all distros do it by default? SUSE and
Ubuntu do not, in my recent experience.
I can only assume that there must be a downside to prelinking in some
circumstances. I just cannot figure out what they might be!
Any pointers to contraindications of prelinking would be welcome.
The results of updates to prelinking may give you minor heart attacks
if you're running aide or tripwire.
These programs should be using `prelink --verify' at the appropriate
places. That's what it's for.
--
...Hires Root Beer...
What we need these days is a stable, fast, anti-aliased root beer
with dynamic shading. Not that you can let just anybody have root.
--- John M. Ford