I understand the distinction between public and private variables and functions. I don't have a grasp of why this gcc visibility thing makes a difference.
Why is enabling this good for me as a builder/packager?
As an end-user?
I get the feeling that by hiding the private, non-shared stuff, executables and libraries are smaller in size, which theoretically reduces load times, which theoretically should improve response on the desktop. But by how much?
A layman's explanation is much appreciated. :)
Darrell
On 10 January 2012 16:42, Darrell Anderson humanreadable@yahoo.com wrote:
10 seconds on google:
from: http://gcc.gnu.org/wiki/Visibility
Put simply, it hides most of the ELF symbols which would have previously (and unnecessarily) been public. This means:

- *It very substantially improves load times of your DSO (Dynamic Shared Object).* For example, a huge C++ template-based library which was tested (the TnFOX Boost.Python bindings library) now loads in eight seconds rather than over six minutes!

- *It lets the optimiser produce better code.* PLT indirections (when a function call or variable access must be looked up via the Global Offset Table such as in PIC code) can be completely avoided, thus substantially avoiding pipeline stalls on modern processors and thus much faster code. Furthermore, when most of the symbols are bound locally, they can be safely elided (removed) completely through the entire DSO. This gives greater latitude especially to the inliner, which no longer needs to keep an entry point around "just in case".

- *It reduces the size of your DSO by 5-20%.* ELF's exported symbol table format is quite a space hog, giving the complete mangled symbol name, which with heavy template usage can average around 1000 bytes. C++ templates spew out a huge amount of symbols and a typical C++ library can easily surpass 30,000 symbols, which is around 5-6Mb! Therefore if you cut out the 60-80% of unnecessary symbols, your DSO can be megabytes smaller!

- *Much lower chance of symbol collision.* The old woe of two libraries internally using the same symbol for different things is finally behind us with this patch. Hallelujah!

Although the library quoted above is an extreme case, the new visibility support reduced the exported symbol table from > 200,000 symbols to less than 18,000. Some 21Mb was knocked off the binary size as well!
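To make that a bit more concrete, here is a rough sketch of what it looks like at the source level (the file, class, and macro names are made up for illustration; this is not actual TDE code). The whole file is compiled with -fvisibility=hidden, and only what is explicitly marked with the visibility attribute stays in the exported ELF symbol table:

    // mylib.cpp -- hypothetical example, not from the wiki or from TDE.
    // Build as a shared library with something like:
    //   g++ -fPIC -shared -fvisibility=hidden mylib.cpp -o libmylib.so
    #define MYLIB_EXPORT __attribute__((visibility("default")))

    // Public API: explicitly marked "default", so its symbols stay in the
    // DSO's exported (dynamic) symbol table and other code can link to them.
    class MYLIB_EXPORT Widget {
    public:
        void show();
    };

    // Internal helper: because the file is compiled with -fvisibility=hidden
    // and this class is not marked, it is NOT exported. It takes no space in
    // the exported symbol table, the dynamic linker never has to resolve it
    // at load time, and it cannot collide with an identically named symbol
    // in some other library.
    class WidgetPrivate {
    public:
        void layout();
    };

    void Widget::show()          { /* draw the widget */ }
    void WidgetPrivate::layout() { /* internal layout pass */ }

Calls from Widget::show() into WidgetPrivate can then be bound directly inside the library instead of going through the PLT, which is where the "better code" point above comes from.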
>> A layman's explanation is much appreciated. :)
> 10 seconds on google:
I spent about 10 minutes reading. :) That is why I asked for a layman's explanation. A flurry of acronyms and high tech jibber jabber is not a layman's explanation. :)
Be nice to us old folks, Calvin!
Darrell
+1. Not very many people know C++ internals well enough to really know what is going on with visibility support.
The claims on that website are a bit overinflated (to their credit they did state this). My testing on TDE has indicated a slight but noticeable speed increase with something like a 10% decrease in core library size. To be fair this test was run on a heavily loaded development system over NFS, so a lightly used system with lots of RAM and a fast hard disk may notice no speed improvement at all.
TDE needs to be built with the visibility flag set for arts, tdelibs, and tdebase to see the improvements. I would treat this flag as beta quality for now until we get widespread testing of the resultant code, as older versions (3.x) of gcc had problems implementing visibility support on KDE.
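For the gcc 3.x caveat, a rough sketch of the usual kind of compile-time guard (the macro and class names here are illustrative, not the actual arts/tdelibs/tdebase macros): the visibility attribute is applied only when the compiler is gcc 4 or newer, and expands to nothing otherwise, so the same headers still build on older toolchains.

    // visibility_guard.h -- illustrative only; TDE's real export macros may differ.
    // Only gcc >= 4 gets the visibility attribute; on the gcc 3.x series,
    // which handled visibility poorly for KDE-sized C++ code, the macro
    // expands to nothing and every symbol stays exported as before.
    #if defined(__GNUC__) && (__GNUC__ >= 4)
    #  define EXAMPLE_EXPORT __attribute__((visibility("default")))
    #else
    #  define EXAMPLE_EXPORT
    #endif

    class EXAMPLE_EXPORT SomePublicClass {
    public:
        void doSomething();
    };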
I hope this helps some!
Tim
> +1. Not very many people know C++ internals well enough to really know what is going on with visibility support.
I'm studying C++ on my own but have a long, long, long way to go. I'm getting so I can at least read C++ code without crying or running my finger up and down across my lips. :)
I understand that the ELF standard basically describes the way files are structured internally so that whatever is needed can be found reliably. I can see that reducing the size of any file theoretically helps with loading and run times, but by how much I have no clue. :)
> The claims on that website are a bit overinflated (to their credit they did state this). My testing on TDE has indicated a slight but noticeable speed increase with something like a 10% decrease in core library size. To be fair this test was run on a heavily loaded development system over NFS, so a lightly used system with lots of RAM and a fast hard disk may notice no speed improvement at all.
What about systems with modest RAM and slow hard disks, i.e. old systems?
> TDE needs to be built with the visibility flag set for arts, tdelibs, and tdebase to see the improvements. I would treat this flag as beta quality for now until we get widespread testing of the resultant code, as older versions (3.x) of gcc had problems implementing visibility support on KDE.
I'm game for the experiment. I'm backporting the patches to 3.5.13 too.
> I hope this helps some!
Yes. Thank you! :)
Darrell