Forking only after being ready to accept clients avoids races with the
tests, which sometimes failed on the first 'apt-get update' (or similar)
under the previous start-in-the-background-and-hope-for-the-best approach…
The commit also fixes some overlooked output-order changes regarding
Description-md5 and (I-M-S) race conditions in various tests.
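A minimal sketch of the fork-after-listen pattern; the socket setup and names are illustrative, not the actual test webserver code. The point is that the socket is bound and listening before fork(), so once the parent returns the tests can connect immediately instead of sleeping and hoping:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static pid_t StartServer(int const port)
{
   int const sock = socket(AF_INET, SOCK_STREAM, 0);
   if (sock < 0)
      return -1;
   sockaddr_in addr = {};
   addr.sin_family = AF_INET;
   addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
   addr.sin_port = htons(port);
   // bind and listen *before* forking, so a returned pid means "ready"
   if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0 ||
       listen(sock, 16) != 0)
   {
      close(sock);
      return -1;
   }
   pid_t const child = fork();
   if (child == 0)
   {
      // child: serve requests on the already-listening socket
      for (;;)
      {
         int const client = accept(sock, nullptr, nullptr);
         if (client >= 0)
            close(client); // a real server would answer the request here
      }
   }
   close(sock); // parent: the socket already accepts clients, no sleep needed
   return child;
}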
fix SHA2* cleanups to zero out the complete context
GCC 4.8 is now clever enough to warn about:
contrib/sha2_internal.cc: In function ‘char* SHA256_End(SHA256_CTX*, char*)’:
contrib/sha2_internal.cc:656:31: warning: argument to ‘sizeof’ in ‘void*
memset(void*, int, size_t)’ call is the same expression as the destination;
did you mean to dereference it? [-Wsizeof-pointer-memaccess]
MEMSET_BZERO(context, sizeof(context));
So fix it as suggested. It's interesting though that the SHA2*
calculation, as far as we need it, works even without the zeroing.
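The fix boils down to dereferencing the pointer so that sizeof yields the size of the context structure rather than the size of a pointer; a simplified illustration (the context type here is a stand-in, not the real SHA256_CTX layout):
#include <cstring>

struct SHA256_CTX { unsigned int state[8]; unsigned long long bitcount; unsigned char buffer[64]; };

static void Cleanup(SHA256_CTX *context)
{
   // wrong: zeroes only sizeof(SHA256_CTX*) bytes, i.e. the size of a pointer
   // memset(context, 0, sizeof(context));

   // right: zeroes the complete context structure
   memset(context, 0, sizeof(*context));
}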
The for-loop iterating over the DepIterators which need configuration
can (and in 'complicated' situations will) run multiple times, so we
can't just call GlobOr on the DepIterator directly as it modifies it.
Otherwise the next iteration over the list ends up checking another
dependency, leading us either into an 'Internal error, packages left
unconfigured. foopkg' or, if we are 'lucky', into a calculated solution
which might break down the line.
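A hedged sketch of the pitfall, with stand-in types rather than APT's real DepIterator and GlobOr; they share only the property that the helper advances the position it is handed:
#include <list>
#include <vector>

typedef std::vector<int>::iterator DepPosition;

// stand-in for GlobOr: advances the given position past the whole or-group
static void GroupOr(DepPosition &D) { ++D; }

static void ConfigureAll(std::list<DepPosition> &needConfigure)
{
   // this loop can run more than once over the same list ...
   for (std::list<DepPosition>::iterator It = needConfigure.begin();
        It != needConfigure.end(); ++It)
   {
      DepPosition D = *It; // ... so work on a copy: calling GroupOr(*It)
      GroupOr(D);          // directly would mutate the stored position and a
                           // later run would end up checking another dependency
   }
}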
With the self-grown splitting we got the problem of not recovering
from networks which just reply with invalid data, like those sending
us login pages to authenticate with the network (e.g. hotels).
The good thing about the InRelease file is that we know it must be
clearsigned (a Release file might or might not have a detached sig),
so if we get a file but are unable to split it, something is seriously
wrong and there is not much point in trying further.
The Acquire system already looks out for a NODATA error from gpgv,
so this adds a new error message, including this magic word, which is
sent to the acquire system in case the splitting we now do ourselves
fails.
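A hedged sketch of the idea; the function name and message wording are illustrative, only the presence of the NODATA marker matters:
#include <string>

// if splitting the clearsigned file fails (e.g. a captive-portal login
// page was served instead), report an error carrying the NODATA marker
// the Acquire system already knows how to handle
static bool SplitClearSignedFile(std::string const &file, std::string &error)
{
   bool const split = false; // pretend the split failed for this sketch
   if (split == false)
   {
      error = "Splitting up " + file + " into data and signature failed (NODATA)";
      return false;
   }
   return true;
}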
We have a test which traditionally required lighttpd to be executed,
as it needs a webserver supporting some kind of URI rewriting.
Now, with a few lines of code, our own webserver can do this and the
testcase can be enabled by default. This test hinted at the bug fixed
in the previous commit, so having more tests which can easily be run
is a good thing.
Before we download the 'new' InRelease file, the old file is moved
out of the way under the name 'foobar_InRelease.reverify'. So, if no
partial file for the 'new' file exists, take the modification time from
this reverify file; then, if we get an IMS hit for the InRelease file,
we can move the reverify file back as the new file rather than
downloading the 'new' file even though we already have it.
We do the same for Release files, and this happened to work until the
reverify renaming was corrected for InRelease files.
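A minimal sketch of the mtime seeding, assuming plain stat() and illustrative names; the real acquire items keep this in their own item state:
#include <string>
#include <ctime>
#include <sys/stat.h>

static time_t InitialMTime(std::string const &partial, std::string const &reverify)
{
   struct stat Buf;
   if (stat(partial.c_str(), &Buf) == 0)
      return Buf.st_mtime; // resuming: use the partial file's time
   if (stat(reverify.c_str(), &Buf) == 0)
      return Buf.st_mtime; // no partial file: seed IMS from the reverify copy
   return 0;               // nothing on disk: unconditional download
}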
APT needs to acquire data in a secure fashion over an inherently
insecure medium, known as the internet, while communicating with
unreliable partners, known as webservers and proxies.
For our integration tests we have so far relied on 'normal' webservers,
but all of them have certain quirks, and none is able to provide us
with all the quirks which can be observed in the wild and which we
therefore have to test against. So this webserver isn't trying to be
fast, secure or feature-complete, but to provide all the quirks we
need in a consistent way.
This webserver also makes the APT project self-contained, as it is now
able to generate, serve as well as acquire package indexes. ;)
try defaults if auto-detection failed in apt-cdrom
The default is to ask udev for location and mountpoints of CD-ROMs,
but the old way of specifying the mountpoint is still available and
is tried now in case udev doesn't find any CD-ROM.
It will probably fail too, but this way we get a bunch of error
messages and the user can get an idea of how to make their setup work
even if udev can't be convinced to return something useful.
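A hedged sketch of the fallback order, with stand-in types and helper names rather than apt-cdrom's real detection code:
#include <string>
#include <vector>

struct CdromDevice { std::string device, mountpoint; };

// stand-ins: udev-based detection and the configured default mount point
static std::vector<CdromDevice> ScanViaUdev() { return {}; }
static std::string ConfiguredMountPoint() { return "/media/cdrom"; }

static std::vector<CdromDevice> FindCdroms()
{
   std::vector<CdromDevice> found = ScanViaUdev();
   if (found.empty() == false)
      return found;
   // udev found nothing: fall back to the old configured mount point so
   // the user at least gets concrete error messages to work with
   CdromDevice fallback;
   fallback.mountpoint = ConfiguredMountPoint();
   found.push_back(fallback);
   return found;
}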
do not blindly assume that all package stanzas have a "Description:"
field in 'apt-cache show' as well as in the cache creation itself.
We now instead assume that if the stanza has a Description it will not
be the first field, as we look for "\nDescription" to take care of the
MD5sum as well as (possibly ignored) translated Descriptions embedded
in the package stanza.
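In essence it is a substring check; a minimal sketch (function name illustrative):
#include <string>

// "\nDescription" matches Description:, Description-md5: and translated
// Description-<lang>: fields at the start of a later line; by assumption a
// real Description is never the very first field of the stanza
static bool HasDescription(std::string const &stanza)
{
   return stanza.find("\nDescription") != std::string::npos;
}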
ensure state-dir exists before copying cdrom files
We do the same in the acquire system which handles the 'normal'
downloads, so do it here as well even though it's unlikely anyone
will ever notice (besides testcases of course …)
fail in CopyFile if the FileFds have the error flag set
Testing for global PendingErrors in users of CopyFile is incorrect
insofar as unrelated errors will prevent us from copying perfectly
fine files; checking the validity of the files is better done in
CopyFile itself as it already checks that the files are at least open.
Also add a higher-level error message to the error stack if it fails.
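A hedged sketch with simplified stand-in types; the method names mirror what the text describes (an open check plus an error flag), not necessarily APT's exact FileFd API:
struct File
{
   bool open, failed;
   bool IsOpen() const { return open; }
   bool Failed() const { return failed; }
};

static bool CopyFile(File const &From, File const &To)
{
   // refuse to copy if either side isn't open or already carries an error,
   // instead of consulting the global pending-error state
   if (From.IsOpen() == false || From.Failed() == true ||
       To.IsOpen() == false || To.Failed() == true)
      return false; // the higher-level error message would be pushed here
   // ... actual byte copy ...
   return true;
}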
For testcases it might sometimes be handy to add trap-actions
before the general cleanup, e.g. if a test has set directories read-
only, which rm doesn't want to remove even with --force applied
(it's fine with files though).
OpenDescriptor should autoclose fd always on error
OpenInternDescriptor failures would cause additional errors to be
generated by double-closing an fd. Other errors (although these are
only generated if the method is used incorrectly, so they are
unlikely) didn't close the fd at all.
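A hedged sketch of the error-path rule, with an illustrative helper rather than the real OpenDescriptor:
#include <unistd.h>

static bool AdoptDescriptor(int &Fd, bool const AutoClose, bool const SetupOk)
{
   if (SetupOk == false)
   {
      // close exactly once on the error path ...
      if (AutoClose == true && Fd >= 0)
         close(Fd);
      Fd = -1; // ... and mark it invalid so a destructor can't close it again
      return false;
   }
   return true;
}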
don't explicitly init ExtractTar InFd with invalid fd
The default constructor of the FileFd will kick in anyway and knows
that the Fd is invalid, while with this explicit call it has to be
assumed that the fd is in fact valid, which might generate errors in
the future.
set Fail flag in FileFd on all errors consistently
Previously some errors would set the Fail flag while others didn't,
without a clear reason, even though all errors leave a bad FileFd
behind, so we now use a helper to ensure that all errors set the flag.
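A minimal sketch of such a helper, with a simplified error store instead of APT's global error stack:
#include <cstdarg>
#include <cstdio>

struct SimpleFd
{
   bool FailFlag = false;
   char LastError[256] = {};

   // every error path goes through this one helper, so the message and the
   // Fail flag can't get out of sync; callers can simply 'return FdError(...)'
   bool FdError(char const *Format, ...)
   {
      FailFlag = true;
      va_list args;
      va_start(args, Format);
      vsnprintf(LastError, sizeof(LastError), Format, args);
      va_end(args);
      return false;
   }
};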
In the past packages were flagged "Protected" so that install/
remove markings were issued before the ProblemResolver.
Nowadays, the marking methods instead check if they are allowed to
modify the marking of a package, so that markings set by FromUser
calls are not overwritten anymore by automatic calls, which eliminates
the need for InstallProtect, which just eats CPU now.
The method itself is left untouched for now in case a frontend still
needs it for some weird use case, but such uses should be eliminated.
Splits the big loop over dependencies in SmartConfigure which unpacks and
configures dependencies into two loops and reverses their order, so that all
dependencies which need to be unpacked are handled first and only after that
configures are issued for the dependencies.
This is needed as otherwise the unpack of a (new) dependency would be issued
in between the configure calls for two (or more) packages which form a loop,
which means the configure calls aren't part of the same dpkg call and
therefore dpkg bails out.
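A hedged sketch of the restructuring with stand-in types; the real SmartConfigure works on DepIterators and queues dpkg actions, but the two-pass shape is the point:
#include <string>
#include <vector>

struct DepAction { std::string pkg; bool needsUnpack, needsConfigure; };

static void Unpack(std::string const &/*pkg*/) {}    // stand-in for the unpack call
static void Configure(std::string const &/*pkg*/) {} // stand-in for the configure call

static void HandleDeps(std::vector<DepAction> const &deps)
{
   for (auto const &d : deps)       // first pass: issue all needed unpacks
      if (d.needsUnpack)
         Unpack(d.pkg);
   for (auto const &d : deps)       // second pass: only now issue configures, so
      if (d.needsConfigure)         // looped packages stay in the same dpkg call
         Configure(d.pkg);
}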
Such tight loops should really be avoided as they are usually wrong – and in
reality the dependencies in libreoffice were greatly simplified thanks to
Rene Engelhard, so the problem is gone for the benefit of all.
fix priority sorting by preferring higher in MarkInstall
Used to work until a certain (here unnamed) person came along and used
the wrong operator, causing low-priority packages to be sorted above
high-priority packages while choosing a provider, in commit
2b5c35c7bb915dbd46fefd7c79f05364ba22f93b from Nov 2011.
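A generic illustration of the one-operator bug class, assuming here that a larger value means a higher priority; APT's internal priority encoding is not claimed:
struct Provider { int priority; };

static Provider const * PickProvider(Provider const *a, Provider const *b)
{
   // the buggy variant compared the wrong way around and thus preferred
   // the lower priority; the intent is to prefer the higher one
   return (a->priority > b->priority) ? a : b;
}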
Doing Removes early is good to have them out of the way, so they
don't break 'Inst' or 'Conf' chains, but scoring them above Essentials
means that we end up upgrading (many) less important packages before
we handle big stuff like libc6 or debconf, which not only fails if those
less important packages have unannounced (strict) dependencies, but also
leads to having these packages unconfigured for a long time, triggering
bugs in maintainer scripts for no good reason (#708831).
So this commit sets the default value for remove scores to 100, which
is below the one for essentials (200) and a lot lower than the previous
default value (500).
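As a sketch of the resulting ordering only, with the values taken from the text and everything else illustrative:
static int BaseScore(bool const essential, bool const markedForRemoval)
{
   if (essential == true)
      return 200;   // essentials like libc6 or debconf come first
   if (markedForRemoval == true)
      return 100;   // removals: still early, but after the essentials
   return 0;        // ordinary packages start from their own scoring
}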
rewrite pkgOrderList::DepRemove to stop incorrect immediate setting
Some squeeze → wheezy upgrades indicate that DepRemove runs amok
in complicated setups as it wasn't correctly working with or-groups.
Completely rewritten, the check now moves from or-group to or-group
instead.
The behavior should be the same as the code before, but
(hopefully) with fewer bugs and more comments.
remove -ldl from cdrom and -lutil from apt-get linkage
Building src:apt shows:
dpkg-shlibdeps: warning: package could avoid a useless dependency if
debian/apt/usr/lib/apt/methods/cdrom was not linked against libdl.so.2
(it uses none of the library's symbols)
dpkg-shlibdeps: warning: package could avoid a useless dependency if
debian/apt/usr/bin/apt-get was not linked against libutil.so.1 (it uses
none of the library's symbols)
non-inline RunGPGV methods to restore ABI compatibility with previous versions to fix partial upgrades (Closes: #707771)
The rename in 0.9.7.9~exp2 moved the method body into the class definition,
which means it became inline, which isn't ABI compatible. The reverse,
moving from inline to non-inline, is safe though.
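A generic illustration of the ABI point, not APT's actual class or signature:
class Verifier
{
public:
   bool RunGPGV(int fd); // declared only: the out-of-line symbol stays exported
};

// a method defined inside the class body is implicitly inline and its symbol
// may not be emitted at all, breaking binaries built against the old library;
// defining it out of line keeps the symbol and thus ABI compatibility
bool Verifier::RunGPGV(int fd)
{
   return fd >= 0; // body unchanged in substance, only its location matters
}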
Michael Vogt [Wed, 8 May 2013 15:46:31 +0000 (17:46 +0200)]
* apt-pkg/algorithms.cc:
- Do not propagate negative scores from rdepends. Propagating the absolute
value of a negative score may boost obsolete packages and keep them
installed instead of installing their successors. (Closes: #699759)
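A hedged sketch of the scoring rule, with a simplified score store rather than pkgProblemResolver's internals:
// a package collects score from the packages depending on it (rdepends),
// but only positive contributions: propagating the absolute value of a
// negative score would boost exactly the obsolete packages that should
// score badly
static void PropagateFromRDepend(int &packageScore, int const rdependScore)
{
   if (rdependScore > 0)
      packageScore += rdependScore;
}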