Not all of them are needed for all files at the moment, but the new
docbook build was missing some of the entities it used, as the files
weren't correctly copied around in all cases, and having the same set
across the board makes working with all of them a little easier.
Our integration tests need some additional dependencies to run and
function correctly, but while multiple places run them, there is no need
to also specify these dependencies in multiple places.
Michael Vogt [Tue, 29 Jul 2014 13:01:13 +0000 (15:01 +0200)]
Fix SmartConfigure to ignore ordering of packages that are already valid
With the change to SmartConfigure() in git commit 42d51f the ordering
code was trying to re-order dependencies even when this was not needed
at that point. Now it first checks all targets of the given dependency
and only if there is no good one does it try to reorder and
unpack/configure as needed.
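Roughly the idea, as a hedged sketch with toy types (these are not
APT's real classes or iterators):

    #include <functional>
    #include <vector>

    // Toy model of the "check targets first" idea.
    struct Target
    {
       bool alreadyValid;                       // unpacked/configured far enough
       std::function<bool()> unpackOrConfigure; // expensive fallback
    };

    // Only if no target of the dependency is already in a good state do
    // we fall back to the reorder/unpack/configure path.
    static bool SatisfyDependency(std::vector<Target> &targets)
    {
       for (Target const &t : targets)
          if (t.alreadyValid)                   // a good target exists: done
             return true;
       for (Target &t : targets)                // otherwise try to make one good
          if (t.unpackOrConfigure())
             return true;
       return false;
    }

The point is merely that the cheap validity check runs over all targets
before any reordering work is attempted.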
Michael Vogt [Wed, 16 Jul 2014 11:57:50 +0000 (13:57 +0200)]
StringToBool: only act if the entire string is consumed by strtol()
StringToBool uses strtol() internally to check if the argument is
a number. This function stops parsing as soon as it hits a non-digit,
so a string like "0ad" (which is a valid package name) is interpreted
as "0". The code now checks that the entire string is consumed, not
just a part of it. Thanks to Johannes Schauer for raising this issue.
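The pattern looks roughly like this; the helper name is made up for
illustration and is not the actual StringToBool code:

    #include <cstdlib>

    // strtol() reports via its end pointer how far it parsed, so "0ad"
    // only counts as a number if that pointer ends up at the
    // terminating '\0'.
    static bool IsWholeNumber(const char *text, long &value)
    {
       char *end = nullptr;
       value = std::strtol(text, &end, 10);
       // reject empty input and partially consumed strings like "0ad"
       return end != text && *end == '\0';
    }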
The behaviour of echo "\tA\t" differs between dash/zsh, which interpret
the \t as a tab, and bash, which prints it literally; the same goes for
other escape sequences, at least without the -e flag.
Switching to printf makes this more painless^Wportable, so that the
tests also work correctly with bash as sh.
(commit message by committer, patch otherwise unmodified)
A call to UniqFindTagWrite can trigger the need for a bigger mmap, which
is usually satisfied by moving it, but with this move all pointers into
it become invalid (and have to be remapped). The compiler may calculate
the pointer on the left-hand side before executing the call though, so
it tries to store the returned value at the old location, resulting in
a segfault. We solve this by using a temporary variable, as we did for
the other instances of this problem before.
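A simplified illustration of the pattern with a toy container instead
of APT's mmap (the names are invented for this sketch):

    #include <cstddef>
    #include <vector>

    // Toy stand-in for a growable map: Append() may reallocate the
    // storage, which invalidates every reference taken into it before.
    struct GrowingStore
    {
       std::vector<long> data;
       std::size_t Append(long v) { data.push_back(v); return data.size() - 1; }
       long &At(std::size_t i) { return data[i]; }
    };

    void UpdateEntry(GrowingStore &store, std::size_t entryIndex, long value)
    {
       // Risky (before C++17 the evaluation order of the '=' operands is
       // unspecified):
       //   store.At(entryIndex) = static_cast<long>(store.Append(value));
       // The reference on the left may be formed before Append() runs and
       // then point into freed memory once the storage has moved.

       // Safe: let the call that may reallocate finish first, then access.
       long const newIndex = static_cast<long>(store.Append(value));
       store.At(entryIndex) = newIndex;
    }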
The name suggests that it is supposed to substitute a variable with a
value, but we tend to use it in a more liberal replace_all() fashion.
That breaks, however, if either of the parameters is empty or, more
importantly, if two "variable" occurrences directly follow each other.
don't send pkg from an unknown architecture via EDSP
APT's cache can include packages from architectures dpkg has no
knowledge about, which can therefore not be installed; they are kept
e.g. to allow easy lookups. There is no point in telling external
solvers about them though, and some solvers might even be really
talkative about ignoring them if we do.
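The filtering idea, sketched with invented names rather than the actual
EDSP writer:

    #include <set>
    #include <string>
    #include <vector>

    struct Package            // toy stand-in for a cache entry
    {
       std::string name;
       std::string arch;
    };

    // Only packages of architectures dpkg can actually install end up
    // in the scenario sent to the external solver; the rest is skipped.
    static std::vector<Package> ScenarioPackages(
          std::vector<Package> const &cache,
          std::set<std::string> const &dpkgArchs)
    {
       std::vector<Package> out;
       for (Package const &p : cache)
          if (dpkgArchs.find(p.arch) != dpkgArchs.end())
             out.push_back(p);
       return out;
    }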
In commit 21b3eac8 I promoted the check for installable dependencies to
a pre-install check, which also reverts to a known-good candidate (the
installed version) if it fails. This revert was done even for
user-requested candidate switches, which disabled our Broken detection,
so that install requests which are impossible to satisfy no longer
fail, but print an (incomplete) solution proposal and then exit
successfully.
Michael Vogt [Fri, 30 May 2014 12:47:56 +0000 (14:47 +0200)]
Show unauthenticated warning for source packages as well
This will show the same unauthenticated warning for source packages
as for binary packages and will not download a source package if
it is unauthenticated. This can be overridden with
--allow-unauthenticated
EDSP code uses pipes opened via an FD as sources, and later the
modification times and filesize of those files are read, but never
really used again. The result we get from FileFd is probably wrong, but
as we don't use it anyway, we just don't fall back if we have nothing
to fall back to.
Solvers are supposed to exit successfully even if they haven't found a
solution, but a solver which fails drastically (e.g. segfaults) should
be detected and dealt with accordingly instead of being ignored.
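One common way to make such failures visible is to look at the child's
exit status; a sketch (not the actual EDSP client code) could look like
this:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <cstdio>

    // Wait for the external solver process and distinguish "exited
    // without a solution" (still a normal exit) from "died drastically",
    // e.g. a segfault.
    static bool SolverFinishedSanely(pid_t solver)
    {
       int status = 0;
       if (waitpid(solver, &status, 0) != solver)
          return false;
       if (WIFSIGNALED(status))          // killed by a signal (SIGSEGV, ...)
       {
          std::fprintf(stderr, "solver died with signal %d\n", WTERMSIG(status));
          return false;
       }
       return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }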
if Resolver fails, do not continue even if not broken
This can happen if the request is already a well-formed request all by
itself (e.g. the package has no dependencies), but the resolver found
a reason not to accept it as a solution. Our EDSP 'dump' solver, for
example, shouldn't be able to trigger an install, which it does
otherwise.
As outlined in #748355, apt segfaulted if it encountered a loop in
which a package pre-depends on a package that conflicts with the first
one, as it ended up in an endless loop trying to unpack 'the other
package'. In this specific case an essential package is involved, so a
lot of force needs to be applied, but the same situation can also be
caused by 'normal' tight loops and highlights a problem in how we
handle breaks which we want to avoid.
The fix comes in multiple entangled changes:
1. All Smart* calls are guarded with loop detection (a loop-guard
sketch follows the list). Some already had it, some had parts of it,
some did it incorrectly, and some didn't even try.
2. Temporary removes done to avoid a loop (which happens if a loop is
detected) prevent the unpack of the looping package: we tried to unpack
it to avoid the conflict/breaks, but due to the loop we couldn't, so we
remove/deconfigure it instead, which means we can't unpack it now.
3. Conflicts and breaks are handled very similarly instead of
duplicating most of the code. The only remaining difference is, as it
should be: deconfigure is enough for breaks, while for conflicts we
need the big hammer.
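The guarding mentioned in item 1, sketched with toy types instead of
APT's real package manager: each Smart* step marks the package it is
working on and bails out if it meets a package that is already in
flight, breaking the loop instead of recursing forever.

    #include <set>
    #include <string>

    // Toy loop guard: 'inProgress' tracks packages currently being
    // handled by a Smart* step; meeting one of them again means we
    // closed a loop and must break it (e.g. by temporarily
    // removing/deconfiguring) instead of recursing further.
    struct LoopGuard
    {
       std::set<std::string> inProgress;

       bool Enter(std::string const &pkg)
       {
          return inProgress.insert(pkg).second;   // false: loop detected
       }
       void Leave(std::string const &pkg)
       {
          inProgress.erase(pkg);
       }
    };

A real implementation also has to clear the mark again on every exit
path; Leave() merely stands in for that here.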
consistently fail if Smart* packagemanager actions fail
These failure conditions come with an error message attached, and the
conditions can't be worked around (otherwise this would have been done
instead of returning failure), so not erroring out here means that we
execute dpkg later on with a known not-working ordering, adding insult
(our own error messages at the end) to injury (the dpkg failure).