discard candidates via IsInstallOk to allow override
In commit 446551c8 I changed MarkInstall to discard the candidate if the
candidate can't satisfy the dependency. This breaks interactive solvers
like aptitude which can change the candidate on-the-fly later.
In commit df77d8a5 I introduced this 'early' loop-breaking to begin with. It
can't be that helpful for interactive solvers either, but it makes perfect
sense for non-interactive ones as it stops them from exploring trees which
can't be satisfied. It isn't perfect though, as ideally we would run this
check before auto-installing the first dependency.
This commit therefore moves the check into its own IsInstallOk hook so that
frontends can override it if they want to. In exchange, the loop-breaking is
removed from MarkInstall itself and the check is performed before any
dependency is installed.
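A minimal sketch of what this enables, assuming the hook is a virtual method
on pkgDepCache roughly as declared below (the exact signature and parameter
names are my assumption, not quoted from the commit):
#include <apt-pkg/depcache.h>
// Hedged sketch: a frontend-side depcache that accepts the install at this
// point and postpones the "can the candidate satisfy it?" decision, since
// an interactive solver may change the candidate on the fly later anyway.
class InteractiveDepCache : public pkgDepCache
{
   public:
   InteractiveDepCache(pkgCache * const Cache) : pkgDepCache(Cache) {}
   // Assumed shape of the hook; adjust to the actual declaration.
   virtual bool IsInstallOk(pkgCache::PkgIterator const &Pkg, bool AutoInst,
                            unsigned long Depth, bool FromUser)
   {
      // Do not break the loop here; decide about candidates later.
      return true;
   }
};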
do IsInstallOk call in MarkInstall unconditionally
Hooked checks could be influenced by AutoInst, as a lot can happen between a
call without this bit set and one with it. The real cache-hit
check is above this call already. Individual hooked checks can then
inspect the state if they want to cache. Calling them multiple times
shouldn't be a problem either way.
We have to properly close our pseudo terminals even in error cases
before we call post-invoke scripts. This is done now by breaking from
the dpkg calling loop instead of copying the handling, which did it in
the wrong order before.
This also ensures that our state file is written in error cases to record
the auto-bit and co, as this was forgotten before.
These methods should not be used by anyone except the library itself as they
are helpers for the specific class and therefore perfect candidates for
hiding.
While it would be a huge undertaking to enable this for our public libraries,
as basically everything we exported so far could be seen as public interface,
our private library is new and under our full control, so we can do whatever
we like with it. The benefits are not that big in return of course, but it
reduces the size a bit, so that's great nonetheless.
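For illustration, the usual GCC mechanism behind such hiding looks roughly
like this (apt carries a macro for it in its headers; the macro name below is
from memory and may differ):
// Symbols marked hidden are not exported from the shared object, which
// keeps the private library's interface small and shrinks it a bit.
#if defined(__GNUC__)
   #define APT_HIDDEN __attribute__((visibility("hidden")))
#else
   #define APT_HIDDEN
#endif
class Example
{
   public:
   void PublicInterface();           // still exported
   APT_HIDDEN void InternalHelper(); // usable only inside the library
};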
only consider versioned kernel packages in autoremove
Metapackages like "linux-image-amd64" are otherwise matched by our
extraction as well, which later on can't be successfully compared via
dpkg --compare-versions as the 'amd64' bit isn't a version number.
(Luckily none of our architectures starts with a digit.)
This was broken by me in 0.9.16 as I moved a shell-glob matcher to a
regex-based one which has slightly different semantics regarding '*'.
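As a standalone illustration of the failure mode and the fix (this uses
std::regex for brevity; it is not the matcher code in apt):
#include <cassert>
#include <regex>
int main()
{
   // A shell glob "linux-image-*" naively turned into the regex
   // "linux-image-.*" still matches the metapackage "linux-image-amd64",
   // whose "amd64" suffix is not a version number. Requiring a leading
   // digit (no architecture starts with one) keeps only versioned packages.
   std::regex const loose("^linux-image-.*");
   std::regex const versioned("^linux-image-[0-9].*");
   assert(std::regex_search("linux-image-amd64", loose));
   assert(!std::regex_search("linux-image-amd64", versioned));
   assert(std::regex_search("linux-image-3.13-1-amd64", versioned));
   return 0;
}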
It can happen that the content in our buffer is not enough to produce
meaningful output, in which case liblzma creates no output, but still reports
that everything is okay and that we should go on.
The code assumed it had reached the end though if it encountered a null read,
so this commit makes such a read look like it was interrupted, just like the
low-level read() on uncompressed files could be.
It also fixes the handling of interrupted reads, as until now our loop would
still break even if we wanted it to continue.
(This bug triggers our usual "Hash sum mismatch" error)
Reported-By: Stefan Lippers-Hollmann <s.L-H@gmx.de>
Fix handling of autoclosing for compressed files (Closes: #741685)
AutoClose is both an argument in OpenDescriptor() and an enum. In
commit 84baaae93badc2da7c1f4f356456762895cef278 code using the AutoClose
parameter was moved to OpenDescriptorInternal(). In that function,
AutoClose meant the enum value, so the check was always false.
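A minimal illustration of the pitfall (not the actual FileFd code; the names
are reused only to mirror the description above):
enum OpenMode { ReadOnly, WriteOnly, AutoClose };
static bool OpenDescriptorInternal(int const Fd, OpenMode const Mode)
{
   // Intended: "did the caller request auto-closing?", but the bool
   // parameter of that name stayed behind in OpenDescriptor(), so the
   // unqualified name now refers to the enum value and, for a plain
   // ReadOnly or WriteOnly open, this condition is always false.
   bool const CloseOnDestruction = (Mode == AutoClose);
   (void)CloseOnDestruction; // ... would mark Fd to be closed later ...
   return Fd >= 0;
}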
use the pretty fullname of a pkg as download description
Otherwise the "WARNING: The following packages cannot be authenticated!"
message does not include the architecture of the package, so it would be
slightly misinformative.
We have had xz/lzma support for a while, but only via an external binary
provided by xz-utils. Now that the Debian archive provides xz by default
and dpkg pre-depends on the library provided by liblzma-dev, we can now
switch to using this library as well to avoid requiring an external binary.
For now the binary is in a prio:required package, but this might change
in the future.
API-wise it is quite similar to the bz2 code, except that it doesn't provide
file I/O methods, so we piece this together on our own.
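A rough sketch of the kind of glue this requires, assuming liblzma's
lzma_auto_decoder()/lzma_code() interface; this is only an outline of the
idea, not apt's FileFd implementation:
#include <lzma.h>
#include <unistd.h>
#include <cstdint>
struct XzReader
{
   int Fd;
   lzma_stream Strm = LZMA_STREAM_INIT;
   uint8_t InBuf[4096];
   bool Open(int const fd)
   {
      Fd = fd;
      // accept both .xz and legacy .lzma streams
      return lzma_auto_decoder(&Strm, UINT64_MAX, 0) == LZMA_OK;
   }
   // Returns the number of bytes written to Out, 0 at end of stream,
   // -1 on error.
   ssize_t Read(uint8_t * const Out, size_t const OutLen)
   {
      Strm.next_out = Out;
      Strm.avail_out = OutLen;
      while (Strm.avail_out == OutLen)
      {
         if (Strm.avail_in == 0)
         {
            ssize_t const Got = read(Fd, InBuf, sizeof(InBuf));
            if (Got < 0)
               return -1;
            Strm.next_in = InBuf;
            Strm.avail_in = static_cast<size_t>(Got);
         }
         lzma_ret const Ret = lzma_code(&Strm, Strm.avail_in == 0 ? LZMA_FINISH : LZMA_RUN);
         if (Ret == LZMA_STREAM_END)
            break;
         if (Ret != LZMA_OK)
            return -1;
         // LZMA_OK with nothing written yet just means more input is
         // needed: loop again instead of treating it as end-of-file.
      }
      return static_cast<ssize_t>(OutLen - Strm.avail_out);
   }
   void Close() { lzma_end(&Strm); }
};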
The framework can be configured to use different compression algorithms
to test different ones, but a testcase testing for gz support should
always be run with gz, regardless of what compressions are configured
otherwise.
In #737085 we see that apt can be confused if information about versions
differs only slightly. This commit adds a way of at least including a few
more data points with the next ABI break to help a bit with it.
As we deal with regex matchers here, the dots are treated as wildcards if we
don't take care of escaping them. It is not very likely that this could be
a real-world problem, but just to be sure.
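For illustration, the kind of escaping meant here (a hypothetical standalone
helper, not the actual apt code):
#include <string>
// Escape the regex metacharacter '.' in a literal string (such as a kernel
// version like "3.13-1") before splicing it into a matcher pattern, so
// "3.13" no longer also matches "3x13".
static std::string EscapeDots(std::string const &In)
{
   std::string Out;
   for (char const C : In)
   {
      if (C == '.')
         Out += '\\';
      Out += C;
   }
   return Out;
}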
add ".*-{kernel,modules}-$KERVER" matcher for hook
Pre-built kernel modules (like those built with module-assistant) are
commonly named in this way and the pattern should be specific enough to be
added by default for everyone.
kfreebsd as well as hurd kernel packages call the postinst script as well,
so we just need to enable the correct parsing for installed packages and
disable the "protect every version" hammer for them.
use a configurable list of versioned kernel packages
With APT::VersionedKernelPackages users have the option of adding packages
like pre-built out-of-tree modules to the list of packages automatically
protected from being autoremoved.
Wojciech Górski [Mon, 10 Mar 2014 01:07:05 +0000 (02:07 +0100)]
fix polish --install-suggests text in apt-get manpage
The description of the --install-suggests option is wrong in the Polish
apt-get man page. The actual meaning of this option is the opposite of what
is written in the manual.
support very long mtab entries in mountpoint discovery
The old code limited lines to 250 characters, which is probably enough for
everybody, but who knows… The new code also takes care of device nodes which
start with the same prefix.
remove code duplication for Add & Ident in apt-cdrom
The preparation code to deal with auto-detection and co is the same for both
methods, so not sharing it would be bad. This also deals with preventing
side effects triggered by the auto-detection, like disabling mounting for
the fallback.
Commit 62dcbf84 changed the code of ident to look more like the code for add
on my suggestion. This made ident interactive as it starts with an unmount,
press-enter, mount cycle. The first two steps are skipped now.
This fixes d-i/apt-setup, which uses it to get the ID as well as the label.
no error for non-existing mountpoints in MountCdrom
The mountpoint might be auto-generated by the mount command, so pushing an
error onto the stack will confuse the following code and let it believe an
unrecoverable error occurred while potentially everything is okay.
The same goes for umount, as a non-existing mountpoint is by definition not
mounted.
if mountpoint has a ".disk" directory it is mounted
Checking that the parent directory of the mountpoint and the mountpoint
itself are on different devices is fine most of the time, but it is too
restrictive for our testcases, and there shouldn't be anything wrong with
'normal' users copying disk contents around either if they want to.
We now check for the existence of the ".disk/" directory, as it will not be
present if the disk isn't 'mounted'. Disks don't need to have such a
directory though, so for those we fall back to the old way of detecting
mounted or not mounted.
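A rough sketch of the described detection order (a hypothetical helper, not
the apt-cdrom code itself):
#include <string>
#include <sys/stat.h>
static bool SeemsMounted(std::string const &MountPoint)
{
   // If the disk's ".disk/" directory is visible below the mountpoint,
   // treat it as mounted regardless of device numbers.
   struct stat Dir;
   if (stat((MountPoint + "/.disk").c_str(), &Dir) == 0 && S_ISDIR(Dir.st_mode))
      return true;
   // Fall back to the old heuristic: the mountpoint and its parent live
   // on different devices while something is mounted there.
   struct stat Self, Parent;
   if (stat(MountPoint.c_str(), &Self) != 0 ||
       stat((MountPoint + "/..").c_str(), &Parent) != 0)
      return false;
   return Self.st_dev != Parent.st_dev;
}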
do not configure already unpacked packages needlessly
The unpack of an M-A:same package will force the unpack of all its siblings
directly, to prevent that they could be separated by later immediate actions.
In commit 634985f8 a call to SmartConfigure was introduced to configure these
packages at the time the installation order encounters them. Usually the
unpack order is already okay, so this 'earlier' unpack was not needed; had it
not been done, the package would now only be unpacked, but by configuring the
package at this point we impose new requirements which must be satisfied. The
code is clever enough to handle this most of the time (it worked for 2
years!), but it isn't needed and in tightly coupled cases it can fail.
Removing this call again removes this extra burden and so simplifies the
ordering, as can be seen in the modified tests. Famous last words, but I
don't see a reason for this extra burden to exist, hence the removal.
use SPtrArray handling instead of explicit delete[]
The warning message from gcc doesn't make that much sense in my reading, as
there is no loop which could overflow here, but it is better to use our
SPtrArray wrapping anyway, which fixes the warning as well.
warning: cannot optimize loop, the loop counter may overflow [-Wunsafe-loop-optimizations]
delete[] Dsc;
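The idea in a hedged sketch (SPtrArray is apt's own scoped array wrapper from
sptr.h; the constructor shape shown here is from memory and may differ):
#include <apt-pkg/sptr.h>
void Example(unsigned long const Count)
{
   // The wrapper runs delete[] automatically when the scope is left,
   // replacing the explicit "delete[] Dsc;" gcc complained about.
   SPtrArray<int> Dsc(new int[Count]);
   // ... use Dsc[i] as before; no trailing delete[] needed ...
}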
server.cc: In member function ‘bool ServerState::HeaderLine(std::string)’:
server.cc:198:72: warning: format ‘%llu’ expects argument of type ‘long long unsigned int*’, but argument 3 has type ‘long long int*’ [-Wformat=]
else if (sscanf(Val.c_str(),"bytes %llu-%*u/%llu",&StartPos,&Size) != 2)
support DEB_BUILD_PROFILES and -P for build profiles
Inspired by the rest of the patch in 661537, but it abstracts the parsing of
the various ways of setting the build profiles a bit more so it can
potentially be reused and all apt parts have the same behaviour.
In particular, config options, cmdline options and the environment will not
be combined as proposed, as this isn't APT's usual behaviour and dpkg doesn't
do it either, so one overrides the other as it normally does.
Johannes Schauer [Mon, 24 Feb 2014 23:12:20 +0000 (00:12 +0100)]
implement BuildProfileSpec support as dpkg has in 1.17.2
Build-dependencies are now able to include a <profile.foo …> specification
limiting their usage, similar to the already supported [arch …] restrictions.
More details: https://wiki.debian.org/BuildProfileSpec
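As an illustration only (my example, following the syntax named above; see
the linked spec for the exact grammar): a line such as
"Build-Depends: foo, bar <!profile.stage1>, baz [amd64]" would skip bar when
the stage1 profile is active, just as baz is limited to amd64 builds.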
add default and override handling for Cnf::FindVector
Automatically handle the override of list options via their parent value,
which can even be a comma-separated list of values. It also adds an easy
way of providing a default for the list.
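A hedged usage sketch (the exact signature of the new default parameter is
assumed here, and the option name and default list are only examples):
#include <apt-pkg/configuration.h>
#include <string>
#include <vector>
std::vector<std::string> VersionedKernelPackages()
{
   // If the option is unset, the comma-separated default below is used;
   // if the option itself is set to a comma-separated string, that parent
   // value overrides any configured list items.
   return _config->FindVector("APT::VersionedKernelPackages",
                              "linux-image,linux-headers");
}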
It can be useful to have a whole makefile available for vendor setup,
but by providing a basic one we can deal with the simple cases more
easily (and changes to the system are presumably easier).
Prevents "old" dependencies from having an influence on the scoring.
With positive dependencies this is usually not a problem, but negative
dependencies can linger around for a long time.
propagate a negative score point along breaks/conflicts
Versioned -dev packages like db and boost have the problem of having no
dependencies which would give them a competitive advantage over an older
incarnation of the -dev package, so they tend to be kept back until the old
version is removed from the archive, which, if the user has older releases
in their sources, can take a long time (or never happens).
The newer version has a conflicts/breaks against the older one, but the
older one has none against the newer, so by giving the older one a reduced
score via the conflicts, the newer one can win if there is no other reason
to keep it. If both have a conflict against each other the scoring cancels
itself out, so no harm is done.
This gives "action" a slightly bigger edge in breaks/conflicts cases
than before, but holding back isn't a really good solution anyway.