gpgv: Untrust SHA1, RIPE-MD/160, but allow downgrading to weak
Change the trust level check to allow downgrading an Untrusted hash to
Weak via an option (APT::Hashes::SHA1::Weak "yes";), so that it prints
a warning instead of an error; and change the default values
for SHA1 and RIPE-MD/160 from Weak to Untrusted.
Paul Wise [Fri, 25 Nov 2016 09:53:10 +0000 (10:53 +0100)]
show output as documented for APT::Periodic::Verbose 2
The documentation of APT::Periodic::Verbose doesn't match the code:
level 2 should apply some things differently from level 1,
but does not because the script uses `-le 2` instead of `-lt 2` or `-le 1`.
optionally write aptwebserver log to client-specific files
The test test-handle-redirect-as-used-mirror-change serves multiple
clients at the same time, so the order of the output is undefined and
once in a while the two clients will intermix their lines, causing the
grep we perform on the log later to fail and with it our tests.
Solved by introducing client-specific logfiles which we grep individually,
sorting the combined result to make it more stable.
In ad9416611ab83f7799f2dcb4bf7f3ef30e9fe6f8 we fall back to asking the
original mirror (e.g. a redirector) if we do not get the expected
result. This works for the indexes, but patches are a different beast
and much simpler, so adding this fallback code for them seems like
overkill as they usually live right alongside their Index file. Instead,
forward the relevant settings to the patch items, which fixes pdiff
support combined with a redirector and partial mirrors: in such a
situation the pdiff patches would previously be 404 and the complete
index would be downloaded.
add apt-key support for armored GPG key files (*.asc)
Having binary files in /etc is kinda annoying – not that the armored
files are much better – but it is hard to keep tabs on which format the
file has ("simple" or "keybox"), and different gnupg versions have
different default binary formats, which can be confusing for users to
work with (besides the fact that it is binary).
Adding support for this now will enable us in some distant future to
move to armored later on, much like we added trusted.gpg.d years before
the world picked it up.
We report warnings from apt-key this way already since 29c590951f812d9e9c4f17706e34f2c3315fb1f6, so reporting errors seems like
a good addition. Most of those errors aren't really from apt-key
though, but from the code setting up and actually calling it, which used
to just print to stderr and might or might not intermix them with
(other) progress lines in update calls. Having them as proper error
messages in the system means that the errors are actually collected
later on for the list instead of ending up with our relatively generic
but in those cases bogus hint regarding "is gpgv installed?".
The effective difference is minimal as the errors apply mostly to
systems which have far worse problems than a not-so-nice-looking error
message, which makes this pretty hard to test – but at least now the
hint that your system is broken can be read in proper order (= there
aren't many valid cases in which the permissions of /tmp are messed up…).
do not configure unconfigured to be removed packages
At the end we try to configure all packages which need to be configured,
but that also applies to packages which weren't completely installed
(e.g. because a maintainer script failed) and which we end up removing
in this interaction instead.
APT doesn't perform this explicit configure in the end as it is using
"dpkg --configure --pending", but it does confuse the progress report
and potentially also hook scripts.
dpkg stumbles over these (#844300) and we haven't made 'easier' removes
implicit and scheduled by dpkg by default so far, so we shouldn't push
the decision to dpkg in such cases either.
Our old idea was to look for the first package which would be "touched"
and take this as the package dpkg is talking about, but that is
incorrect in complicated situations, like a package upgrade involving
multiple installed M-A:same siblings.
As we use the progress report to decide what is still needed, we have to
be reasonably right about the package dpkg is talking about, so we jump
through quite a few hoops to get it.
Given that we use the progress information to skip over actions dpkg has
already done (like not purging a package which was already removed and
had no config files, or not acting on disappeared packages), it is
important that apt and dpkg agree on which states the package has to
pass through.
To ensure that we keep tabs on this in the future a warning is added at
the end if apt hasn't seen all the actions it was supposed to see. I
can't wait for the first bug reporters to wonder about this…
react to trig-pend only if we have nothing else to do
If a package is triggered, dpkg frequently issues two messages about it,
causing us to make a note about it both times, which messes up our
planned dpkg actions view. Adding these actions only if we have nothing
else planned fixes this and should still be correct, as those planned
actions will deal with the triggering just fine and we avoid strange
problems like a package being triggered before it is removed…
Our profile says we spend about 5% of the time transforming the
hex digits into the binary format used by HashsumValue, all for
comparing them against the other strings. That makes no sense
at all.
According to callgrind, this reduces the overall instruction
count from 5.3 billion to 5 billion in my example, which
roughly matches the 5%.
Generating a string for each version we see is somewhat inefficient.
The problem here is that the Description tag names are longer than
15 bytes, and thus require an allocation on the heap, which we should
avoid.
It seems reasonable that 20 characters works for all language codes
used for archive descriptions, but if not, there's a warning, so
we'll catch that.
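A rough sketch of the idea (a fragment for illustration only; 'lang' and
the pkgTagSection 'Section' are assumed to be in scope, snprintf needs
<cstdio>, and the exact code differs): build the tag name in a small
stack buffer instead of a std::string, since "Description-" plus a
language code exceeds the roughly 15-character small-string optimisation
of common std::string implementations.

  char describe_tag[20];   // 20 chars fit "Description-" + a language code
  snprintf(describe_tag, sizeof(describe_tag), "Description-%s", lang.c_str());
  if (Section.Exists(describe_tag) == true)
  {
     // handle the translated description without touching the heap
  }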
Compare size before data when ordering cache bucket entries
This has the effect of significantly reducing actual string
comparisons, and should improve the performance of FindGrp
a bit, although it's hardly measurable (callgrind says it
uses 10% fewer instructions now).
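The ordering change amounts to something like the following helper
(illustrative only, not the literal cache code): the cheap length
comparison settles most cases, and the byte comparison only runs for
names of equal length.

  #include <cstddef>
  #include <cstring>

  // illustrative: order bucket entries by name length first, then by bytes
  static int CompareBucketEntry(const char *AName, size_t ALen,
                                const char *BName, size_t BLen)
  {
     if (ALen != BLen)
        return (ALen < BLen) ? -1 : 1;
     return memcmp(AName, BName, ALen);
  }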
Optimize VersionHash() to not need temporary copy of input
Stop copying stuff, and just pass the bytes one by one to the
newly created AddCRC16Byte. This improves the instruction count
for an update run from 720,850,121 to 455,801,749 according to
callgrind.
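Roughly, the inner loop now looks like this (a simplified sketch, not
the exact VersionHash() code; AddCRC16Byte is the helper introduced
here, isspace_ascii and tolower_ascii_unsafe are apt's ASCII-only ctype
helpers, and 0xFFFF stands in for the initial CRC value):

  unsigned short Result = 0xFFFF;
  for (const char *I = Start; I != End; ++I)
  {
     if (isspace_ascii(*I) != 0)   // whitespace never contributes to the hash
        continue;
     // feed each byte straight into the CRC instead of copying it into a
     // temporary buffer first
     Result = AddCRC16Byte(Result, tolower_ascii_unsafe(*I));
  }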
Introduce tolower_ascii_unsafe() and use it for hashing
This one has some obvious collisions for non-alphabetical characters,
like some control characters also hashing to numbers, but we don't
really have those, and these are hash functions which are not
collision free to begin with.
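The helper itself is essentially a one-liner along these lines (the
exact definition in apt may differ slightly):

  // maps 'A'..'Z' to 'a'..'z' by setting the 0x20 bit; "unsafe" because other
  // characters are mangled too (e.g. '@' becomes '`', some control characters
  // become digits), which is fine inside a hash but not for general lowercasing
  static inline char tolower_ascii_unsafe(char const c)
  {
     return c | 0x20;
  }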
Bump the cache major version for non-backportable changes
We already have two stable series with major version 10, and
the next commits will introduce non-backportable performance
changes that affect the cache algorithms, so we need to bump
the major version now to prevent future problems.
TagSection: Split AlphaIndexes into AlphaIndexes and BetaIndexes
Move the use of the AlphaHash to a new second hash table in
preparation for the arrival of the new perfect hash function.
With the new perfect hash function hashing most of the keys for
us, having 128 slots for a fallback hash function seems enough
and prevents us from wasting space.
We have the last Release file around for other checks, so it's trivial to
look if the new Release file contains a new codename (e.g. the user has
"testing" in the sources and it flipped from stretch to buster).
Such a change can be okay and expected, but it can also be a hint of
problems, so printing a warning if we see it happen seems okay. We can
only print it once anyhow and frontends and co are likely to ignore/hide it.
A suite or codename entry in the Release file is checked against the
distribution field in the sources.list entry that led to the download of that
Release file. This distribution entry can contain slashes in the distribution
field:
deb http://security.debian.org/debian wheezy/updates main
However, the Release file may only contain "wheezy" in the Codename field and
not "wheezy/updates". So a transformation needs to take place that removes the
last / and everything that comes after (e.g. "/updates"). This fails, however,
for valid cases like a reprepro snapshot where the given Codename contains
slashes but is perfectly fine and doesn't need to be transformed. Since that
transformation is essentially just a workaround for special cases like the
security repository, it should first be checked whether the literal Codename
without any transformation is valid, and only if it isn't should the dist be
checked against the transformed one.
This way special cases like security.debian.org are handled and reprepro
snapshots work too.
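Sketched out, the check now works roughly like this (a simplified
stand-in, not the actual CheckDist() signature):

  #include <string>

  // accept the literal dist from sources.list first; only if that fails,
  // strip the last '/' component (e.g. "wheezy/updates" -> "wheezy") and retry
  static bool MatchesDist(std::string const &Dist,
                          std::string const &Suite, std::string const &Codename)
  {
     if (Dist == Suite || Dist == Codename)
        return true;
     std::string::size_type const Slash = Dist.rfind('/');
     if (Slash == std::string::npos)
        return false;
     std::string const Transformed = Dist.substr(0, Slash);
     return Transformed == Suite || Transformed == Codename;
  }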
The initial patch was taken as inspiration to move the whole transformation
into CheckDist(), which makes this method more accepting & easier to use
(but according to codesearch.d.n we are the only users anyhow).
Thanks: Lukas Anzinger for the initial patch
Closes: 644610
add hidden config to set packages as Essential/Important
You can pretty much achieve the same with a local dummy package if you
want to, but libapt has an inbuilt setting for essential: "apt" which
can be overridden with this option as well – it could be helpful in
quick tests and what not so adding this alternative shouldn't really
hurt much.
We aren't going to document them much though, as care must be taken in
regards to the binary caches: they aren't invalidated by config
options alone, so the effects of old settings could still be in them,
similar to the other already existing pkgCacheGen option(s).
Closes: 767891
Thanks: Anthony Towns for initial patch
Edgar Fuß [Fri, 11 Nov 2016 09:20:23 +0000 (10:20 +0100)]
http: clear content before reporting the failure
[Comment from committer:] I have the feeling that the issue itself is
fixed for a while already as nowadays we have testcases involving a
webserver closing the connection on error (look for "closeOnError") and
no even remotely recent reports about it, but moving the content
clearance above the failure report is a valid change and shouldn't hurt.
On Travis CI running tests with code coverage enabled sometimes
generates lines like:
profiling:/path/to/file.gcda:Merge mismatch for function 257
It would be nice if we could resolve this somehow as it garbles the
statistics, but until then it is far more annoying that this causes
test failures for no good reason.
The 0.0.0.0:0 Tor reports is pretty useless by itself, but even if an IP
were reported it would be better to show the user the hostname we wanted
the proxy to connect to in the same error message. We improve upon it
further by looking for this bind address in particular and remapping error
messages slightly to give users a better chance of figuring out what
went wrong. Upstream Tor can't do that as it would be technically wrong.
rename Checksum-FileSize to Filesize in hashsum mismatch
Some people do not recognize the field value with such an arcane name
and/or expect it to refer to something different (e.g. #839257).
We can't just rename it internally, as the current name is an avoidance
strategy: such a fieldname existed previously with less clear semantics.
But we can spare the general public from this implementation detail.
reset HOME, USER(NAME), TMPDIR & SHELL in DropPrivileges
We can't clean up the environment like e.g. sudo would do, as you usually
want the environment to "leak" into these helpers, but some variables
like HOME should really not still have the value of the root user – it
could confuse the helpers (USER) and HOME isn't accessible anyhow.
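A minimal sketch of the environment part, assuming the passwd entry of
the target user has already been looked up (the real DropPrivileges()
does considerably more than this):

  #include <cstdlib>
  #include <pwd.h>

  static void ResetEnvironment(struct passwd const *pw)
  {
     setenv("HOME", pw->pw_dir, 1);      // the root $HOME isn't accessible anyway
     setenv("USER", pw->pw_name, 1);
     setenv("USERNAME", pw->pw_name, 1); // some helpers look at USERNAME instead
     setenv("SHELL", pw->pw_shell, 1);
     setenv("TMPDIR", "/tmp", 1);        // a root-only TMPDIR would be unusable
  }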
These new enum values might cause "interesting" behaviour in tools not
expecting them – like an old apt would think a Build-Conflicts-Arch is
some sort of Build-Depends – but that can't reasonably be avoided and
affects only packages using B-D/C-A, so if there is any breakage the
tools can easily be adapted.
The APT_PKG_RELEASE number is increased so that libapt users can detect
the availability of these new enum fields via:
#if APT_PKG_ABI > 500 || (APT_PKG_ABI == 500 && APT_PKG_RELEASE >= 1)
fix testcase expecting incorrect remove log from dpkg
dpkg 1.18.11 includes:
* Do not log nor print duplicate dpkg removal action. We print
“Removing <package> (<version>)” lines and log remove action twice
when purging a package from frontends, because they usually first call
--remove and then --purge sequentially. When purging a package which is
already in config-files (i.e. it has been removed before), do not print
nor log the remove action.
Michael Vogt [Wed, 9 Nov 2016 14:09:44 +0000 (15:09 +0100)]
Do not (re)start "apt-daily.system"
This unit runs unattended-upgrades. If apt itself is part of the
upgrade a restart of the unit will kill unattended-upgrades. This
will lead to an inconsistent dpkg status.
A user relying on the deprecated behaviour of apt-get to accept a source
with an unknown pubkey to install a package containing the key expects
that the following 'apt-get update' causes the source to be considered
as trusted, but in case the source hadn't changed in the meantime this
wasn't happening: The source kept being untrusted until the Release file
was changed.
This only affects sources not using InRelease and only apt-get; the apt
binary downright refuses this course of action, but it is a common way
of adding external sources.
don't install new deps of candidates for kept back pkgs
In effect this is an extension of the 6-year-old commit
a8dfff90aa740889eb99d00fde5d70908d9fd88a, which uses the autoremover to
remove packages from the solution again which are no longer needed to be
there. Commonly these are dependencies of packages we end up not
installing due to problem resolver decisions. Slightly less common is
the situation we deal with here: a package which we wanted to upgrade
sporting a new dependency, but which ended up being held back.
The problem is that all versions of an installed reverse dependency can
bring back a "garbage" package – we need to do this as there is
nothing inherently wrong in having garbage packages installed or upgrading
them, which would themselves have garbage dependencies – so just blindly
killing all new garbage packages would prevent the upgrade (and actually
generate errors). What we should be doing is looking only at the version
we will have on the system, disregarding all old/new reverse dependencies.
travis: Move codecov from after_success to after_script
We always want to run the codecov step, even if there are spurious
test failures. We should really work around those failures more, though;
it is starting to annoy me.
don't leak FD in AutoProxyDetect command return parsing
The switch of the code from popen() to Popen() accidentally made the
proxy autodetection code also read the script's output on stderr, not
only on stdout.
VersionHash: Do not skip too long dependency lines
If the dependency line does not contain spaces in the repository
but does in the dpkg status file (because dpkg normalized the
dependency list), the dpkg line might be longer than the line
in the repository. If it now happens to be longer than 1024
characters, it would be skipped, causing the hashes to be
out of date.
Note that we have to bump the minor cache version again as
this changes the format slightly, and we might get mismatches
with an older src cache otherwise.
This allows fully automated code coverage testing, which is
basically awesome. To allow the methods and solvers and stuff
which run as _apt to write to our build directory, we need to
adjust the permissions a bit, but otherwise it's OK.
We need to ignore messages from gcov. All those messages
start with profiling: and are printed using vfprintf(), so
the only thing we can do is add a library overriding those
functions and linking apt-pkg to it.
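One possible shape of such an interposer, as a hedged sketch of the
technique rather than the exact library used here: the overriding
definition swallows everything starting with "profiling:" and forwards
the rest to the next vfprintf in the symbol lookup chain (link with -ldl
on older glibc).

  #include <cstdarg>
  #include <cstdio>
  #include <cstring>
  #include <dlfcn.h>

  // interpose vfprintf so that libgcov's "profiling:..." complaints vanish
  extern "C" int vfprintf(FILE *stream, const char *format, va_list ap)
  {
     if (strncmp(format, "profiling:", strlen("profiling:")) == 0)
        return 0;
     using real_t = int (*)(FILE *, const char *, va_list);
     static real_t const real =
        reinterpret_cast<real_t>(dlsym(RTLD_NEXT, "vfprintf"));
     return real(stream, format, ap);
  }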
test: Always install dpkg into our tests, regardless of MA
Even if we only configure a single architecture, install dpkg, so
dpkg can assert multi arch correctly. This also has the nice side
effect of making single architecture and multiple architecture
test cases more uniform.
edsp: try 2 to read responses even if writing failed
Commit b60c8a89c281f2bb945d426d2215cbf8f5760738 improved the situation,
but due to an inconsistency mostly for planners, not for solvers. As the
idea of hiding errors if we show another error is a bit scary (the
extern error might be a followup of our intern error, rather than the
reason for our intern error as it is at the moment), we don't discard the
errors: if we got an extern error we show them directly, removing
them from the error list at the end of the run – that list will contain
the extern error, which hopefully gives us the best of both worlds.
The problem itself is the same as before: The externals exiting before
apt is done talking to them.
Commit 3af3ac2f5ec007badeded46a94be2bd06b9917a2 (released in 1.3~pre1)
implements proper fallback for SRV, but that actually works too well,
as the RFC defines that such an SRV record should indicate that the
server doesn't provide this service and apt should respect this.
The solution is hence to fail again as requested even if that isn't what
the user (and perhaps even the server admins) wanted. At least we will
print a message now explicitly mentioning SRV to point people in the
right direction.
acquire: Use priority queues and a 3 stage pipeline design
Employ a priority queue instead of a normal queue to hold
the items, and only add items to the running pipeline if
their priority is the same as or higher than the priority
of the items already in the queue.
The priorities are designed for a 3 stage pipeline system:
In stage 1, all Release files and .diff/Index files are fetched. This
allows us to determine what files remain to be fetched, and thus
ensures a usable progress reporting.
In stage 2, all Pdiff patches are fetched, so we can apply them
in parallel with fetching other files in stage 3.
In stage 3, all other files are fetched (complete index files
such as Contents, Packages).
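Illustrated with made-up names rather than the acquire system's actual
types, the ordering boils down to something like this:

  #include <queue>
  #include <vector>

  // higher value = fetched earlier; mirrors the three stages described above
  enum class FetchPriority { CompleteIndex = 1, PdiffPatch = 2, MetaIndex = 3 };

  struct QueuedItem
  {
     FetchPriority Priority;
     // ... the actual item data ...
  };

  struct ByPriority
  {
     bool operator()(QueuedItem const &a, QueuedItem const &b) const
     {
        return a.Priority < b.Priority;   // top() yields the highest priority
     }
  };

  using ItemQueue =
     std::priority_queue<QueuedItem, std::vector<QueuedItem>, ByPriority>;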
Performance improvements, mainly from fetching the pdiff patches
before complete files, so they can be applied in parallel:
For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update
of Debian unstable and testing with Contents and appstream for
amd64 and i386, update time reduced from 37 seconds to 24-28
seconds.
Previously, apt would first download new DEP11 icon tarballs
and metadata files, causing the CPU to be idle. By fetching
the diffs in stage 2, we can now patch our contents and Packages
files while we are downloading the DEP11 stuff.
CMake: test/libapt: Use a prebuilt GTest library if available
If a non-existing source directory is specified, try finding
the system gtest library. Debian derived distributions are
a bit strange because they only ship the source code and
not the library...
support long keyid and fingerprint in gpgv's GOODSIG
In gpgv1 GOODSIG (like the other status-fd messages) is documented as
sending the long keyid. In gpgv2 it is documented to be either the long
keyid or the fingerprint. At the moment it is still the long keyid, but
the documentation hints at the possibility of changing this.
We care about this for Signed-By support, as this is how we detect
whether the right fingerprint has signed this file (or not). The check
itself is done via VALIDSIG, which is always a fingerprint, but a
GOODSIG must also be found for the signature to be accepted (as expired
sigs are valid, too) – and that GOODSIG wouldn't be found in the
fingerprint case, so the signature would be refused.
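The acceptance check can therefore be tolerant about which form GOODSIG
carries, roughly along these lines (a sketch, not the literal
gpgv-method code):

  #include <string>

  // 'goodsig' is the key from GOODSIG (long keyid or full fingerprint),
  // 'validsig' the full fingerprint from VALIDSIG; a long keyid is simply
  // the last 16 hex digits of the fingerprint
  static bool IsTheSameKey(std::string const &goodsig, std::string const &validsig)
  {
     if (goodsig.length() == 16 && validsig.length() > 16)
        return validsig.compare(validsig.length() - 16, 16, goodsig) == 0;
     return goodsig == validsig;
  }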