James McCoy [Wed, 20 Apr 2016 02:27:21 +0000 (22:27 -0400)]
deb822: Restore support for <multivalue>-{Add,Remove}
Redesign of multivalue options in 463c8d801595ce5ac94d7c032264820be7434232
caused the parser to look for <multivalue>{Add,Remove} (no hyphen)
instead of the expected <multivalue>-{Add,Remove}.
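For illustration, a deb822 sources stanza using the hyphenated form that is expected to work again (URI, suite and architectures are just examples):

    Types: deb
    URIs: http://deb.debian.org/debian
    Suites: unstable
    Components: main
    Architectures-Add: armel
    Architectures-Remove: i386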
factor out Pkg/DepIterator prettyprinters into own header
The old prettyprinters only have access to the struct they pretty print,
which usually isn't enough as for a package we also want to know a bit
of state information like which version is the candidate.
We therefore need to pull the DepCache into context and hence use a
temporary struct which is printed instead of the iterator itself.
This method does not return the 'current' candidate of the DepCache,
which would be the most expected behaviour, but instead returns the
version which would be the candidate in a default-only policy setting
– aka ignoring apt_preferences settings and co.
respect user pinning in M-A:same version (un)screwing
Using Pkg.CandVersion() here is wrong as its implementation will return
a candidate based just on the default policy settings, ignoring user
preferences and otherwise set candidates (aka: it sidesteps the
pkgDepCache).
This causes M-A:same libraries to be detected as screwed even though
they aren't, so that they end up being kept back.
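As a minimal sketch of the difference (assuming an initialised pkgDepCache; not the actual patch):

    #include <apt-pkg/depcache.h>

    void ShowCandidates(pkgDepCache &Cache, pkgCache::PkgIterator const &Pkg)
    {
       // default-policy candidate: ignores pinning and SetCandidateVersion
       const char * const DefaultCand = Pkg.CandVersion();
       // the candidate the pkgDepCache actually operates on
       pkgCache::VerIterator const Cand = Cache[Pkg].CandidateVerIter(Cache);
       (void)DefaultCand; (void)Cand;
    }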
If the file is in a failed state there is no point in trying to flush
out the buffers, as the file is to be discarded anyhow and all this
flushing is likely to produce is additional error messages.
Patrick Cable [Wed, 27 Apr 2016 20:55:55 +0000 (16:55 -0400)]
refactored no_proxy code to work regardless of where https proxy is set
When using the https transport mechanism, $no_proxy is ignored if apt is
getting its proxy information from $https_proxy (as opposed to
Acquire::https::Proxy somewhere in the apt config). If the source of
proxy information is Acquire::https::Proxy set in apt.conf (or
apt.conf.d), then $no_proxy is honored.
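A sketch of the intended behaviour (names are illustrative, not the code from the https method):

    #include <cstdlib>
    #include <string>

    // pick the proxy from apt configuration first, the environment second,
    // and apply the $no_proxy bypass in both cases – previously the bypass
    // was only reached when the proxy came from the configuration
    std::string ProxyFor(std::string ConfigProxy, bool HostMatchesNoProxy)
    {
       std::string Proxy = ConfigProxy; // e.g. Acquire::https::Proxy
       if (Proxy.empty())
       {
          char const * const env = std::getenv("https_proxy");
          if (env != nullptr)
             Proxy = env;
       }
       if (HostMatchesNoProxy) // host found in $no_proxy
          return "";
       return Proxy;
    }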
It would previously return a pin of 0, which is an invalid value, but
the intent is that versions which are only in the dpkg/status file can't
be selected for installation (= can't be a candidate), which is what a
negative pin assures.
This helps with the communication to EDSP solvers as they neither know
about the rc-state (yet) nor that they shouldn't choose this version.
Ideally they shouldn't be told about such versions at all as there is
nothing to be solved here, but we will get there eventually.
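As a toy illustration of the pin part above (constants invented for the example):

    // a pin of 0 merely means "no pin set" and such a version could still
    // become the candidate – a negative pin can never win
    short PinFor(bool OnlyInDpkgStatus, short ConfiguredPin)
    {
       if (OnlyInDpkgStatus)
          return -1; // never a candidate, e.g. rc-state versions
       return ConfiguredPin;
    }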
use the same redirection mirror for all index files
Redirection services like httpredir.debian.org tend to use a set of
mirrors from which they pick a mirror at "random" for each requested
file, which is usually beneficial for the download of debs, but for the
index files this can quickly cause problems (aka hashsum mismatches) if
the two (or more) mirrors involved are only slightly out-of-sync.
This commit "resolves" this issue by using the mirror we ended up using
to get the (signed) Release file directly to get the index files
belonging to this Release file instead of asking the redirection
service which eliminates the risk of hitting out-of-sync mirrors.
As an obvious downside the redirection service can't serve partial
mirrors anymore for indexes and the download of indexes indexed in the
same Release file can't be done in parallel (from different mirrors).
This does not affect the download of non-index files like deb-files, as
out-of-sync mirrors aren't a huge problem there, so the parallel
download outweighs a potential 404 error (also because a 404 causes no
erroneous downloads, while hashsum mismatches download the entire file
before finding out that it was pointless).
The rationale for this is that indexes are relative to the Release file.
If we were talking about a HTML page including images, such a
behaviour would be obvious and intended – not doing it means in the best
case a bunch of "useless" requests which will all be answered with a
redirect.
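Conceptually the URI construction becomes something like this sketch (simplified; the real logic lives in the acquire system):

    #include <string>

    // Release was requested via the redirector but ended up at, say,
    // http://mirror.example.org/debian/dists/sid/Release – build all
    // index URIs from that final base instead of the redirector URI
    std::string IndexURI(std::string const &FinalReleaseURI, std::string const &IndexPath)
    {
       std::string const Base = FinalReleaseURI.substr(0, FinalReleaseURI.rfind('/') + 1);
       return Base + IndexPath; // e.g. "main/binary-amd64/Packages.xz"
    }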
show more details for "Writing more data" errors, too
They are the small brothers of the hashsum mismatch, so they deserve a
similar treatment, even though for architectural reasons we don't have
as much to display as for hashsum mismatches for now.
Users tend to report these errors with just this error message… not very
actionable and hard to figure out if this is a temporary or 'permanent'
mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kind of bugs – and eventually we will have less of them
via by-hash.
The subheaders aren't marked for translation for now as they are
technical gibberish and probably easier to deal with if not translated.
After all, our iconic "Hash Sum mismatch" is translated at least.
These additions were proposed in #817240 by Peter Palfrader.
format multiline errors properly in acquire progress
Together with the GlobalError change this allows us to add errors
spanning multiple lines; the difference is that we control GlobalError,
while the acquire progress is potentially dealt with by individual
clients, which might or might not need to be adapted.
This isn't critical though, as it either just works as expected anyhow
or is a minor styling thing (after all, all this commit does is add two
spaces to indent the lines a bit…).
This is a duplicate of sorts of 0efb29eb36184bbe6de7b1013d1898796d94b171,
which is the much more frequent case of this error – and also a
duplicate of this error message, just without the \n at the end.
Calling the (non-existent) reporter multiple times with different codes
for the same error (e.g. hashsum) is a bit strange.
It also doesn't need to be a public API. Ideally that would all look and
behave slightly different, but we will worry about that at the time this
is actually (planned to be) used somewhere…
don't ask server if we have entire file in partial/
We have this situation in cases where parts of the transaction are
refused (e.g. in a hashsum mismatch) and we rerun the update (e.g. in
the hope that we get a mirror which is synced this time).
Previously we would ask the server with an if-range and in the best case
receive a 416 in response (a less featureful server might end up giving
us the entire file again, or we get the wrong file this time, giving us
a hashsum mismatch…), which is a waste of time if we already know by
checking the hashsums that we got the complete and correct file.
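The check boils down to something like this sketch (helper names invented):

    #include <string>

    // skip the conditional request entirely when the file in partial/
    // already has the size and hashes the transaction expects
    bool NeedServerRoundtrip(unsigned long long PartialSize, unsigned long long ExpectedSize,
                             std::string const &PartialHash, std::string const &ExpectedHash)
    {
       // previously apt always sent an If-Range request and hoped for a 416
       return PartialSize != ExpectedSize || PartialHash != ExpectedHash;
    }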
Queues feeding workers like rred are created in a random pattern to get
a few of them to run in parallel – but if we already have an idling queue
we don't need to assign the job to a (potentially new) random queue:
reusing the idle queue saves us the (arguably small) overhead of starting
up a new queue, avoids adding jobs to an already busy queue while others
idle, and as a bonus reduces the size of debug logs a bit.
We also keep starting new queues now until we reach our limit before
we assign work at random to them, which should give us a more effective
utilisation overall compared to potentially adding work to busy queues
while we haven't reached our queue limit yet.
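The selection logic amounts to roughly this sketch (names invented; the real code is in pkgAcquire):

    #include <cstddef>
    #include <string>
    #include <vector>

    struct Queue { std::string Name; bool Idle; };

    // prefer an idle existing queue, then start new queues up to the limit,
    // and only then assign at random between the (busy) existing ones
    std::string PickQueue(std::vector<Queue> const &Queues, size_t QueueLimit,
                          std::string const &FreshName, size_t RandomPick)
    {
       for (auto const &Q : Queues)
          if (Q.Idle)
             return Q.Name;   // reuse: no startup cost, no idling workers
       if (Queues.size() < QueueLimit)
          return FreshName;   // keep starting queues until the limit is reached
       return Queues[RandomPick % Queues.size()].Name;
    }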
allow uncompressed files to be empty in store again
With the previous fix for file applied we can again hit repositories
which contain uncompressed empty files, which, since the introduction of
the central store: method, weren't accounted for anymore as we forbid
empty compressed files.
A silly off-by-one error in the stripping of the extension to check for
the uncompressed filename was broken in an attempt to support all
compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't readonly or similar).
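The class of bug looks roughly like this (a reconstruction, not the literal code from the commit):

    #include <string>

    // derive the uncompressed filename by stripping the extension
    std::string StripExtension(std::string const &File, std::string const &Ext /* ".gz" */)
    {
       // off-by-one: an extra -1 also eats the last character of the basename
       //   return File.substr(0, File.length() - Ext.length() - 1);
       return File.substr(0, File.length() - Ext.length());
    }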
recheck Pre-Depends satisfaction in SmartConfigure
Regression introduced in commit 590f1923121815b36ef889033c1c416a23cbe9a2
(2011!) causing apt to not check if Pre-Depends are satisfied before
calling --configure. This managed to hide so perfectly well for years as
Pre-Depends aren't that common, apt prefers upgrading these packages
first and a check for satisfaction is already done in SmartUnpack, so
there is only a small window of opportunity to break a pre-dependency
relation (usually with an unpack).
Verified by logchecking with two provided status files in the buglog.
I would have liked to write a test, but I wasn't able to reach the needed
complexity to get apt to fail – but the change is small and reasonable,
so what could possibly go wrong™, right?
It is handy to be able to point apt at reading a compressed dpkg/status
file in debugging cases, which worked pre-1.1 but was lost somewhere down
the line in the massive refactoring. This restores the behavior in a
central place for all realfile index files instead of just for the
status file.
(This has no effect on index files acquired from an archive – those are
handled by different classes and support compressed files just fine)
There is a good chance that the attempt will fail, but if a user
mentions certain packages explicitly on the commandline there is a
chance that this will consist of a broken system which is resolved
by upgrading more packages than just the mentioned ones.
This limitation was not affecting external resolvers.
With the previous commit we track the state of transactions, so we can
now use our knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes are interacting in the
process, so there isn't a simple immediate full-engine stop, and it would
also be bad to teach each and every item how to check if its manager
has failed and what to do in that case.
In the pdiff case, which (potentially) deals with many items during its
lifetime, e.g. a hashsum mismatch in another file can abort the
transaction the file we are trying to patch via pdiff belongs to. This
causes some of the items (which are already done) to be aborted with it,
but items still in the process of acquisition continue processing and
will later try to use all the items together, failing in strange ways as
cleanup has already happened.
The chosen solution is to dry up the communication channels instead by
ignoring new requests for data acquisition, canceling requests which are
not assigned to a queue and not calling Done/Failed on items anymore.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further so the next update command will
pick them up and put them to good use while the current process fails
updating (for this transaction group) in an orderly fashion.
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
We want to keep track of the state of a transaction overall to base
future decisions on it, but as a pre-requirement we have to make sure
that a transaction isn't committed twice (which happened if the download
of InRelease failed and Release took over).
It also happened to create empty commits after a transaction was already
aborted in cases in which the Release files were rejected.
This isn't affecting security at the moment, but to ensure this isn't
happening again and can never turn bad, a bunch of fatal error messages
are added to make regressions on this front visible.
Hardly noticeable, but given that we have the option to easily enable
it, let's enable it, as every newline in the message is written
individually by the code.
Some methods had it missing, some used the keyword directly, which isn't
a problem as it is a cc file, but for consistency let's stick to our
macro for now.
Michael Vogt [Thu, 17 Mar 2016 07:56:58 +0000 (08:56 +0100)]
Use systemd.timer instead of a cron job
The rationale is that we need to spread the load on the mirrors
that apt update and unattended-upgrades cause. To do so, we
leverage the RandomizedDelaySec feature of systemd. The other advantage
is that the timer is not run at a fixed cron.daily time but
instead every 24h. This also fixes the problem that the randomized
delay in the current apt.cron.daily causes other cron jobs to
be delayed.
A compatibility cron job is also provided for systems that do not
use systemd.
Note that the timer is fired two times a day, but the logic inside
of apt.systemd.daily will ensure (via stamp files) that the
servers are hit at most every 24h. Firing two times a day helps
with the worst-case update time and it also helps with systems
that are not always on.
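A sketch of such a timer unit (values illustrative; the shipped apt-daily.timer may differ):

    [Unit]
    Description=Daily apt activities

    [Timer]
    OnCalendar=*-*-* 6,18:00
    RandomizedDelaySec=12h
    Persistent=true

    [Install]
    WantedBy=timers.target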
test-apt-download-progress: Use a larger file for testing
This should make the test less flaky, as with a small file we might
have already received all the data before trying to apply rate limits,
which is a constant source of failure on the i386 Ubuntu autopkgtest.
Do not mark packages for keep that we want to remove
If the package is marked for removal, keep it marked for
removal and do not mark it for keep. If we mark it for keep,
we somehow later get to a different stage where it is marked
for unpack instead of removal.
In the example in the bug report, we would get a:
SmartUnPack maas-region-controller-min:amd64 (replace version 2.0.0~alpha3+bzr4810-0ubuntu1 with Segmentation fault
maas-region-controller-min:amd64 was marked for removal, but
we changed it to keep and somehow it thinks that this is to
be replaced now instead of removed (probably because the
InstallVer != CandidateVer [with InstallVer = 0]).
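The guard amounts to something like this sketch against the pkgDepCache API (not the exact hunk):

    #include <apt-pkg/depcache.h>

    void HoldBack(pkgDepCache &Cache, pkgCache::PkgIterator const &Pkg)
    {
       if (Cache[Pkg].Delete() == true)
          return;                     // keep it marked for removal
       Cache.MarkKeep(Pkg, false, false);
    }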
Our own gpgv method can declare a digest algorithm as untrusted and
handles these as worthless signatures. If gpgv comes with an inbuilt
list of untrusted algorithms (which are called weak in official
terminology), which it e.g. does for MD5 in recent versions, we should
handle them in the same way.
To check this we use the most uncommon still fully trusted hash as a
configurable one via a hidden config option to toggle through all of
the three states a hash can be in.
Using erase(pos) is invalid in our case here as pos must be a valid and
dereferenceable iterator, which isn't the case for an end-iterator (like
if we had no good signature).
The problem runs deeper still though, as VALIDSIG is a keyid while
GOODSIG is just a longid, so comparing them will always fail.
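The iterator rule in question, reduced to a standalone example (std::vector stands in for the container used by the method):

    #include <algorithm>
    #include <string>
    #include <vector>

    void DropGoodSignature(std::vector<std::string> &Signers, std::string const &Good)
    {
       auto const Pos = std::find(Signers.begin(), Signers.end(), Good);
       // erase(pos) requires a valid, dereferenceable iterator – calling it
       // with end() (no good signature found) is undefined behaviour
       if (Pos != Signers.end())
          Signers.erase(Pos);
    }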
tests: reenable basic auth test and add @ in username
On launchpad #1558484 a user reports that the parsing of @ in the
authentication tokens of sources.list isn't working in an older
(precise) version. It isn't the recommended way of specifying passwords
and co (auth.conf is), but we can at least test for regressions (and in
this case test at all… who was that "clever" boy disabling a test with
exit……… oh, nevermind.
cachefile: Only set members that were initialized successfully
Otherwise, things will just start failing later down the stack,
because (a) the lazy getters do not check if building was successful
and (b) any further getter call would return the invalid object
anyway.
Also initialize VS in pkgCache to nullptr by default.
test framework: Pass -n to lsof to speed up finding the https port
There is no point in resolving all addresses to their names, this
just seriously slows the setup phase down. So just pass -n to not
resolve names anymore.
The test is a bit flaky. In order to make it less flaky, reduce
the speed in each run. To compensate for issues, start with a
higher speed level. Also increase the number of runs to 10.
Furthermore, http gets the same multiple-run loop, and the log
files are changed to indicate the protocol being tested, as it's
not obvious which one fails if it fails in quiet mode.
The epoch stripping in this code has been done since day one, but in
other places where we show a version, epochs are not stripped. If epochs
are present in packages they tend to be important information which we
can't just drop, and especially can't drop "sometimes", as that confuses
users and tools alike – so even if removing code in use for (close to)
18 years feels wrong, it is probably the right choice for consistency.
Michael Vogt [Tue, 15 Mar 2016 13:50:37 +0000 (14:50 +0100)]
Get accurate progress reporting in apt update again
For the non-pdiff case, we can have accurate progress
reporting because after fetching the {,In}Release files we know
how many IndexFiles will be fetched and what size they have.
Therefore init the filesize early (in pkgAcqIndex::Init) and
ensure that Acquire::Pulse() looks at already downloaded
bits when calculating the progress.
Also improve the debug output of Debug::acquire::progress.
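In effect the percentage calculation becomes something like (a simplification of what Pulse does):

    // already fetched bytes of finished items plus the partial bytes of
    // running downloads, over the total expected bytes
    double Percent(unsigned long long FetchedBytes,
                   unsigned long long PartialBytes,
                   unsigned long long TotalBytes)
    {
       if (TotalBytes == 0)
          return 0;
       return 100.0 * static_cast<double>(FetchedBytes + PartialBytes) / TotalBytes;
    }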
Michael Vogt [Tue, 15 Mar 2016 12:13:54 +0000 (13:13 +0100)]
Fix bug where the problemresolver can put a pkg into a heisenstate
The problemresolver will set the candidate version for pkg P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However it did not set the State of
the package back as well, which led to a situation where P is
neither in Keep, Install, Upgrade nor Delete state.
Note that this can not be tested via the traditional sh based
framework. I added a python-apt based test for this.
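Conceptually the fix pairs the candidate reset with a state reset, along these lines (a sketch, not the literal patch):

    #include <apt-pkg/depcache.h>

    void RevertToCurrent(pkgDepCache &Cache, pkgCache::PkgIterator const &Pkg)
    {
       // resetting the candidate alone left P in no state at all …
       Cache.SetCandidateVersion(Pkg.CurrentVer());
       // … so also put it back into a defined (Keep) state
       Cache.MarkKeep(Pkg, false, false);
    }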
methods/gpgv: Correctly handle weak signatures with multiple keys
We added weak signatures to BadSigners, meaning that a Release file
signed by both a weak signature and a strong signature would be
rejected, preventing people from migrating from DSA to RSA keys
in a sane way.
Instead of using BadSigners, treat weak signatures like expired
keys: they are not good signatures, and they are worthless.
ERRSIG is created whenever a key uses an unknown/weak digest
algorithm, for example. This allows us to report a more useful
error than just "unknown apt-key error.":
While still not being the best reportable error message, it's
better than unknown apt-key error and hopefully redirects users
to complain to their repository owners.
"%s can not be marked as it is not installed." was incorrectly
translated as "%s no se puede marcar como no instalado.\n",
which means "%s can not be marked as not installed."
Thanks to Marcos Del Sol Vives for reporting & to the Spanish
translation team – and in particular Camaleón and Venturi –
for review and correction of this issue!
The (unlikely) waitpid failure case should fall through the code just
like the other failures (and successes) instead of taking a shortcut
avoiding all the cleanup (progress) and finishing touches (log, state).
This also delays the cleanup of the progress until apt is really done
with everything and "just" has the post-invokes left to do, so the
period between 'apt looks finished as it stopped the progress' and 'apt
really finished as I have the shell-prompt back' is shorter, even if
there is no progress reported anymore and the bar lingers at 100%…
Ideally even the post-invokes would be covered by progress, but they
can have their own output and dealing with that could be hard.
flush line-clearing on progress stop before post-invoke
All other interactions with std::cout are flushed directly, just in the
stop case we hadn't done it – no problem except if there is still output
coming after apt is done, like in the case of a post-invoke script
producing output.
Iceweasel^WFirefox complains about the missing encoding in its console,
which can be a bit annoying in interactive sessions; fixing these
issues has no effect on apt itself, but it helps the testers.
require $(HASH)-Download field in .diff/Index files
Now that we ignore SHA1-only files it makes sense to also require the
provision of hashes for the compressed patches, as this was introduced
in the same patchset as support for non-SHA1 hashes in the file itself
in dak, and adding support in other archive creators (if they support
pdiffs at all) will likely be in the same batch.
The reason for the change itself is simple: If you are 'scared' enough
about the security of SHA1, you shouldn't uncompress a file you haven't
verified at all – after all, it could be exploiting a bug or be a zip
bomb.
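For reference, a .diff/Index carrying the now-required per-hash Download section looks roughly like this (hashes and sizes elided):

    SHA256-Current: <sha256> <size>
    SHA256-History:
     <sha256> <size> 2016-04-20-0153.31
    SHA256-Patches:
     <sha256> <size> 2016-04-20-0153.31
    SHA256-Download:
     <sha256> <size> 2016-04-20-0153.31.gz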
Given that we refuse to use SHA1-only .diff/Indexes there is no point in
shipping and running code which pretends to check support for it, which,
given that all these tests are run 3 times, eats a noticeable amount of
time.