The code naturally evolved from a setup in which the TransactionManager was
optional to one in which it is required, which resulted in various places
performing unneeded checks that suggest a more complicated setup than is
actually needed.
fix same-mirror redirection for Release{,.gpg} pair
Commit 9b8034a9fd40b4d05075fda719e61f6eb4c45678 only deals with
InRelease properly and generates broken URIs in case the mirror (or the
archive, really) has no InRelease file.
[As this was in no released version there is no need to clutter the
changelog with a fix notice.]
tests: disable generation of Release.gpg by default
Most tests just need a signed repository and don't care whether it is signed
via an InRelease file or a Release.gpg file, so we can save some time by
generating only one of them by default.
Sounds like not much, but it quickly adds up to a few seconds given the
number of tests we have accumulated by now.
allow redirection for items without a space in the desc again
Broken in a4b8112b19763cbd2c12b81d55bc7d43a591d610.
If an item has a description which includes no space and is redirected
to another mirror, the code which wants to rewrite the description
expects a space in there, but can't find it, and the unguarded substr
call on the string fails with a thrown exception…
dpkg can optionally colorize its output since 1.18.5. Currently this
defaults to 'never', but it will eventually be 'auto'. It seems
reasonable to assume that a user who has enabled/disabled colors in apt
will want dpkg to be in the same state regarding color usage.
This isn't overriding explicit settings by the user, so in case a user
feels strongly about it one way or the other there are options.
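As a minimal sketch of the idea – assuming apt's APT::Color option and
dpkg's DPKG_COLORS environment variable (available since dpkg 1.18.5);
the actual wiring in apt may look different:

  #include <cstdlib>

  // Forward apt's color preference to dpkg, but only if the user
  // hasn't configured dpkg explicitly via the environment already.
  static void PropagateColorsToDpkg(bool const AptUsesColors)
  {
     if (std::getenv("DPKG_COLORS") != nullptr)
        return; // an explicit user setting wins
     setenv("DPKG_COLORS", AptUsesColors ? "always" : "never", 0);
  }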
The actual reason for this commit isn't the limit – there isn't much
point in using that much nesting – it's mostly about shutting up gcc:
apt/apt-pkg/contrib/configuration.cc: In function ‘bool ReadConfigFile(Configuration&, const string&, const bool&, const unsigned int&)’:
apt/apt-pkg/contrib/configuration.cc:686:20: warning: cannot optimize loop, the loop counter may overflow [-Wunsafe-loop-optimizations]
string Stack[100];
^
by replacing this with C++'s handy std::stack container (adapter).
Also cleans some whitespace noise from the file in the process.
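For illustration, the shape of the change (simplified):

  #include <stack>
  #include <string>

  // before: std::string Stack[100]; plus a manual depth counter
  // after: an unbounded stack – no arbitrary limit and no loop
  // counter for gcc to worry about
  void TrackScope()
  {
     std::stack<std::string> Stack;
     Stack.push("Acquire");               // entering a config block
     std::string const Tag = Stack.top(); // innermost scope name
     Stack.pop();                         // leaving the block
  }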
warn if apt-key is run unconditionally in maintainerscript
We want to stop hard-depending on gnupg, and for this it is essential
that apt-key isn't used in any critical execution path, which
maintainerscripts are. Especially as it is likely that these scripts call
apt-key only for (potentially now outdated) cleanup or still don't
use the much simpler trusted.gpg.d infrastructure.
apt doesn't need gnupg in its main execution paths to function;
in particular, the Release file verification is done with gpgv only.
gnupg is only used by apt-key for advanced key management functionality
most users will never use nor need.
The intent is to demote it eventually to Suggests, but we opt here for a
staged downgrade as there are still third-party repositories out there
which require apt-key functionality without depending on gnupg (or apt,
for that matter).
support Signed-By in Release files as a sort of HPKP
Users have had the option since apt >= 1.1 to enforce that a Release file is
signed with specific key(s), either via a keyring filename or via fingerprints.
This commit adds an entry with the same name and value (except that it
doesn't accept filenames, for obvious reasons) to the Release file so
that the repository owner can set a default value for this setting
affecting the *next* Release file, not the current one, which provides
functionality similar to "HTTP Public Key Pinning". The pinning is in
effect as long as the (then old) Release file is considered valid, but
it is also ignored if the Release file has no Valid-Until at all.
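Illustratively, a repository could pin its (next) signing key like this –
the fingerprint below is made up:

  Origin: Debian
  Suite: stable
  Valid-Until: Sat, 01 Oct 2016 00:00:00 UTC
  Signed-By: 0123456789ABCDEF0123456789ABCDEF01234567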
We parse the messages we receive into two big categories: most of the
messages have a keyid as well as a userid, and as they are errors we want
to show the userids as well. The other category is also errors, but has
no userid (like NO_PUBKEY). Explicitly expressing this in code should
make it a bit easier to look at, and it also helps in dropping additional
fields or just the newline at the end consistently.
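A minimal sketch of the two categories (names assumed, not the actual
parser):

  #include <string>
  #include <vector>

  struct Signer
  {
     std::string keyid;
     std::string userid; // empty for messages like NO_PUBKEY
  };

  // errors carrying a userid we can show (BADSIG, EXPKEYSIG, …)
  std::vector<Signer> ErrSigners;
  // errors which only know the key (NO_PUBKEY, …)
  std::vector<std::string> NoPubKeySigners;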
don't show NO_PUBKEY warning if repo is signed by another key
Daniel Kahn Gillmor highlights in the bugreport that security isn't
improved by having the user import additional keys – especially as
importing keys securely is hard.
The bugreport was initially about dropping the warning to a notice, but
given the previously mentioned observation and the fact that we
weren't printing a warning (or a notice) for expired or revoked keys
providing a signature, we drop it completely, as the code to display a
message if this was the only key is in another path – and is considered
critical.
Signatures on data can have an expiration date, too, which we previously
hadn't handled explicitly (no problem – gpg still has a non-zero
exit code, so apt notices the invalid signature), so the error message
wasn't as helpful as it could be (aka mentioning the key signing it).
The upstream documentation says about KEYEXPIRED:
"This status line is not very useful". Indeed, it doesn't mention which
key is expired, and suggests using the other message, which does.
This basically introduces ~33 flags in the output, but a package can
have only ~11 of them displayed at the same time. There is quite a bit
of duplication as well (an uninstalled package is by definition a
newinstall if it's getting installed), but as this is debug output we are
better off showing them all in case one of them isn't set in the way it is
supposed to be set.
James McCoy [Wed, 20 Apr 2016 02:27:21 +0000 (22:27 -0400)]
deb822: Restore support for <multivalue>-{Add,Remove}
Redesign of multivalue options in 463c8d801595ce5ac94d7c032264820be7434232
caused the parser to look for <multivalue>{Add,Remove} (no hyphen)
instead of the expected <multivalue>-{Add,Remove}.
factor out Pkg/DepIterator prettyprinters into own header
The old prettyprinters only have access to the struct they pretty print,
which usually isn't enough, as for a package we also want to know a bit
of state information, like which version is the candidate.
We therefore need to pull the DepCache into context and hence use a
temporary struct which is printed instead of the iterator itself.
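A sketch of the pattern (names assumed; apt's actual header may differ):

  #include <ostream>
  #include <apt-pkg/depcache.h>
  #include <apt-pkg/pkgcache.h>

  // temporary struct pulling the DepCache into printing context
  struct PrettyPkg
  {
     pkgDepCache * const Cache;
     pkgCache::PkgIterator const Pkg;
     PrettyPkg(pkgDepCache * const cache, pkgCache::PkgIterator const &pkg)
        : Cache(cache), Pkg(pkg) {}
  };

  std::ostream& operator<<(std::ostream &os, PrettyPkg const &pp)
  {
     pkgDepCache::StateCache const &state = (*pp.Cache)[pp.Pkg];
     os << pp.Pkg.FullName(false) << " < candidate: "
        << (state.CandVersion ? state.CandVersion : "(none)") << " >";
     return os;
  }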
This method does not return the 'current' candidate of the DepCache,
which would be most expected, but instead returns the version which
would be the candidate in a default-only policy setting – aka ignoring
apt_preferences settings and co.
respect user pinning in M-A:same version (un)screwing
Using Pkg.CandVersion() here is wrong as its implementation will return
a candidate based just on the default policy settings, ignoring user
preferences and otherwise set candidates (aka: it sidesteps the
pkgDepCache).
This causes M-A:same libraries to be detected as screwed even though
they aren't, so that they end up being kept back.
If the file is in a failed state there is no point in trying to flush
out the buffers, as the file is to be discarded anyhow and all this
flushing is likely to produce is additional error messages.
Patrick Cable [Wed, 27 Apr 2016 20:55:55 +0000 (16:55 -0400)]
refactored no_proxy code to work regardless of where https proxy is set
when using the https transport mechanism, $no_proxy is ignored if apt is
getting its proxy information from $https_proxy (as opposed to
Acquire::https::Proxy somewhere in the apt config). if the source of proxy
information is Acquire::https::Proxy set in apt.conf (or apt.conf.d),
then $no_proxy is honored.
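A sketch of the intended precedence, with a simple suffix match standing
in for the real $no_proxy handling:

  #include <cstdlib>
  #include <sstream>
  #include <string>

  // true if Host matches one of the comma-separated no_proxy entries
  static bool InNoProxyList(std::string const &Host)
  {
     char const * const list = std::getenv("no_proxy");
     if (list == nullptr)
        return false;
     std::stringstream entries(list);
     std::string entry;
     while (std::getline(entries, entry, ','))
        if (entry.empty() == false && Host.size() >= entry.size() &&
            Host.compare(Host.size() - entry.size(), entry.size(), entry) == 0)
           return true;
     return false;
  }

  // the no_proxy check has to happen regardless of whether the proxy
  // came from apt.conf (Acquire::https::Proxy) or from $https_proxy
  static std::string ProxyFor(std::string const &Host,
                              std::string const &ConfiguredProxy)
  {
     return InNoProxyList(Host) ? "" : ConfiguredProxy;
  }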
It would previously return a pin of 0, which is an invalid value, but
the intent is that versions which are only in the dpkg/status file can't
be selected for installation (= can't be a candidate), which is what a
negative pin assures.
This helps with the communication to EDSP solvers as they neither know
about the rc-state (yet) nor that they shouldn't choose this version.
Ideally they shouldn't be told about such versions at all, as there is
nothing to be solved here, but we will get there eventually.
use the same redirection mirror for all index files
Redirection services like httpredir.debian.org tend to use a set of
mirrors from which they pick a mirror at "random" for each requested
file, which is usually beneficial for the download of debs, but for the
index files this can quickly cause problems (aka hashsum mismatches) if
the two (or more) mirrors involved are only slightly out-of-sync.
This commit "resolves" this issue by using the mirror we ended up using
to get the (signed) Release file directly to get the index files
belonging to this Release file, instead of asking the redirection
service, which eliminates the risk of hitting out-of-sync mirrors.
As an obvious downside the redirection service can't serve partial
mirrors anymore for indexes, and the download of indexes indexed in the
same Release file can't be done in parallel (from different mirrors).
This does not affect the download of non-index files like deb files, as
out-of-sync mirrors aren't a huge problem there, so the parallel
download outweighs a potential 404 error (also because this causes no
erroneous downloads, while hashsum mismatches download the entire file
before finding out that it was pointless).
The rationale for this is that indexes are relative to the Release file.
If we were talking about an HTML page including images, such a
behaviour would be obvious and intended – not doing it means in the best
case a bunch of "useless" requests which will all be answered with a
redirect.
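Conceptually the rewrite is as simple as (sketch; the real code operates
on the acquire items):

  #include <string>

  // fetch an index from the mirror which served the Release file
  // instead of asking the redirector again
  static std::string UseSameMirror(std::string const &IndexURI,
                                   std::string const &RedirectorBase,
                                   std::string const &UsedMirrorBase)
  {
     if (IndexURI.compare(0, RedirectorBase.size(), RedirectorBase) == 0)
        return UsedMirrorBase + IndexURI.substr(RedirectorBase.size());
     return IndexURI;
  }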
show more details for "Writing more data" errors, too
They are the small brothers of the hashsum mismatch, so they deserve
similar treatment, even though for architectural reasons we don't have
as much to display as for hashsum mismatches for now.
Users tend to report these errors with just this error message… not very
actionable, and it is hard to figure out if this is a temporary or
'permanent' mirror-sync issue or even the occasional apt bug.
Showing the involved hashsums and modification times should help in
triaging these kinds of bugs – and eventually we will have less of them
via by-hash.
The subheaders aren't marked for translation for now as they are
technical gibberish and probably easier to deal with if not translated.
After all, our iconic "Hash Sum mismatch" is translated at least.
These additions were proposed in #817240 by Peter Palfrader.
format multiline errors properly in acquire progress
Together with the GlobalError change this allows us to add errors
spanning multiple lines; we control GlobalError, while the
acquire progress is dealt with potentially by individual clients, which
might or might not need to be adapted.
This isn't critical though, as it either just works as expected anyhow
or is a minor styling thing (after all, all this commit does is add two
spaces to indent the lines a bit…).
This is a duplicate of sorts of 0efb29eb36184bbe6de7b1013d1898796d94b171,
which covers the far more frequent case of this error – and is also a
duplicate of this error message, just without the \n at the end.
Calling the (non-existent) reporter multiple times for the same error
with different codes for the same error (e.g. hashsum) is a bit strange.
It also doesn't need to be a public API. Ideally that would all look and
behave slightly differently, but we will worry about that at the time this
is actually (planned to be) used somewhere…
don't ask server if we have entire file in partial/
We have this situation in cases where parts of the transaction are
refused (e.g. in a hashsum mismatch) and the update is rerun (e.g. in the
hope that we get a mirror which is synced this time).
Previously we would ask the server with an If-Range and in the best case
receive a 416 in response (a less featureful server might end up giving us
the entire file again, or we get the wrong file this time, giving us a
hashsum mismatch…), which is a waste of time if we know already, by
checking the hashsums, that we got the complete and correct file.
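In sketch form – ComputeHash is a hypothetical helper, not apt API:

  #include <string>

  // hypothetical: returns e.g. the SHA256 of a file, "" on error
  std::string ComputeHash(std::string const &Path);

  // before sending a ranged request for the file in partial/:
  // if what we already have matches the hashes from the Release
  // file, the download is complete – skip the server entirely
  static bool AlreadyComplete(std::string const &PartialFile,
                              std::string const &ExpectedHash)
  {
     std::string const got = ComputeHash(PartialFile);
     return got.empty() == false && got == ExpectedHash;
  }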
Queues feeding workers like rred are created in a random pattern to get
a few of them to run in parallel – but if we already have an idling queue
we can assign the item to it instead of a (potentially new) random queue,
which saves us the (arguably small) overhead of starting up a new queue,
avoids adding jobs to an already busy queue while others idle, and as
a bonus reduces the size of debug logs a bit.
We also keep starting new queues now until we reach our limit before
we assign work at random to them, which should give us a more effective
utilisation overall compared to potentially adding work to busy queues
while we haven't reached our queue limit yet.
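In pseudocode terms the selection becomes (sketch, not apt's actual data
structures):

  #include <cstddef>
  #include <cstdlib>
  #include <vector>

  struct Queue { bool Idle; };

  // pick a queue for a new item: idle queues first, then grow
  // until the configured limit, only then assign at random
  static Queue* SelectQueue(std::vector<Queue*> &Queues, std::size_t const Limit)
  {
     for (Queue * const q : Queues)
        if (q->Idle)
           return q;
     if (Queues.size() < Limit)
     {
        Queues.push_back(new Queue{false});
        return Queues.back();
     }
     return Queues[std::rand() % Queues.size()];
  }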
allow uncompressed files to be empty in store again
With the previous fix for 'file' applied we can again hit repositories
which contain uncompressed empty files, which hadn't been accounted for
since the introduction of the central store: method, as we forbid
empty compressed files.
A silly off-by-one error in the stripping of the extension to check for
the uncompressed filename was introduced in an attempt to support all
compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5.
Fixing this also highlights mistakes in the handling of the Alt-Filename
in libapt which would cause apt to remove the file from the repository
(if root has the needed rights – aka the disk isn't readonly or similar).
recheck Pre-Depends satisfaction in SmartConfigure
Regression introduced in commit 590f1923121815b36ef889033c1c416a23cbe9a2
(2011!) causing apt not to check if Pre-Depends are satisfied before
calling --configure. This managed to hide so perfectly well for years as
Pre-Depends aren't that common, apt prefers upgrading these packages
first, and a check for satisfaction is already done in SmartUnpack, so
there is only a small window of opportunity to break a pre-dependency
relation (usually with an unpack).
Verified by logchecking with the two provided status files in the buglog.
I would have liked to write a test, but I wasn't able to reach the needed
complexity to get apt to fail – but the change is small and reasonable,
so what could possibly go wrong™, right?
It is handy to be able to point apt at reading a compressed dpkg/status
file in debugging cases, which worked pre-1.1 but was lost somewhere down
the line in the massive refactoring. This restores the behavior in a
central place for all realfile index files instead of just the status file.
(This has no effect on index files acquired from an archive – those are
handled by different classes and support compressed files just fine.)
There is a good chance that the attempt will fail, but if a user
mentions certain packages explicitly on the commandline there is a
chance that this involves a broken system which is resolved by
upgrading more packages than just the mentioned ones.
This limitation did not affect external resolvers.
With the previous commit we track the state of transactions, so we can
now use our knowledge to avoid processing data for a transaction which
was already closed (via an abort in this case).
This is needed as multiple independent processes are interacting in the
process, so there isn't a simple immediate full-engine stop, and it would
also be bad to teach each and every item how to check if its manager
has failed and what to do in that case.
The pdiff case, which deals (potentially) with many items during its
lifetime, shows this well: e.g. a hashsum mismatch in another file can
abort the transaction the file we try to patch via pdiff belongs to. This
causes some of the items (which are already done) to be aborted with it,
but items still in the process of acquisition continue processing and
will later try to use all the items together, failing in strange ways as
cleanup has already happened.
The chosen solution is to dry up the communication channels instead, by
ignoring new requests for data acquisition, canceling requests which are
not assigned to a queue, and not calling Done/Failed on items anymore.
This means that e.g. already started or pending (e.g. pipelined)
downloads aren't stopped and continue as normal for now, but they remain
in partial/ and aren't processed further, so the next update command will
pick them up and put them to good use while the current process fails
updating (for this transaction group) in an orderly fashion.
Closes: 817240
Thanks: Barr Detwix & Vincent Lefevre for log files
We want to keep track of the state of a transaction overall to base
future decisions on it, but as a pre-requirement we have to make sure
that a transaction isn't committed twice (which happened if the download
of InRelease failed and Release took over).
It also happened to create empty commits after a transaction was already
aborted, in cases in which the Release files were rejected.
This isn't affecting security at the moment, but to ensure this isn't
happening again and can never become bad, a bunch of fatal error messages
are added to make regressions on this front visible.
Hardly noticeable, but given that we have the option to easily enable
it, let's enable it, as every newline in the message is written
individually by the code.
Some methods had it missing, some used the keyword directly, which isn't
a problem as it is a cc file, but for consistency let's stick to our
macro for now.
Michael Vogt [Thu, 17 Mar 2016 07:56:58 +0000 (08:56 +0100)]
Use systemd.timer instead of a cron job
The rationale is that we need to spread the load on the mirrors
that apt update and unattended-upgrades cause. To do so, we
leverage the RandomizedDelaySec feature of systemd. The other advantage
is that the timer is not run at a fixed cron.daily time but
instead every 24h. This also fixes the problem that the randomized
delay in the current apt.cron.daily causes other cron jobs to
be delayed.
A compatibility cron job is also provided for systems that do not
use systemd.
Note that the timer is fired twice a day, but the logic inside
apt.systemd.daily will ensure (via stamp files) that the
servers are hit at most every 24h. Firing twice a day helps
with the worst case update time and it also helps with systems
that are not always on.
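For illustration, a timer along these lines (values assumed; the shipped
unit may differ in details):

  [Unit]
  Description=Daily apt activities

  [Timer]
  OnCalendar=*-*-* 6,18:00
  RandomizedDelaySec=12h
  Persistent=true

  [Install]
  WantedBy=timers.target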
test-apt-download-progress: Use a larger file for testing
This should make the test less flaky, as with a small file we might
have already received all the data before trying to apply rate limits,
which is a constant source of failure on the i386 Ubuntu autopkgtest.
Do not mark packages for keep that we want to remove
If the package is marked for removal, keep it marked for
removal and do not mark it for keep. If we mark it for keep,
we somehow later get to a different stage where it is marked
for unpack instead of removal.
In the example in the bug report, we would get a:
SmartUnPack maas-region-controller-min:amd64 (replace version 2.0.0~alpha3+bzr4810-0ubuntu1 with Segmentation fault
maas-region-controller-min:amd64 was marked for removal, but
we changed it to keep and somehow it thinks that this is to
be replaced now instead of removed (probably because
InstallVer != CandidateVer [with InstallVer = 0]).
Our own gpgv method can declare a digest algorithm as untrusted and
handles these as worthless signatures. If gpgv comes with inbuilt
untrusted digests (which are called weak in official terminology), which
it e.g. does for MD5 in recent versions, we should handle them in the
same way.
To check this we use the most uncommon still fully trusted hash as a
configurable one via a hidden config option to toggle through all
three states a hash can be in.
Using erase(pos) is invalid in our case here, as pos must be a valid and
dereferenceable iterator, which isn't the case for an end-iterator (like
if we had no good signature).
The problem runs deeper still though, as VALIDSIG is a keyid while
GOODSIG is just a longid, so comparing them will always fail.
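Both fixes in sketch form (container and variable names assumed):

  #include <algorithm>
  #include <string>
  #include <vector>

  // GOODSIG longids collected so far
  std::vector<std::string> GoodSigners;

  void DropSigner(std::string const &ValidSigFingerprint)
  {
     // a longid is the tail of the full fingerprint,
     // so compare suffixes instead of whole strings
     auto const pos = std::find_if(GoodSigners.begin(), GoodSigners.end(),
        [&](std::string const &good) {
           return ValidSigFingerprint.size() >= good.size() &&
              ValidSigFingerprint.compare(ValidSigFingerprint.size() - good.size(),
                                          good.size(), good) == 0;
        });
     // erase(pos) requires a dereferenceable iterator: guard against end()
     if (pos != GoodSigners.end())
        GoodSigners.erase(pos);
  }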
tests: reenable basic auth test and add @ in username
On launchpad #1558484 a user reports that @ in the authentication tokens
parsing of sources.list isn't working in an older (precise) version. It
isn't the recommended way of specifying passwords and co (auth.conf is),
but we can at least test for regressions (and in this case test at all…
who was that "clever" boy disabling a test with exit……… oh, nevermind.)
cachefile: Only set members that were initialized successfully
Otherwise, things will just start failing later down the stack,
because (a) the lazy getters do not check if building was successful
and (b) any further getter call would return the invalid object
anyway.
Also initialize VS in pkgCache to nullptr by default.
test framework: Pass -n to lsof to speed up finding the https port
There is no point in resolving all addresses to their names; this
just seriously slows down the setup phase. So just pass -n to not
resolve names anymore.
The test is a bit flaky. In order to make it less flaky, reduce
the speed in each run. To compensate for issues, start with a
higher speed level. Also increase the number of runs to 10.
Furthermore, http gets the same multiple-run loop, and the log
files are changed to indicate the protocol being tested, as it's
not obvious which one fails if it fails in quiet mode.
The epoch stripping in this code has been done since day one, but in
other places where we show a version, epochs are not stripped. If epochs
are present in packages they tend to be important information which we
can't just drop, and especially can't drop "sometimes", as that confuses
users and tools alike – so even if removing code in use for (close to)
18 years feels wrong, it is probably the right choice for consistency.
Michael Vogt [Tue, 15 Mar 2016 13:50:37 +0000 (14:50 +0100)]
Get accurate progress reporting in apt update again
For the non-pdiff case, we can have accurate progress
reporting because after fetching the {,In}Release files we know
how many IndexFiles will be fetched and what size they have.
Therefore init the filesize early (in pkgAcqIndex::Init) and
ensure that Acquire::Pulse() looks at already downloaded
bits when calculating the progress.
Also improve the debug output of Debug::acquire::progress.
Michael Vogt [Tue, 15 Mar 2016 12:13:54 +0000 (13:13 +0100)]
Fix bug where the problemresolve can put a pkg into a heisenstate
The problemresolver will set the candidate version for pkg P back
to the current version if it encounters an impossible-to-satisfy
critical dependency on P. However, it did not set the State of
the package back as well, which led to a situation where P is
in neither Keep, Install, Upgrade nor Delete state.
Note that this can not be tested via the traditional sh based
framework, so I added a python-apt based test for this.
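The shape of the fix, illustratively (the exact calls in the resolver
differ):

  #include <apt-pkg/depcache.h>

  // after reverting P's candidate to the installed version, the
  // state must be reset too – otherwise P ends up in neither
  // Keep, Install, Upgrade nor Delete
  static void RevertToInstalled(pkgDepCache &Cache,
                                pkgCache::PkgIterator const &Pkg)
  {
     Cache.SetCandidateVersion(Pkg.CurrentVer());
     Cache.MarkKeep(Pkg, false, false);
  }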
methods/gpgv: Correctly handle weak signatures with multiple keys
We added weak signatures to BadSigners, meaning that a Release file
signed by both a weak signature and a strong signature would be
rejected, preventing people from migrating from DSA to RSA keys
in a sane way.
Instead of using BadSigners, treat weak signatures like expired
keys: they are no good signatures, and they are worthless.
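Conceptually, the change moves weak signatures out of the 'bad' bucket
(names assumed, mirroring the description above):

  #include <string>
  #include <vector>

  std::vector<std::string> GoodSigners;      // fully valid signatures
  std::vector<std::string> BadSigners;       // BADSIG: actively reject
  std::vector<std::string> WorthlessSigners; // weak digest, expired: ignore

  // at least one good signature and no bad one is required –
  // worthless signatures no longer veto an otherwise good file
  bool IsAcceptable()
  {
     return GoodSigners.empty() == false && BadSigners.empty() == true;
  }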