pass versioned provides to external solvers in EDSP
The EDSP output generated by apt didn't include the versioned provides
information, so every provide looked like an unversioned one in the
eyes of an external resolver.
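As an invented illustration, a scenario stanza with a versioned provide
could look roughly like this – previously the version in parentheses was
simply dropped:

    Package: libfoo1
    Architecture: amd64
    Version: 1.2-1
    Provides: libfoo-abi (= 1.2-1)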
pkgAcqChangelog has the default behaviour of downloading a changelog to
a temporary directory (inside /tmp, not /tmp directly), which is cleaned
up on shutdown, but this can be overridden to store the changelog more
permanently – but that carries a permission problem.
For changelogs we can 'easily' solve this by always downloading to a
temporary directory and only moving the file out of there on done.
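A minimal sketch of that idea (names and error handling simplified, not
the actual pkgAcqChangelog code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string>

    // download into a freshly created directory below /tmp and only move
    // the file to its final place once the download succeeded ("done")
    std::string DownloadChangelogTo(std::string const &FinalFile)
    {
       char tmpdir[] = "/tmp/apt-changelog-XXXXXX";
       if (mkdtemp(tmpdir) == nullptr)
          return "";
       std::string const partial = std::string(tmpdir) + "/changelog";
       // ... fetch the changelog into 'partial' here ...
       if (rename(partial.c_str(), FinalFile.c_str()) != 0)
          return "";   // e.g. a cross-filesystem move would need a copy instead
       return FinalFile;
    }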
use local changelog from /usr/share/doc if possible
If pkgAcqChangelog is told to acquire the changelog for a version it
will check first if this version is installed on the disk and if so will
use the local changelog in /usr/share/doc (possibly/likely gz
compressed) instead of downloading the file from the web.
An option is provided to disable this, which is enabled by default for
the Ubuntu vendor as they truncate the local changelogs – and for apt's
--print-uris action.
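A rough sketch of the check (the helper and the exact option name used
here are assumptions for illustration, not the actual implementation):

    #include <string>
    #include <apt-pkg/configuration.h>
    #include <apt-pkg/fileutl.h>

    // prefer a changelog already shipped in /usr/share/doc for the
    // installed version instead of downloading it again
    std::string LocalChangelogFor(std::string const &pkg)
    {
       if (_config->FindB("Acquire::Changelogs::AlwaysOnline", false))
          return "";
       std::string const dir = "/usr/share/doc/" + pkg + "/";
       for (auto const name : { "changelog.Debian.gz", "changelog.gz", "changelog" })
          if (FileExists(dir + name))
             return dir + name;
       return "";   // nothing usable found, fall back to downloading
    }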
test: use our special downloaded dir for 'source' result
Otherwise the test fails when run as root, seeing the
W: Can't drop privileges for downloading as file 'foo_1.tar.gz' couldn't be
accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
warning in a command which isn't supposed to warn.
get dpkg lock in build-dep if cache was invalid again
Regression introduced in a249b3e6fd798935a02b769149c9791a6fa6ef16, which
in the case of an invalid cache would build the first part unlocked and
later pick up the (still unlocked) cache for further processing, so the
system never got locked and apt would end up complaining about being
unable to release the lock at shutdown.
The far more common case of having a valid cache worked as expected and
hence covered up the problem – especially as tests which would have
noticed it are simulations only, which do not lock.
Closes: 814139
Reported-By: Balint Reczey <balint@balintreczey.hu>
Reported-By: Helmut Grohne <helmut@subdivi.de> on IRC
If we just reopened the file, we also need to reset the current
seek position when we reset the buffer, otherwise the code will
not try to seek to the position given to Skip (from 0), but will
try to seek to old offset + the position given to Skip.
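A small sketch of the problem (illustrative names, not the real FileFd
internals):

    #include <fcntl.h>
    #include <unistd.h>
    #include <string>

    struct BufferedFd
    {
       int fd = -1;
       off_t seekpos = 0;            // absolute offset the buffer refers to
       std::string path;

       bool Skip(off_t Over)
       {
          // Skip is relative, so it builds on the recorded offset
          seekpos += Over;
          return lseek(fd, seekpos, SEEK_SET) == seekpos;
       }
       bool Reopen()
       {
          close(fd);
          fd = open(path.c_str(), O_RDONLY);
          // (buffered data would be dropped here as well)
          // resetting the buffer alone is not enough: without clearing the
          // offset a following Skip(pos) seeks to oldOffset + pos, not pos
          seekpos = 0;
          return fd != -1;
       }
    };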
avoid building dependency tree in 'source' command
We don't need the dependencies for obvious reasons and we don't need the
candidate version either, so building a pkgDepCache is wasted effort,
which we can stop doing now that build-dep cleared the path.
The latter just calls the former, but the latter needs the full-blown
dependency cache to be initialized, which is a very costly operation and
is no longer done that early in the run as we would need to throw away
and rebuild it again after we got all the information about source packages.
As we end up with a nullptr for the pkgDepCache, we use a slightly
longer calling convention to make sure that we use the pkgCache
directly, avoiding nullptr induced segfaults and costly operations.
It's a fairly expensive call and it's called on every package,
even though it's usually only used when we're interested in
a small number of packages.
Long description is currently only shown by this function
when using `apt search X --full`.
On my PC, this patch speeds up `apt list` by roughly 20%
and `apt list --installed` by 1-2%.
simple_buffer::write: Use free() instead of maxsize - size()
We want to check whether the amount of free space is smaller
than the requested write size. Checking maxsize - size() is
incorrect once bufferstart > 0, as size() = end - start.
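A stripped-down sketch of the arithmetic (member names follow the
description above, not necessarily the real simple_buffer):

    #include <cstddef>

    struct buffer_sketch
    {
       size_t start = 0, end = 0, maxsize = 4096;

       size_t size() const { return end - start; }    // bytes currently buffered
       size_t free() const { return maxsize - end; }  // bytes writable at the end

       bool fits(size_t const requested) const
       {
          // maxsize - size() == maxsize - end + start, which overstates the
          // free space as soon as start > 0 and lets a write overrun the buffer
          return requested <= free();
       }
    };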
support <libc>-<kernel>-<cpu> in architecture specs
APT has a different understanding than dpkg (#748936) of what matches and
what doesn't match an architecture specification, as it isn't converting
back (and forth) to Debian triplets. That has to be solved eventually one
way or the other, but until that happens we change the matching in
apt so that porters can continue their work on non-gnu libc ports even
if policy doesn't specify that yet (and dpkg just supports it "by
accident" via triplets).
The initial patch was reformatted, fixed in terms of patterns containing
"any-any" and of expanding an arch without libc to gnu while a
pattern expands libc to any; the parsedepends test was fixed (the new
ifs were inserted one step too early) and another test just for the
specifications was added.
Closes: #812212
Thanks: Bálint Réczey for initial patch
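The matching idea can be sketched as follows (an illustration under
simplifying assumptions – pure CPU patterns and other corner cases are
ignored, and this is not apt's actual implementation):

    #include <string>
    #include <sstream>
    #include <vector>

    static std::vector<std::string> Split(std::string const &spec)
    {
       std::vector<std::string> parts;
       std::stringstream ss(spec);
       for (std::string p; std::getline(ss, p, '-'); )
          parts.push_back(p);
       return parts;
    }
    // expand the architecture to <libc>-<kernel>-<cpu> with "gnu"/"linux"
    // as defaults, expand a missing libc in the pattern to "any" and
    // compare component-wise
    static bool ArchMatches(std::string const &pattern, std::string const &arch)
    {
       auto P = Split(pattern), A = Split(arch);
       if (A.size() == 1) A.insert(A.begin(), "linux");
       if (A.size() == 2) A.insert(A.begin(), "gnu");
       if (P.size() == 2) P.insert(P.begin(), "any");
       if (P.size() != 3 || A.size() != 3) return false;
       for (int i = 0; i < 3; ++i)
          if (P[i] != "any" && P[i] != A[i])
             return false;
       return true;
    }
    // ArchMatches("linux-any", "amd64") and
    // ArchMatches("musl-linux-any", "musl-linux-armhf") hold, while
    // ArchMatches("musl-linux-any", "amd64") does not.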
only warn about missing/invalid Date field for now
The Date field in the Release file is useful to avoid allowing an
attacker to 'downgrade' a user to earlier Release files (and hence to
older states of the archive with open security bugs). It is also needed
to allow a user to define min/max values for the validation of a Release
file (with or without the Release file providing a Valid-Until field).
APT wasn't formally requiring this field before, though, and the (arguably
not binding and still incomplete) online documentation declared it
optional (until now), so we downgrade the error to a warning for now to
give repository creators a bit more time to adapt – the bigger ones
should have had a Date field for years already, so the affected group should
be small in any case.
It should be noted that earlier apt versions had this as an error
already, but only showed it if a Valid-Until field was present (or the
user tried to use the configuration items for min/max valid-until).
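The min/max values mentioned are (presumably) set via the
Acquire::Min-ValidTime / Acquire::Max-ValidTime options, e.g.:

    // apt.conf fragment (values in seconds, chosen just for illustration);
    // both bounds are computed relative to the Date field of the Release file
    Acquire::Min-ValidTime "86400";
    Acquire::Max-ValidTime "604800";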
deal better with (very) small apt::cache-start values
It is a bit academic to support values which aren't big enough to fit even
the hashtables without resizing, but cleaning up ensures that we do the
right thing (aka not segfaulting) even if something goes wrong in these
deep layers. You still can't have very, very small values though…
get sources for packages in multiple releases again
In 321213f0dcdcdaab04e01663e7a047b261400c9c Andreas Cadhalpun corrected
the incorrect overriding of earlier better-fitting results with later
(semi-)matches – but that broke the case in which packages are in multiple
releases in the same version (and the user has both releases configured).
In commit a221efc331693f8905da870141756c892911c433 I promoted the source
package name and version to the binary cache for faster access by e.g.
EDSP, but due to changing the interpretation length too soon we always
ignored the version part of the Source field, so that packages ended up
having the binary version as source version – which, while usually just
fine, is wrong for binary rebuilds.
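For illustration (package name invented): in a binary rebuild the stanza
carries the rebuilt version while the Source field names the unchanged
source version –

    Package: foo
    Version: 1.0-1+b1
    Source: foo (1.0-1)

With the interpretation cut off after the name, 1.0-1+b1 was recorded as
the source version here as well.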
Avoid the dependency on a specific current path for the tar test and
ensure that _system is correctly initialized (gcc-6 runs into a segfault
otherwise and with it fixed starts to depend on the multi-arch
configuration of the running system… not good).
drop explicit check for EWOULDBLOCK if it has the same value as EAGAIN
gcc correctly reports that we check for the same value twice, except
that the manpage of read(2) tells us to do it for portability, so to
make both sides happy let's add a little #if'ing here.
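The #if'ing amounts to something like this (a sketch, names invented):

    #include <errno.h>
    #include <unistd.h>

    static ssize_t ReadRetrying(int const Fd, void *Buf, size_t const Size)
    {
       ssize_t Res;
       do
          Res = read(Fd, Buf, Size);
       while (Res < 0 && (errno == EINTR || errno == EAGAIN
    #if EAGAIN != EWOULDBLOCK
                          || errno == EWOULDBLOCK   // only where it is a distinct value
    #endif
                         ));
       return Res;
    }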
This would mess up reference counting and should not be allowed
(it could be implemented correctly, but it would not be efficient
and we do not need it).
build-dep was implemented by parsing the build-dependencies of a package
and figuring out which packages to install/remove based on this. That
means that for the first level of dependencies build-dep was
implementing its very own resolver with all the benefits (aka: bugs)
that not using the existing resolver for all levels gives us.
Making this work involves generating a dummy binary package with fitting
Depends and Conflicts and as we can't create them out of thin air the
cache generation needs to be involved so we end up writing a Packages
file which we want to parse – after we have parsed the other Packages
files already. With .dsc/.deb files we could add them before we started
parsing anything.
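Such a generated stanza could look roughly like this (all values invented
for illustration; the real field set may differ):

    Package: builddeps:foo
    Version: 1
    Architecture: any
    Depends: debhelper (>= 9), libexample-dev (>= 2.0)
    Conflicts: libold-example-dev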
With a bit of care we can avoid generating too much data we have to
throw away again (as many parts assume that e.g. the count of packages
doesn't change midair), so that on the speed front there shouldn't be
much of a difference, but output can be slightly confusing as if we have
a completely valid cache on disk the "Reading package lists... Done" is
printed twice – but apt is pretty quick about it in that case.
use consistently the last : as name:arch separator
Proper Debian packages do not contain ':' in the package name, so for
real packages this is a non-issue, but apt itself frequently makes use
of packages with such an illegal name for internal purposes.
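In code the split is simply an rfind (sketch, assuming the id always
carries an architecture; the example name is invented):

    #include <string>

    static void SplitNameArch(std::string const &id, std::string &name, std::string &arch)
    {
       auto const colon = id.rfind(':');
       name = id.substr(0, colon);     // keeps any ':' belonging to the name itself
       arch = id.substr(colon + 1);
    }
    // e.g. an internal "builddeps:foo:amd64" splits into "builddeps:foo" and "amd64"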
If you have chosen to install a foreign architecture provider it is
more reasonable to keep this provider instead of removing it to
replace it with a newer version from a (usually) more preferred arch.
To resolve dependencies like "pkg:arch" we create a package with the
name "pkg:arch" and the architecture "any". We create these packages
only if a dependency needs it as these kind of dependencies aren't that
common. This commit ensures that in the event this architecture-specific
dependency is the only relation the package has, we still create the
underlying package to have it available in provides resolution.
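As an invented example, a relation like

    Depends: libfoo-dev:i386 (>= 1.0)

results in a cache entry named "libfoo-dev:i386" with architecture "any",
which the real libfoo-dev package of architecture i386 can then provide.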
I only looked at parameters in the previous commit, which was
not enough: One place also generated local string views. In this
case, we only need to make ArchA dynamic, as NameA is not used
after the FindPkg() call.
Remap StringView instances pointing into the cache
It turns out that StringViews might need to be remapped in some
places because they come from the cache. For example, some sites
pass a Ver.VerStr() to NewProvides().
Such a StringView would become invalid during the duration of
the call if the cache is remapped, causing the program to die
with a segmentation fault.
We can take care of those issues by remapping string views in
the same way we remap all the iterators. String views are only
remapped if they point into the cache, though; this allows us
to write more generic code on the callee side without having
to check whether the view points into the cache or not.
That's not as efficient as possible, but the overhead does not
appear to be measurable.
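The check itself can be sketched like this (an illustrative helper around
apt's internal StringView, not the actual code):

    #include <apt-pkg/string_view.h>

    // rebase a view onto the new mmap location, but only if it actually
    // points into the (old) cache region – other views are left alone
    static void ReMapStringView(APT::StringView &View, char const * const oldMap,
                                char const * const newMap, size_t const mapSize)
    {
       char const * const data = View.data();
       if (data >= oldMap && data < oldMap + mapSize)
          View = APT::StringView(newMap + (data - oldMap), View.size());
    }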
string_view: Drop constexpr constructor for standard compatibility
APT::StringView is supposed to be a temporary measure, until support
for the standardized string_view is widely available. Introducing
additional unstandardized features just makes porting to the
standard version harder.
The constexpr constructor also won't have any real effect on most
systems, as the compiler will happily optimise the strlen() call
away for constant strings.
return correct position in APT::StringView::(r)find
The position returned is supposed to be the position of the character
counted from the start of the string, but when using the substring
overloads the skipped-over prefix wasn't taken into account. The pos
parameter of rfind also had the wrong semantics.
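The gist of the fix, sketched on a simplified view type (not the actual
StringView code):

    #include <cstddef>
    #include <cstring>

    struct view_sketch
    {
       char const *data_; size_t size_;
       static constexpr size_t npos = static_cast<size_t>(-1);

       size_t find(char const c, size_t const pos) const
       {
          if (pos >= size_) return npos;
          char const * const found =
             static_cast<char const *>(memchr(data_ + pos, c, size_ - pos));
          // report the position relative to the start of the view,
          // not relative to the position the search started from
          return found == nullptr ? npos : static_cast<size_t>(found - data_);
       }
    };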
Introduced in 9d2a8a7388cf3b0bbbe92f6b0b30a533e1167f40 apt tries to
merge actions like downloading the same (as judged by hashes) file
into doing it once. The implementation was very simple in that it does no
planning at all. Turns out that it works 90% of the time just fine, but
has issues in more complicated situations in which items can be in
different stages downloading different files, emitting potentially the
"wrong" hash – like while pdiffs are worked on we might end up copying
the patch instead of the result file, giving us very strange errors in
return. Reverting the change until we can implement a better planning
solution seems to be the best course of action, even if it's sad.
fix M-A:foreign provides creation for unknown archs
Packages of architectures which belong neither to the native nor to a
foreign architecture (dubbed barbarian for now) and which are marked
M-A:foreign still provide in their own architecture, even if not for
others. Also, other M-A:foreign (and allowed) packages provide in these
barbarian architectures.
evaluate sourceslist-list-format entity in vendors sources.list
Parsing XML entity files in shell isn't exactly nice and neither is doing
the substitution with a while-read loop, but it seems to be
good enough for the moment without changing too much.
If a package has no description, we would crash in search. While
this should not happen, there seem to be some weird cases where
it does.
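The guard amounts to something like this (a sketch using the public
iterator/records API, names invented where needed):

    #include <string>
    #include <apt-pkg/pkgcache.h>
    #include <apt-pkg/pkgrecords.h>

    // only look up the record if a description actually exists
    static std::string LongDescOf(pkgCache::VerIterator const &Ver, pkgRecords &Recs)
    {
       pkgCache::DescIterator const Desc = Ver.TranslatedDescription();
       if (Desc.end() == true)
          return "";   // no description at all – return nothing instead of crashing
       return Recs.Lookup(Desc.FileList()).LongDesc();
    }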
A safer way might be to make the whole parser thing safe
against this, so pkgRecords::Lookup(Desc.FileList()) works
and returns a parser where all values are empty. This would
also fix all other instances of this bug, if there are any.
remove uncompressed leftover partial file before pdiff bootstrap
The code already deals with compressed leftovers, but forgot the
uncompressed files. The opportunity is used to reorder this code and
add debug messages about the actions taken, as well as to produce such a
leftover file in the associated testcase.
use filesize of compressed pdiffs for the limit if possible
With the addition of the $HASH-Download field in the .diff/Index we got
the size of the compressed patches for 'free', so if that information is
available we can use it for a more fitting calculation of the size
requirements of the patches vs. the complete file.
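With invented names, the comparison boils down to something like:

    #include <vector>

    struct PatchInfo                       // stand-in for the real patch metadata
    {
       unsigned long long patch_size;      // uncompressed size from $HASH-Patches
       unsigned long long download_size;   // compressed size from $HASH-Download, 0 if absent
    };

    static bool PatchingIsWorthIt(std::vector<PatchInfo> const &patches,
                                  unsigned long long const targetSize,
                                  unsigned long long const limitPercent /* e.g. 100 */)
    {
       unsigned long long total = 0;
       for (auto const &p : patches)
          total += (p.download_size != 0) ? p.download_size : p.patch_size;
       return total * 100 <= targetSize * limitPercent;
    }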
Note that this predicts too small a size in the transition case in which
the information isn't available for all patches, but figuring this out
would be a lot of code for practically nothing as only one update can
ever be in such a transition phase.
tests: limit autotest-functionname generation to sane characters
Some (older) versions of bash seem to be allergic to a method named
"aptautotest_grep_^apt" (note the caret). It is unlikely that we are going to
write autotests for such commands so we could just skip those, but let's
instead just use "normal" characters in the names and strip the rest as
we already did with the (arguably more common) '-'.
support '-' and no parameter for stdin in apt-helper cat-file
This way it works more similarly to the compressor binaries, which we
can thus relieve of their job in the test framework, avoiding the
need to add e.g. liblz4-tool to the test dependencies.
Downloading and storing are two different operations where different
compression types can be preferred. For downloading we provide the
choice via Acquire::CompressionTypes::Order as there is a choice to
be made between download size and speed – and limited by what's available
in the repository.
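For example (apt.conf fragment, the chosen order is just an illustration):

    // try to download xz-compressed indexes first, then fall back to gzip
    Acquire::CompressionTypes::Order { "xz"; "gz"; };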
Storage on the other hand has all compressions currently supported by
apt available and to reduce runtime of tools accessing these files the
compression type should be a low-cost format in terms of decompression.
apt traditionally stores its indexes uncompressed on disk, but has
options to keep them compressed. Now that apt downloads additional files
we also deal with files which simply can't be stored uncompressed as
they are just too big (like Contents for apt-file). Traditionally they
are downloaded in a low-cost format (gz) as repositories do not provide
other formats, but there might be even lower-cost formats and for
download we could introduce higher-cost formats in the repositories.
Downloading an entire index potentially requires recompression to
another format, so an update takes potentially longer – but big files
are usually updated via pdiffs, which have to de- and re-compress on the
fly anyhow, so there is no extra time needed and in general it seems to
be beneficial to invest the time in update to save time later on file
access.
allow pdiff bootstrap from all supported compressors
There is no reason to enforce that the file we start the bootstrap with
is compressed with a compressor which is available online. This allows
us to change the on-disk format as well as deal with repositories
adding/removing support for a specific compressor.
ensure compression cleanup even without lists-cleanup
If we store files compressed in lists/ and the file switched compression
formats we happened to retain the "old" format, but by default the
cleanup process caught this oversight and removed the file.
[The initial situation described doesn't arise as we store no files by
default compressed and even with apt-file configuring Contents files, we
don't really have that problem as there is just .gz files for those.]
We solve this by removing any uncompressed file as well as any compressed
file (in a format we support) just before we move the 'new' version of the file in.