<p>On 28/04/2011 08:17, "Dagobert Michelsen" <<a href="mailto:dam@opencsw.org">dam@opencsw.org</a>> wrote:<br>
><br>
> Hi Maciej,<br>
><br>
> On 28.04.2011, at 00:10, Maciej Bliziński wrote:<br>
> > I remember a discussion about potential checks that could help identify problems caused by package upgrades, but problems occurring with other packages rather than the package being upgraded. If we had such automation, it would prevent problems like this one from happening in the future.<br>
> ><br>
> > The basic plan would be:<br>
> ><br>
> > - establish a list of packages potentially affected by the planned upgrade<br>
> > - run checkpkg against this set of packages and record all non-overridden errors thrown by checkpkg; the set should not include the packages from the planned upgrade<br>
> > - run checkpkg again, against a slightly modified set of packages: this time include the new versions of the packages to upgrade<br>
> > - if any new errors are thrown, it means that our upgrade breaks other packages and it should be stopped<br>
> ><br>
> > I'm sure there's a fair amount of devil in the details, but this check looks doable to me. I don't know what false positives we would get from it. They would be hard to override, especially if they concern legacy packages.<br>
><br>
> The heuristics to find out which packages could be affected sound difficult.</p>
<p>The first heuristic could be to look at dependent packages, let's say up to 2 levels. This way we'd catch issues with package renames. </p>
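<p>A rough sketch of how this could fit together with the plan quoted above: walk reverse dependencies up to two levels to get the affected set, then diff the checkpkg error sets before and after the upgrade. Note that reverse_deps and run_checkpkg here are hypothetical callables, not existing checkpkg APIs:</p>
<pre>
# Sketch only: reverse_deps and run_checkpkg are hypothetical
# callables, not existing checkpkg APIs.

def affected_packages(upgraded, reverse_deps, levels=2):
    """Packages that depend on the upgraded ones, up to `levels` hops."""
    affected = set()
    frontier = set(upgraded)
    for _ in range(levels):
        # Walk one level of reverse dependencies.
        frontier = set().union(*(reverse_deps(pkg) for pkg in frontier))
        frontier -= affected | set(upgraded)
        affected |= frontier
    return affected

def new_errors_from_upgrade(catalog, old_pkgs, new_pkgs,
                            reverse_deps, run_checkpkg):
    """Non-overridden errors introduced by swapping old_pkgs for new_pkgs.

    run_checkpkg(catalog, pkgs) is assumed to return the set of
    non-overridden error tags reported for `pkgs` within `catalog`.
    """
    affected = affected_packages(old_pkgs, reverse_deps)
    before = run_checkpkg(set(catalog), affected)
    after = run_checkpkg((set(catalog) - set(old_pkgs)) | set(new_pkgs),
                         affected)
    # Anything in `after` but not in `before` is breakage caused by
    # the upgrade, so the upgrade should be stopped.
    return after - before
</pre>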
<p>> The other idea was to check against the whole catalog and see if more errors<br>
> happen on the updated package, which required performance tuning of the code...</p>
<p>Analyzing the whole catalog is potentially doable, but requires significant work. I think I described the issue before; it's the lookups of packages by file name and catalog. They require a couple of table joins, and keep on being slow despite indexes on all foreign keys. A solution would be schema denormalization, plus additional code to handle catalog updates.</p>
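<p>To make the denormalization idea concrete, here's a minimal sketch; the schema is invented for illustration and is not the real checkpkg database:</p>
<pre>
# Minimal sketch of the denormalization idea; this schema is invented
# for illustration and is not the real checkpkg database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- One wide table keyed on (catalog, path): a file-to-package lookup
-- becomes a single indexed probe instead of a chain of joins.
CREATE TABLE catalog_file_to_pkg (
  catalog_id INTEGER NOT NULL,
  path       TEXT    NOT NULL,
  pkgname    TEXT    NOT NULL
);
CREATE INDEX idx_catalog_path ON catalog_file_to_pkg (catalog_id, path);
""")

# The cost: this table has to be refreshed by extra code every time a
# catalog is updated, which is the bookkeeping mentioned above.
conn.execute("INSERT INTO catalog_file_to_pkg VALUES (?, ?, ?)",
             (1, "/opt/csw/bin/foo", "CSWfoo"))

row = conn.execute(
    "SELECT pkgname FROM catalog_file_to_pkg"
    " WHERE catalog_id = ? AND path = ?",
    (1, "/opt/csw/bin/foo")).fetchone()
print(row[0])  # CSWfoo
</pre>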
<p>That would probably not be sufficient for practical use, as the fastest whole-catalog run ever achieved took about one hour. To insert a batch of packages, you would typically need to run checks for 2 architectures times 3 OS releases times 2 states (before and after the upgrade), that is, 12 whole-catalog runs. At one hour each, this amounts to 12h per batch in the best case, which would limit us to 2 catalog update attempts a day.</p>
<p>There is a lot of room for optimization and parallelism, so we could achieve reasonable performance at some point, but it will require a significant amount of development work and testing. It's hard to make precise estimates, but at the rate of our development we could optimize the file-to-package lookups in about 4 to 8 weeks of focused effort. That wouldn't necessarily make that many whole-catalog runs feasible by itself, but it would be a necessary step.</p>
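<p>As a sketch of the parallelism: the 12 runs are independent of each other, so they could be farmed out to worker processes. Here check_catalog is a placeholder for one whole-catalog checkpkg invocation, and the architecture and OS release lists are assumptions:</p>
<pre>
# Sketch: the 12 whole-catalog runs are independent of each other, so
# they can run in parallel. check_catalog is a placeholder for one
# whole-catalog checkpkg invocation (~1h each today).
from concurrent.futures import ProcessPoolExecutor
from itertools import product

ARCHES = ["sparc", "i386"]                            # 2 architectures
OS_RELEASES = ["SunOS5.9", "SunOS5.10", "SunOS5.11"]  # 3 OS releases
STATES = ["before-upgrade", "after-upgrade"]          # 2 catalog states

def check_catalog(arch, osrel, state):
    # Placeholder: would run checkpkg over the whole catalog for this
    # (arch, osrel, state) combination and return its error set.
    return (arch, osrel, state, set())

if __name__ == "__main__":
    combos = list(product(ARCHES, OS_RELEASES, STATES))  # 2*3*2 = 12
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(check_catalog, *zip(*combos)))
    # With enough workers, wall-clock time approaches the length of a
    # single run instead of the 12h serial total.
</pre>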
<p>Maciej</p>