pijul_org / pijul

#90 Wrong patches caused problems in Cargo.lock

Opened by pijul_org, on May 16, 2017
pmeunier commented on May 16, 2017

I just reset our main repository: several patches were causing conflicts in pijul/Cargo.lock. Investigating the issue now.

pmeunier commented on May 16, 2017

Alright, a bug in apply had been corrected, but the fix caused an old patch to become self-conflicting. The current version on the Nest is temporary.

lthms commented on May 16, 2017

Thanks for the news and the explanation; it is much appreciated!

lthms commented on May 17, 2017

For my own understanding, in case I run into this kind of bug one day: would it have been possible to just have one patch that deletes Cargo.lock and another that adds it again, rather than resetting the whole history?

pmeunier commented on May 17, 2017

Yes. I tried to do that, but then I ran into other conflicts because I had not separated my patches correctly.

Also, since the file was quite big, it took me a really long time (several days) to figure out what was going on, and I needed to move on.

It is quite unlikely to ever happen to new projects. What triggered it in my case was that I had recorded changes to Cargo.lock from the Windows VM I use to produce the Windows builds of Pijul.

For the record, here is what happened: it all started with a conflict in Cargo.lock. Prior to version 0.5.9, not all conflicts were solvable. For instance, simply deleting the conflict markers would produce an empty diff, so the resolution would not be recorded.

This has changed recently, and the only remaining unsolvable conflicts are cycle conflicts. At the moment, the only way to get them is through bugs in libpijul/src/optimal_diff.rs (further research on a better diff algorithm could potentially produce other such patches).

But I only figured this out after resetting history, getting a working version again, and testing on different examples.

lthms commented on May 19, 2017

Thanks for the explanation, for pijul and all your work!

lthms commented on June 5, 2017

I think we can close this issue now.