OK, this has been dissected lots of times; I hope I get all these right (I'm rusty).
1. IPvDlangA is deployed on some boxes. This hypothetical protocol is just 100% compatible with IPv4 so these boxes can talk directly to IPv4 boxes.
IPvDlangA is junk - it doesn't get us anywhere we weren't already with IPv4, so it doesn't matter whether it's deployed or not.
2. IPvDlangB is deployed on some boxes. This is a very sophisticated hypothetical protocol. It uses some magic IPv4 flag bits to detect other IPvDlangB boxes and when talking to them it embeds enhanced 128-bit addresses, with the top 96 bits cleared. To transit over the IPv4 network the packets must of course all have valid IPv4 headers too.
Everybody who runs IPvDlangB sees worse performance AND less usable bandwidth, because IPvDlangB wastes plenty of both. So every smart individual or organisation disables or avoids IPvDlangB. No uptake.
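To make the waste concrete, here's a rough sketch of the framing item 2 implies. IPvDlangB is hypothetical, so every name below is invented; the arithmetic is the point: each packet carries two 16-byte addresses whose top 96 bits are always zero, on top of the full IPv4 header it still needs to transit the IPv4 network.

```python
def embed_addr(ipv4_addr: bytes) -> bytes:
    """Widen a 4-byte IPv4 address to the 16-byte IPvDlangB form,
    with the top 96 bits cleared (as described above)."""
    assert len(ipv4_addr) == 4
    return b"\x00" * 12 + ipv4_addr

def dlangb_addr_overhead() -> int:
    """Extra address bytes per packet versus plain IPv4:
    two 16-byte addresses that duplicate the two 4-byte ones
    already present in the mandatory IPv4 header."""
    return 2 * 16 - 2 * 4  # 24 wasted bytes in every packet

print(embed_addr(bytes([192, 0, 2, 1])).hex())
print(dlangb_addr_overhead())
```

Those 24 dead bytes per packet (plus the extra parsing work) are the "wasting a lot of both" above: they cost bandwidth on every link and CPU at every IPvDlangB hop, while carrying zero information that the IPv4 header didn't already have.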
3. IPvDlangC does protocol conversion. It probes next-hop routers (good luck with that), and if they also do IPvDlangC it speaks IPvDlangC; otherwise it converts every packet to IPv4, stripping 96 bits of address. When receiving packets from such routers, it adds 96 bits of zeroes and converts to IPvDlangC.
IPvDlangC is very resource-intensive, far more than mere IP routing. It would be tremendously expensive to deploy in core systems. This expenditure will never be authorised. In terms of development effort, code footprint, attack surface etc. it's much the same as the dual-stack solution today except without any of the benefits.
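A minimal sketch of the conversion step in item 3, with all names invented since the protocol is hypothetical. It also shows the inherent catch: the IPv4-facing direction is lossy, so any address that actually uses the top 96 bits simply cannot cross an IPv4 segment.

```python
def to_ipv4(addr128: bytes) -> bytes:
    """Strip the top 96 bits when forwarding toward an IPv4-only hop.
    Lossy: an address that really uses those bits is unrepresentable."""
    assert len(addr128) == 16
    if addr128[:12] != b"\x00" * 12:
        raise ValueError("address not representable in IPv4; packet dropped")
    return addr128[12:]

def to_dlangc(addr4: bytes) -> bytes:
    """Pad with 96 zero bits when receiving from an IPv4-only hop."""
    assert len(addr4) == 4
    return b"\x00" * 12 + addr4

# Round-trips cleanly only for addresses confined to the low 32 bits:
a4 = bytes([203, 0, 113, 9])
assert to_ipv4(to_dlangc(a4)) == a4
```

And this per-address rewrite is the cheap part: a real IPvDlangC router would also have to rewrite headers, recompute checksums, and keep per-neighbour probe state for every packet - exactly the resource cost the next paragraph's point rests on.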
Do you have more options? We can look at those too (I can't promise to be swift; I have lots of real work to do).