Take a look at the size of some commonly used toolkit, say, GTK+3. That's 14 megabytes of compressed source, though some of that is bound to be data rather than code. Whatever: 14 MB, I'll run with it.
Now suppose you write a hello world application: that's 14 MB plus your own source, probably no more than 1 kB. Now write an Xlib application: your own code is probably a bit longer, but it eliminates 14 MB of code you did not need. If you wish to understand the total system, you need to understand all your own code, all of GTK+, then all of Xlib, and whatever is underneath that. Without GTK+, you have eliminated that much abstraction you did not need to understand.
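For concreteness, here is roughly what the raw-Xlib hello world looks like, a minimal sketch along the lines of the classic example from the Xlib documentation (compile with `-lX11`):

```c
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    int screen = DefaultScreen(dpy);

    /* A bare window with a 1-pixel border, nothing themed about it. */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 200, 100, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    /* Hand-rolled event loop: redraw on expose, quit on any key. */
    XEvent ev;
    for (;;) {
        XNextEvent(dpy, &ev);
        if (ev.type == Expose)
            XDrawString(dpy, win, DefaultGC(dpy, screen),
                        20, 50, "hello, world", 12);
        if (ev.type == KeyPress)
            break;
    }
    XCloseDisplay(dpy);
    return 0;
}
```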
Therefore: if I am allowed to stipulate that complexity can be counted this way, any additional code you add is a net loss, because it makes the total system more complex, no matter how pretty the world constructed in the cocoon of the new abstraction. But is this a valid argument? I don't really think so. GTK+ probably allows you to avoid dealing with Xlib entirely, so you never have to learn any of it and still get applications that run under X. And of course it gives you nice features like themeability practically for free. For most programmers it's actually something better than Xlib (which is why practically nobody writes raw Xlib programs).
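For contrast, here is a sketch of the GTK+3 equivalent (built with `pkg-config --cflags --libs gtk+-3.0`). It is shorter than the Xlib version, there is no hand-written event loop, and it never touches X directly:

```c
#include <gtk/gtk.h>

int main(int argc, char *argv[]) {
    gtk_init(&argc, &argv);

    /* A themed top-level window with a label; GTK+ handles drawing,
     * exposure, and the event loop for us. */
    GtkWidget *win   = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    GtkWidget *label = gtk_label_new("hello, world");
    gtk_container_add(GTK_CONTAINER(win), label);
    g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_widget_show_all(win);
    gtk_main();
    return 0;
}
```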
All of this is beside the point. My personal experience with abstractions that simply wrap some other functionality is that they always make things worse. Let's call this the "driver pattern". These drivers are never quite as nice as the abstraction's API promises, and the abstraction hides information from me that I generally need to make the application work well. Maybe I don't get good error reporting, or I can't access all the functionality, or whatever. So I believe criticism of the driver pattern is often justified, because it rarely actually helps; rather, it seems to me that a lot of people are simply deluded into counting only the upsides while waving away the downsides.
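To make the error-reporting complaint concrete, here is a contrived sketch of the driver pattern. Every name in it is hypothetical, invented purely for illustration: a backend that reports rich errors, wrapped by a driver that flattens them into a bare success flag.

```c
/* Hypothetical illustration of the "driver pattern": all names here
 * are made up for this example. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int  code;          /* e.g. 111 = connection refused  */
    char detail[128];   /* human-readable explanation     */
} backend_error;

/* Stub backend: always fails, but says exactly why. */
static int backend_connect(const char *host, backend_error *err) {
    err->code = 111;
    snprintf(err->detail, sizeof err->detail,
             "connection to %s refused on port 5432", host);
    return -1;
}

/* The driver's tidy API: the diagnosis is silently discarded. */
static bool driver_connect(const char *host) {
    backend_error err;
    return backend_connect(host, &err) == 0;
}

int main(void) {
    if (!driver_connect("db.example.com"))
        /* All we can say is "it failed"; the wrapper ate the reason. */
        fprintf(stderr, "connect failed (but why?)\n");
    return 0;
}
```

The wrapper's signature looks cleaner than the backend's, but anyone debugging a failed connection now has to punch through the abstraction to find out what actually went wrong.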