It seems to me that the distinction between 'fast' and 'big' is artificial. It arises because 'fast' must be backed by swap space, and swap space must be pre-allocated, so it cannot be very 'big'.
Many years ago I worked with Apollo workstations running "Domain/OS" - which was Unix-like. They didn't have a swap partition, or a swap file. They just used spare space in the filesystem for swap.
Could that work for Linux? You could probably create a user-space solution that monitored swap usage and created new swap files on demand. But I suspect it wouldn't work very well.
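For what it's worth, a rough sketch of such a user-space solution might look like the Python below: poll /proc/meminfo and add another swap file whenever free swap runs low. The path, chunk size and threshold are made up for the example, and it illustrates why I'm doubtful: it needs root, it only reacts after swap is already tight, and it never gives the space back.

    #!/usr/bin/env python3
    # Illustrative sketch only: grow swap on demand from user space.
    import os
    import subprocess
    import time

    SWAP_DIR = "/var/swap"        # illustrative location
    CHUNK = 1024 ** 3             # add swap 1 GiB at a time
    LOW_WATER = 256 * 1024 ** 2   # act when free swap drops below 256 MiB

    def swap_free_bytes():
        # /proc/meminfo reports a line like "SwapFree:  1234 kB"
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("SwapFree:"):
                    return int(line.split()[1]) * 1024
        return 0

    def add_swap_file(index):
        path = os.path.join(SWAP_DIR, "swap%d" % index)
        # Swap files must be fully allocated (no holes), so use
        # fallocate rather than creating a sparse file.
        subprocess.run(["fallocate", "-l", str(CHUNK), path], check=True)
        os.chmod(path, 0o600)
        subprocess.run(["mkswap", path], check=True)
        subprocess.run(["swapon", path], check=True)

    os.makedirs(SWAP_DIR, exist_ok=True)
    count = 0
    while True:
        if swap_free_bytes() < LOW_WATER:
            add_swap_file(count)
            count += 1
        time.sleep(5)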
Or you could teach Linux filesystems to support swap files that grow on demand - or instantiate space on demand.
Once the swap-over-NFS patches get merged this should be quite possible. The filesystem is told that a given file is being used for swap; it can then load enough metadata up front to allocate space immediately, without any further memory allocation. You could then create a 100G sparse file, add it as a swap destination, and it would "just work". Writing to a tmpfs filesystem would be fast for small files, while big files would spill out into the same space the filesystem uses.
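To be clear about what "just work" would mean: the administrator-visible side might be as small as the following (hypothetical for now, because swapon currently refuses a swap file containing holes).

    #!/usr/bin/env python3
    # Hypothetical usage, assuming a filesystem that can instantiate
    # swap space on demand as described above. On today's kernels the
    # final step fails: swapon rejects swap files with holes.
    import os
    import subprocess

    path = "/var/swap/bigswap"           # illustrative path
    with open(path, "wb") as f:
        f.truncate(100 * 1024 ** 3)      # 100G sparse file, no blocks allocated yet
    os.chmod(path, 0o600)

    subprocess.run(["mkswap", path], check=True)   # write the swap signature
    subprocess.run(["swapon", path], check=True)   # would "just work" after the merge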
(Yes, I realise this is a long-term solution when what is needed is a short-term one.)