In the plain git implementation, adding a 5GB file takes a while: git stores it as a blob in the .git directory. The blob is zlib-compressed, but since large files tend to be binary and therefore barely compressible, it will still occupy almost 5GB. Simply adding the file to git thus doubles the disk usage. That is the price of being able to restore the file even when the computer is offline.
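A small sketch of why compression does not help here: zlib (the algorithm git uses for loose objects) gains almost nothing on incompressible binary data. The random buffer below is a hypothetical stand-in for a large media file.

```python
import os
import zlib

# Simulate a binary media file with random (incompressible) bytes —
# a 1 MB stand-in for the 5 GB file from the text.
data = os.urandom(1_000_000)
compressed = zlib.compress(data)
ratio = len(compressed) / len(data)

# Random data compresses to roughly 100% of its original size,
# so the stored blob is nearly a full second copy of the file.
print(f"compressed to {ratio:.1%} of the original size")
```

A text file would show a much smaller ratio; for already-compressed formats such as MP4 or JPEG the result looks like the random case above.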
If the file system implements copy-on-write, a spare copy of the file can be kept relatively cheaply; however, git does not take advantage of that. On Linux, copy-on-write file systems such as Btrfs, or XFS with reflinks, are common by now. An optimization in git could therefore be to keep large files unaltered in the .git directory, accompanied by a small side-car file holding the metadata.
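To make the side-car idea concrete, here is a minimal sketch of such a scheme. The function name and the JSON layout are my own assumptions, not anything git implements: the large file is stored unaltered under its content hash, and a tiny side-car records the metadata. On a copy-on-write file system the copy step could be a cheap reflink instead.

```python
import hashlib
import json
import shutil
from pathlib import Path

def store_large_file(src: Path, store_dir: Path) -> Path:
    """Hypothetical side-car scheme: keep the file unaltered in the
    store, named by its content hash, plus a small JSON side-car."""
    store_dir.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha1(src.read_bytes()).hexdigest()
    blob = store_dir / digest
    if not blob.exists():
        # Content-addressed: identical files are stored only once.
        # On Btrfs/XFS this copy could be a reflink and cost almost nothing.
        shutil.copy2(src, blob)
    sidecar = blob.with_suffix(".json")
    sidecar.write_text(json.dumps({"sha1": digest, "size": src.stat().st_size}))
    return blob
```

Storing the same file twice returns the same blob path, so moving or re-adding the file never duplicates the 5GB payload.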
The git repository will not inflate in size when the 5GB file is moved or renamed, because blobs are addressed by their content, not by their path. But every time the file's content changes, the storage requirement grows by another 5GB. Luckily, large user files are usually media files, and these rarely change; using git for video or audio editing data is not a good idea. I have no idea how e.g. Box or Dropbox deal with such data.
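The reason moving the file is free can be shown in a few lines: a git blob id is the SHA-1 of the content prefixed with a `blob <size>\0` header, with no path involved, so the renamed file maps to the object that already exists.

```python
import hashlib

def git_blob_hash(data: bytes) -> str:
    """Compute the object id git assigns to a blob: SHA-1 over the
    content prefixed with a 'blob <size>\\0' header."""
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

# The id depends only on the content. "Moving" the file just points a
# new path at the same object, so the repository does not grow.
payload = b"hello\n"
print(git_blob_hash(payload))  # same output as `git hash-object` on this content
```

Only when the content itself changes does a new id, and therefore a new full-size blob, appear.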