Let's not exaggerate
Posted Jul 23, 2018 9:53 UTC (Mon) by epa (subscriber, #39769)
In reply to: Let's not exaggerate by Cyberax
Parent article: Deep learning and free software
I guess it's the second part of your original comment that doesn't seem to fit: "Especially since neural networks often use f16 or even f8 precision." The sources of nondeterminism you describe don't seem like they would affect smaller floating-point types any worse than bigger ones.
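To illustrate the point (a minimal sketch of my own, not anyone's actual training code): floating-point addition is not associative, so a nondeterministic reduction order (say, a parallel sum on a GPU) can change the result at any width. Nothing about this is specific to narrow types like f16 or f8; ordinary f64 shows it too:

```python
# Floating-point addition is not associative: regrouping the same
# operands can round differently, at any precision.
a, b, c = 0.1, 0.2, 0.3  # plain Python floats (f64)

left = (a + b) + c   # sum left-to-right
right = a + (b + c)  # same operands, different grouping

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

So if a framework's reduction order varies from run to run, results can differ whether the accumulator is f64, f32, or f16; the rounding error per operation is larger at low precision, but the order-dependence itself is there regardless.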