How deterministic is floating point inaccuracy?

Question

I understand that floating point calculations have accuracy issues, and there are plenty of questions explaining why. My question is: if I run the same calculation twice, can I always rely on it to produce the same result? What factors might affect this?

  • Time between calculations?
  • Current state of the CPU?
  • Different hardware?
  • Language/platform/OS?
  • Solar flares?

I have a simple physics simulation and would like to record sessions so that they can be replayed. If the calculations can be relied on, then I should only need to record the initial state plus any user input, and I should always be able to reproduce the final state exactly. If the calculations are not reliable, errors at the start may have huge implications by the end of the simulation.

I am currently working in Silverlight, though I would be interested to know whether this question can be answered in general.

Update: The initial answers indicate yes, but apparently this isn't entirely clear-cut, as discussed in the comments on the selected answer. It looks like I will have to do some tests and see what happens.

Accepted answer

From what I understand, you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standard (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.

Specific problems that I am aware of:

Some operating systems allow you to set the mode of the floating-point processor in ways that break compatibility.

Floating-point intermediate results often use 80-bit precision in registers, but only 64 bits in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms will give you a way to force all results to be truncated to the in-memory precision.

Standard library functions may change between versions. I gather that there are some not-uncommonly-encountered examples of this in gcc 3 vs 4.

The IEEE standard itself allows some binary representations to differ... specifically NaN values, but I can't recall the details.
