Question
Running a quick experiment related to "Is double Multiplication Broken in .NET?" and reading a couple of articles on C# string formatting, I thought that this:
{
double i = 10 * 0.69;
Console.WriteLine(i);
Console.WriteLine(String.Format(" {0:F20}", i));
Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
Console.WriteLine(String.Format("= {0:F20}", 6.9));
}
would be the C# equivalent of this C code:
{
double i = 10 * 0.69;
printf ( "%f
", i );
printf ( " %.20f
", i );
printf ( "+ %.20f
", 6.9 - i );
printf ( "= %.20f
", 6.9 );
}
However, the C# code produces the output:
6.9
6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000
despite i displaying in the debugger as equal to 6.89999999999999946709 (rather than 6.9), compared with C, which shows the precision requested by the format:
6.900000
6.89999999999999946709
+ 0.00000000000000088818
= 6.90000000000000035527
What is going on?
( Microsoft .NET Framework Version 3.51 SP1 / Visual Studio C# 2008 Express Edition )
I have a background in numerical computing and experience implementing interval arithmetic - a technique for estimating the errors caused by the limits of precision in complicated numerical systems - on various platforms. To get the bounty, don't try to explain the storage precision - in this case it's a difference of one ULP of a 64-bit double.
To get the bounty, I want to know how (or whether) .NET can format a double to the requested precision, as is visible in the C code.
Answer
The problem is that .NET will always round a double to 15 significant decimal digits before applying your formatting, regardless of the precision requested by your format and regardless of the exact decimal value of the binary number.
I'd guess that the Visual Studio debugger has its own format/display routines that directly access the internal binary number, hence the discrepancies between your C# code, your C code and the debugger.
There's nothing built-in that will allow you to access the exact decimal value of a double, or to format a double to a specific number of decimal places, but you could do this yourself by picking apart the internal binary number and rebuilding it as a string representation of the decimal value.
Alternatively, you could use Jon Skeet's DoubleConverter class (linked from his "Binary floating point and .NET" article). This has a ToExactString method which returns the exact decimal value of a double. You could easily modify it to round the output to a specific precision.
double i = 10 * 0.69;
Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));
// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625