
.net - XmlSerializer startup HUGE performance loss on 64bit systems

I am experiencing a really HUGE performance loss while calling a simple XmlSerializer.Deserialize() on a class with lots of fields.

NOTE: I'm writing the code without Visual Studio, at home, so it may have some errors.

My serializable class is flat and has hundreds of fields:

[Serializable]
public class Foo
{
    public Foo() { }

    [XmlElement(ElementName = "Field1")]
    public string Field1;

    // [...] 500 Fields defined in the same way

    [XmlElement(ElementName = "Field500")]
    public string Field500;
}

My application deserializes an input string (even small):

 StringReader sr = new StringReader(@"<Foo><Field1>foo</Field1></Foo>");
 XmlSerializer serializer = new XmlSerializer(typeof(Foo));
 object o = serializer.Deserialize(sr);

Running the application on a 32-bit system (or with 32-bit forced via corflags.exe), the first call takes about ONE SECOND (temp serialization class generation and all that), then subsequent calls are close to 0.

Running the application on a 64-bit system, the first call takes ONE MINUTE, then subsequent calls are close to 0.
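
For reference, here is a minimal timing harness, my own sketch rather than code from the original post (it assumes the Foo class above is accessible to the program), that reproduces the first-call vs. second-call measurement and prints the process bitness:

using System;
using System.Diagnostics;
using System.IO;
using System.Xml.Serialization;

static class SerializerTiming
{
    static void Main()
    {
        // 4 = 32-bit process, 8 = 64-bit process
        Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);

        XmlSerializer serializer = new XmlSerializer(typeof(Foo));

        // First call: pays for temp serialization class generation,
        // compilation and JIT of the generated code.
        Stopwatch sw = Stopwatch.StartNew();
        serializer.Deserialize(new StringReader(@"<Foo><Field1>foo</Field1></Foo>"));
        Console.WriteLine("First call:  {0} ms", sw.ElapsedMilliseconds);

        // Second call: everything is already generated and jitted.
        sw = Stopwatch.StartNew();
        serializer.Deserialize(new StringReader(@"<Foo><Field1>foo</Field1></Foo>"));
        Console.WriteLine("Second call: {0} ms", sw.ElapsedMilliseconds);
    }
}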

What could possibly hang the system for such a long time during the first use of an XmlSerializer for a big class on a 64-bit system?

Right now I'm not sure whether to blame temp class generation/removal, XML name table initialization, CAS, Windows Search, the antivirus, or Santa Claus...

SPOILERS

Here are my tests; don't read this if you don't want to be sidetracked by my (possible) analysis mistakes.

  • Running the code from the Visual Studio debugger makes the code run FAST even on 64-bit systems
  • Adding the (totally undocumented) system.diagnostics switch "XmlSerialization.Compile", which prevents the system from removing the temp serialization classes, makes the code run FAST even on 64-bit systems
  • Taking the temp FooXmlSerializer class created by the runtime, including the .cs in my project, and using it instead of the XmlSerializer, makes the code run FAST even on 64-bit systems (see the usage sketch after this list)
  • Creating the same FooXmlSerializer class with sgen.exe, including the .cs in my project, and using it instead of the XmlSerializer, makes the code run FAST even on 64-bit systems
  • Creating the same FooXmlSerializer class with sgen.exe, referencing the Foo.XmlSerializers.dll assembly in my project, and using it instead of the XmlSerializer, makes the code run SLOW even on 64-bit systems (this bugs me a lot)
  • The performance loss only happens if the input to deserialize actually contains a field of the big class (this also bugs me a lot)
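
For the bullets that replace the XmlSerializer with the generated class, a minimal usage sketch; it assumes the typical sgen output (a FooSerializer type in the Microsoft.Xml.Serialization.GeneratedAssembly namespace), but the exact type name and namespace depend on the sgen version and options:

 // FooSerializer derives from XmlSerializer, so it is a drop-in replacement.
 StringReader sr = new StringReader(@"<Foo><Field1>foo</Field1></Foo>");
 XmlSerializer serializer =
     new Microsoft.Xml.Serialization.GeneratedAssembly.FooSerializer();
 object o = serializer.Deserialize(sr);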

To further explain the last point, if I have a class:

[Serializable]
public class Bar
{
    public Bar() { }

    [XmlElement(ElementName = "Foo")]
    public Foo Foo; // my class with 500 fields
}

Deserialization is slow only when the input actually contains a Foo child, even if I have already performed a deserialization:

 StringReader sr = new StringReader(@"<Bar></Bar>");
 XmlSerializer serializer = new XmlSerializer(typeof(Bar));
 object o = serializer.Deserialize(sr); // FAST: no Foo element in the input

 sr = new StringReader(@"<Bar><Foo><Field1>foo</Field1></Foo></Bar>");
 serializer = new XmlSerializer(typeof(Bar));
 o = serializer.Deserialize(sr); // SLOW: the Foo child triggers the first-call cost

EDIT: I forgot to say that I analyzed the execution with Process Monitor, and I don't see any task taking a long time, either in my app, in csc.exe, or in anything Framework-related. The system just seems to be doing other stuff (or I am missing something), like the antivirus, explorer.exe, or Windows Search indexing (I already tried turning those off).


1 Reply


I don't know if this is related at all, but I had an issue with XSLT and found these rather interesting comments from Microsoft about the 64-bit JITter:

The root of the problem is related to two things: First, the x64 JIT compiler has a few algorithms that are quadratically scaling. One of them is the debug info generator, unfortunately. So for very large methods, it really gets out of control.

[...]

some algorithms in the 64 bit JIT that have polynomial scaling. We're actually working on porting the 32 bit JIT compiler to x64, but that won't see the light of day until the next side-by-side release of the runtime (as in "2.0 & 4.0 run side-by-side, but 3.0/3.5/3.5SP1 were 'in-place' releases"). I've switched this over to a 'suggestion' so I can keep it attached to the JIT-throughput work item to make sure this is fixed when the newly ported JIT is ready to ship.

Again, this is about a completely different issue, but the comments about the 64-bit JITter seem to apply generally.
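
If the first call really is the x64 JIT chewing on the huge generated deserialization method, one possible mitigation, sketched here as my own suggestion rather than something from this thread (SerializerWarmUp and Start are illustrative names), is to pay that cost on a background thread at application startup, using a dummy document that actually contains a field, since the question notes that an empty element does not trigger the slow path:

using System.IO;
using System.Threading;
using System.Xml.Serialization;

static class SerializerWarmUp
{
    // Reuse a single serializer instance; constructing one is not free either.
    public static readonly XmlSerializer CachedSerializer = new XmlSerializer(typeof(Foo));

    public static void Start()
    {
        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            // The dummy input must contain at least one field, otherwise the
            // expensive code path is not exercised (see the question's last bullet).
            CachedSerializer.Deserialize(new StringReader(@"<Foo><Field1>warmup</Field1></Foo>"));
        });
    }
}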

