Link the XML file (I’d remove everything useless, and I’d add just the XML namespace and a superparent tag) to an AsXElement (XML) node, then downstream place a GetElements and further down a GetAttribute; finally an Attribute node.
GetElements will need the text “Value” to find all the Value tags. GetAttribute should be fed with a spread by a Select (String Bin) node, set up like this: Input = the attributes you want to retrieve, in the order you want them; Bin Size = the number of attributes to retrieve (so that you get the full x, y, z triplet of each line); Select = the content of GetElements’ Elements Bin Size pin (a link between the two nodes will suffice).
You should then patch your logic to gather and process the retrieved data.
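For reference, here is a minimal text-code sketch of the same GetElements → GetAttribute data flow, written in Python rather than as a vvvv patch. It assumes the cleaned-up file consists of a single root element containing Value tags with x, y, z attributes; the file name is hypothetical.

```python
# Minimal sketch (not the vvvv patch itself): the GetElements -> GetAttribute
# flow expressed with Python's xml.etree.ElementTree.
import xml.etree.ElementTree as ET

tree = ET.parse("points.xml")  # hypothetical file name
root = tree.getroot()

triplets = []
for value in root.iter("Value"):        # ~ GetElements with the text "Value"
    # ~ GetAttribute fed with the spread "x", "y", "z" (bin size 3)
    x = float(value.get("x"))
    y = float(value.get("y"))
    z = float(value.get("z"))
    triplets.append((x, y, z))

print(triplets)
```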
If using GetSlice feels very heavy and inefficient, chances are that either your file is huge (so it’s a machine issue) or the patched logic needs some tuning. IMHO.
Thank you for the example! I will try it out and see if it solves the performance issue.
The solution on the far right seems to be what I was trying to figure out how to do.
h99, thank you for the description of how to solve the issue. I will try out bjoern’s setups and report back.
Thank you for the help!
Edit:
I was able to get bjoern’s example to work with much better performance than I had previously.
I changed the XPath Query a bit so it samples the correct data from the different point names, each with its x, y, z values.
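To illustrate the kind of XPath filtering I mean, here is a hypothetical sketch; the element and attribute names (Point, name, Value, x/y/z) are assumptions, since the actual file structure isn’t shown in this thread, and the real XPath Query node supports fuller XPath than Python’s ElementTree subset.

```python
# Hypothetical sketch of selecting the Value elements of one named point
# and reading its x, y, z attributes. Names are assumed, not from the thread.
import xml.etree.ElementTree as ET

root = ET.parse("points.xml").getroot()  # hypothetical file name

for value in root.findall(".//Point[@name='P01']/Value"):
    print(value.get("x"), value.get("y"), value.get("z"))
```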