I was looking at the so-called “round 6” of the performance tests published on esbperformance.org, and was wondering how the WSO2 ESB could have performed so badly. The key concern was the claim that the test hogged the CPU for a prolonged period of more than 6 hours. I could not believe this, because I know for a fact how battle-tested the WSO2 ESB is on real battle grounds.
So, rather than remaining merely skeptical, I ran the tests on my own, using the same settings the site published and the same AMI instance they advertised. After four repeated attempts, I could not get the CPU to spin the way the site claims.
So I did a more detailed analysis, to separate fact from fiction.
The article is artfully crafted: “The ESB was stuck for over 6.5 hours with 100% CPU utilization. Since we had selected this ESB for the final run initially, we re-attempted the analysis and the retry completed the test.” If the CPU hog is the highlight, how did the retry ever complete? And if it is real, how could I fail to reproduce the scenario even once in four attempts? Science means repeatable tests. In this case, the same procedure, setup, and artifacts cannot reproduce the claims; for sure, it does not look like science. The CPU problem was highlighted so heavily that the WSO2 ESB appears at the top of the “problem report”, yet it was the last test run. That can hardly be random coincidence: the headlines go to the leaders in the space, or else nobody reads the paper.
If anyone wants the facts, I invite them to run the AMI and see for themselves, as I did. It is also a fact that most of those who scan the headlines hardly ever verify the claims on their own, no matter how much information is provided on how to verify them; such is our behavior with news – we listen, but rarely verify. That again is a crafty tactic: “behold, this fails”, and in case anyone tries, “ah well, it happened once for me, maybe not for you”. The part that is never admitted is, “maybe I made a mistake in my first run; I cannot reproduce it, but I will report it anyway”.
Warnings and metric printouts are also counted as exceptions. It cannot be a fact that the Java-expert authors are unable to tell warnings apart from exceptions, so it has to be fiction.
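To make the distinction concrete, here is a minimal sketch of my own (not from their report, and the log line format it assumes is hypothetical) showing how a results parser can avoid counting WARN lines as errors:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Minimal sketch: classify a log4j-style line by its level, so that
    // warnings and metric printouts are never counted as exceptions.
    public class LogLevelCheck {
        // Assumes lines such as "2012-08-20 10:15:01 [WARN] ..." (hypothetical format).
        private static final Pattern LEVEL = Pattern.compile("\\[(WARN|ERROR|FATAL)\\]");

        static boolean isRealError(String line) {
            Matcher m = LEVEL.matcher(line);
            // Only ERROR and FATAL entries count as errors; WARN does not.
            return m.find() && !"WARN".equals(m.group(1));
        }

        public static void main(String[] args) {
            System.out.println(isRealError("[WARN] connection pool is nearly full"));    // false
            System.out.println(isRealError("[ERROR] java.io.IOException: read failed")); // true
        }
    }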
The settings used for the WSO2 ESB run are not the ideal ones. According to the authors, that is because they were not sure what the ideal settings should be. However, they clearly know of the WSO2 ESB's pass-through transport, because they ran the tests against it, and it cannot be fiction that they never saw WSO2's publication on pass-through transport performance. They have no reason to believe our numbers, but it would be fiction to say they did not know of that article. Behold its configuration section: the tuning parameters are clearly given. It would have to be fiction that people who run a project claiming to be an “open performance framework” could not use a simple tool like Google to find these, and instead spent so much of their own time figuring out numbers that look best for the WSO2 ESB. The fact is, when I ran the WSO2 ESB with the ideal settings from the WSO2 article, some numbers came out much better than those of the so-called “collaborators” on their numbers sheet. And it is simple to reproduce: just run the tests yourself with <ESB_HOME>/lib/core/WEB-INF/classes/nhttp.properties adhering to the principle from the article above, “Note: It’s recommended that snd_io_threads and lst_io_threads values are set to the CPU core count of the server ESB is running on.” A sketch of such a configuration follows below.
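For illustration, here is a minimal sketch of what that part of nhttp.properties would look like under the recommendation. The value 8 is an assumption for an 8-core test machine; substitute the actual core count of your server:

    # <ESB_HOME>/lib/core/WEB-INF/classes/nhttp.properties
    # Per the recommendation above, both I/O thread pools are set to the
    # CPU core count of the host running the ESB (assumed 8 cores here).
    snd_io_threads=8
    lst_io_threads=8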
So, it would be fiction to believe that the authors, who are so familiar with WSO2 technologies, having worked for WSO2 themselves for a long time, did not know where to look for the ideal settings; indeed, they should have known them by heart, unless they are very forgetful. It is not fiction, however, to believe that given their in-depth knowledge of the product, they deliberately avoided the ideal settings for the WSO2 ESB in order to make the WSO2 numbers on the sheet look bad.
Beyond the fiction, there are many gaps in the way this performance framework is designed and executed. It is not my intention here to catalogue the technical issues in the framework; they should have known better, if they claim to be the “open framework”. But the sheer number of mistakes in the way the averages are computed and the graphs are drawn shows how unprofessional this work is. To give one example, I see a large number of errors for a non-WSO2 ESB, and yet its final graph looks much better than that of the WSO2 ESB, which has zero errors. Nice graphics, rather than correct graphs based on facts; maybe that is part of the “collaboration” too.
Is esbperformance.org about facts, or is it fiction? I will let you decide on your own.
Performance is a key element of any ESB. If an ESB does not perform, it is not an ESB to start with, even though performance is not the only thing that matters in an ESB. So every ESB vendor out there makes an effort to tune performance, and WSO2 is no exception. We have spent many cycles on the performance of our ESB in the past, we do now, and we will continue to do so in the future. And if others find issues in the WSO2 ESB (or, for that matter, any other WSO2 product), not only our users but even our competitors, we will put our heads down, go all out, and make sure those issues are fixed right and verified. That is what “open” means to WSO2: we will not bar anyone from running anything and reporting any issue against us; we will accept the facts and fix them all. Our bug trackers are open, and so are the mailing lists. If you see any issue, come tell us and we will fix it; we are open source as well as an open community with an open culture. However, we cannot fix fictitious issues that have been fabricated.
Comments
I got myself involved in this discussion because I follow both companies; I am interested in both the WSO2 ESB and the AdroitLogic UltraESB.
To me, both seem to be among the best open source ESBs around, and not only on performance.
As for the figures: it is clear that some mistakes may have been made in configuring the WSO2 ESB. Whether this was done intentionally or not is just speculation, (BIG) BUT...
... one can easily come to the conclusion that it was intentional. The reason for this is obvious: esbperformance.org is run by AdroitLogic people, and they might have a hidden agenda. This was clear to me from the beginning, so I looked at the figures of the tested ESBs with the required scepticism. I look at performance figures published by any company in the same way. As Kristen Johnston said in 3rd Rock From The Sun: "Trust no-one".
IMHO this discussion is going in the wrong direction. Enough has been said on the subject, and it is clear that there is a difference of opinion. If this discussion continues, it might even harm both companies.
So what to do now?
I do not request or order esbperformance.org to rerun the WSO2 ESB test with the mentioned recommendations applied to the configuration, but I do recommend that they do it anyway, just to put an end to the discussion. I expect such professionalism from any company.
Please read the following: http://dushansview.blogspot.com/2012/08/esbperformanceorg-facts-or-fiction-cont_24.html. I have given my thoughts on this there.
I would even go one step further and say that the tests have to be repeated for the four leading ESBs that did not even make it because of incorrect, suboptimal configuration, or else the names of those four ESBs have to be left out, since it is defamation to claim something that is not true.
Also take a look at http://www.dankulp.com/blog/2012/08/talend-esb-performance-tuning/. The two-man esbperformance.org team personally contacted Dan Kulp from Talend to collaborate and get the proper configuration for the Talend ESB. It was outright defamatory that the WSO2 team was not consulted, and that invalid, derogatory comments about the WSO2 ESB were published. Perhaps the ex-WSO2ers think they are still the WSO2 ESB experts, even though they left many years ago. The product has evolved over time, and large-scale deployments such as eBay, which runs the WSO2 ESB, were performance-tuned by other ESB experts at WSO2.