I had occasion to create a very large but narrowly banded matrix, on the order of 100k by 100k. I found that element-wise item setting using At() failed due to an internal step where 'rows * columns' is calculated; since both are of type int, the product caused an integer overflow. I fixed this for my own use by modifying the source to cast the row and column integers to type long prior to the multiplication, but perhaps it would be worthwhile to include something in the code to handle such cases. I'm sure I'm in the extreme minority using such large matrices (for signal analysis, FWIW), but I'm probably not the only one.
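To illustrate the failure mode: a minimal sketch in Java (the library here is C#, but 32-bit int overflow behaves the same way in both). With 100,000 rows and 100,000 columns, the product 10^10 exceeds Integer.MAX_VALUE (about 2.1 * 10^9) and wraps around; casting one operand to long before the multiply is the fix described above.

```java
public class IndexOverflowDemo {
    public static void main(String[] args) {
        int rows = 100_000;
        int cols = 100_000;

        // int * int is evaluated in 32-bit arithmetic: 10^10 wraps around.
        int broken = rows * cols;

        // Casting one operand promotes the multiplication to 64-bit.
        long fixed = (long) rows * cols;

        System.out.println("int product:  " + broken); // wrapped, wrong value
        System.out.println("long product: " + fixed);  // 10000000000
    }
}
```

Note that casting the *result* (`(long)(rows * cols)`) would not help: the overflow has already happened inside the 32-bit multiply, so the cast must be applied to an operand.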
Comments: (Not sure why Codeplex keeps reopening these issues all the time)