Channel: Math.NET Numerics
Viewing all 971 articles

Source code checked in, #cf3cae4377e9


Source code checked in, #1dc85bf6fbb9

MCMC: Minor namespace/naming/doc fixes

Created Issue: Interpolation data limits? [5725]

I'm using the following method:

IInterpolation ii = MathNet.Numerics.Interpolation.Interpolate.RationalWithoutPoles(points, values);

It seems to work when I'm passing in lists of fewer than about 700 items, but fails when I'm using lists of more than about 800 items. I'm trying to interpolate an elevation when given a volume.

The attached text file has the raw data (861 items). Ignore the first two columns; the third is the elevation and the fourth is the volume. When using this data I'm getting either negative values or values that exceed the maximum elevation in the set. If I reduce the same set by not including the first 200 items, it provides what appears to be a reasonable answer.

Thanks

Commented Issue: Interpolation data limits? [5725]

Comments: Thanks. I just had a go at the data provided, but it seems to work fine for me. I used the following snippet, in LINQPad (hence the .Dump calls):

```
var samples = File.ReadAllLines(@"E:\Downloads\elev2vol.txt")
    .Select(l => l.Split(',').Skip(2).Select(s => Double.Parse(s.Trim())).ToArray())
    .ToArray();
var x = samples.Select(z => z[0]).ToArray();
var y = samples.Select(z => z[1]).ToArray();
x.Length.Dump();

var ii = Interpolate.RationalWithoutPoles(x, y);

double e1 = 0, e2 = 0, e3 = 0;
int count = 0;
for (int i = 0; i < x.Length - 1; i++)
{
    count++;
    e1 += Math.Abs(ii.Interpolate(x[i]) - y[i]);
    e2 += Math.Abs(ii.Interpolate((x[i] + x[i+1])/2) - (y[i] + y[i+1])/2);
    e3 += Math.Abs(ii.Interpolate((4.0*x[i] + x[i+1])/5) - (4.0*y[i] + y[i+1])/5);
}
(e1 / count).Dump();
(e2 / count).Dump();
(e3 / count).Dump();
```

returning:

861 (# samples)
0 (mean abs error at sample points; zero expected since this is an interpolation)
0.253021407617856 (mean abs error at the center between samples, if it were linear between points)
0.156471265231335 (mean abs error at 1/5 between samples, if it were linear between points)

Could you provide some concrete example values where the interpolation fails?

Commented Issue: Interpolation data limits? [5725]

Comments: Some more notes, related more to your problem than to the reported issue itself: note that an interpolation to a rational function of order >800 is somewhat brutal (school-book approaches like a Neville polynomial would utterly fail here, even more so as the samples are equidistant); you may want to consider a spline interpolation instead:

```
var ii = new CubicSplineInterpolation(x, y);
```

Looking at the actual curve, it seems a rational curve of order 3-4 should represent the data quite well. Do you really need an interpolation, or would a regression (see http://christoph.ruegg.name/blog/2012/9/9/linear-regression-mathnet-numerics.html), e.g. to a polynomial function of order 3-6, do the job as well?

Alternatively, interpolating over a small subset of e.g. 5 samples (of the provided 800) could work as well, e.g. like this:

```
var n = 5;
var chebyshevNodes = Enumerable.Range(1, n)
    .Select(i => Math.Cos((2*i - 1)*Math.PI/(2*n)))
    .Select(z => (int)Math.Round((z + 1)*(x.Length - 1)/2))
    .Concat(new[] { 0, x.Length - 1 })
    .OrderBy(k => k);

var ii = Interpolate.RationalWithoutPoles(
    chebyshevNodes.Select(k => x[k]).ToArray(),
    chebyshevNodes.Select(k => y[k]).ToArray());
```

Commented Issue: Interpolation data limits? [5725]

Comments: Btw, unrelated to the reported issue itself, a cubic spline interpolation might work quite well here:

```
var ii = new CubicSplineInterpolation(x, y);
```

New Post: Non-Mathematician needs help getting started! (Please)

Look. I'm not a mathematician or a statistician. I'm just a software developer with a job to do and most of the documentation I find regarding this project is Greek to me (no offense to the Greeks) so I'm hoping one of you more learned people wouldn't mind taking a few minutes to guide me in the proper direction.
My task is to predict the price of a sale of an item at auction based on historical auction data. I have a List of Sale objects that have a SaleDate and SaleAmount where SaleDate is fairly random (as opposed to being in a more usable pattern such as every weekday or once per week or once per month). Then, I need to show the overall curve/line graphically in a Silverlight/RT/WinPhone UI.
After some research, I thought that weighted least squares might be a good fit (I'm currently using unweighted least squares and, after 4 years, it's no longer doing the job well enough). I found this project and am quite impressed with how thorough it appears to be. I would like to use it, if possible. But I don't know where to start....
So, would someone be kind enough to point me in the right direction?
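For what it's worth, the weighted variant differs from ordinary least squares only in that each sample contributes through its weight, so recent sales can be made to count more than old ones. A minimal sketch of a weighted straight-line fit in plain C# via the weighted normal equations (no Math.NET types; the `WeightedFit` class name is made up for illustration):

```csharp
using System;

static class WeightedFit
{
    // Fits y = intercept + slope * x, weighting each sample by w[i].
    // Solves the 2x2 weighted normal equations directly.
    public static (double Intercept, double Slope) Line(double[] x, double[] y, double[] w)
    {
        double sw = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < x.Length; i++)
        {
            sw  += w[i];
            sx  += w[i] * x[i];
            sy  += w[i] * y[i];
            sxx += w[i] * x[i] * x[i];
            sxy += w[i] * x[i] * y[i];
        }
        double slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx);
        double intercept = (sy - slope * sx) / sw;
        return (intercept, slope);
    }
}
```

For the auction scenario, one might set w[i] from the age of each sale (e.g. an exponential decay), so the fitted line tracks recent prices more closely.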

New Post: using discrete fourier transform to observe pure sinusoidal signal

hello,

I want to use the discrete Fourier transform from C#, so Math.NET seemed like a perfect candidate for the job.

During a 5-second capture, I can record about 40-57 values (sometimes more, sometimes fewer).

I know the captured data is expected to be a sinusoid. The period is 1 second, so there should be about 5 periods in the captured data. I want to give the measurement a quality score (how far it is from the model):
0 = bad.
1 = perfect score

My problem is that the acquisition is slow/unreliable, and during the same time frame (5 s) I get more or fewer samples between runs.

I believe that in order to give a "quality score" to such a feed I should focus on:
  • making sure the maximum values are located in the correct places in the Complex[] result (= the observed frequency is right)
  • making sure the ratio between the max and the biggest other value is decent (= the signal-to-noise ratio is OK)
However, the use of Complex[] samples in Math.NET confuses me, FourierOptions confuses me too, I'm unsure whether I should use the DFT at all in this you-never-get-the-same-number-of-samples situation, and I haven't done any math in over 10 years. I need help :)

My questions
  • Is the approach correct? Does it make sense?
  • Is using Complex(mydata, 0) OK for each input point? (ignoring the imaginary value)
  • Am I right to ignore phase and use only magnitude in the Complex[] result?
thanks

alex

using System;
using System.Linq;
using System.Numerics;

class Program
{
    static void Main(string[] args)
    {
        double[] scores = new double[10];
        scores[0] = fft_test(54, 5.00, null);
        scores[1] = fft_test(47, 5.00, null);
        scores[2] = fft_test(41, 5.00, new[] { 31 });
        scores[3] = fft_test(43, 5.00, new[] { 25 });
        scores[4] = fft_test(55, 5.00, new[] { 12, 22 });
        scores[5] = fft_test(43, 5.00, new[] { 12, 22 });
        scores[6] = fft_test(54, 5.00, new[] { 12, 35, 47 });
        scores[7] = fft_test(47, 5.00, new[] { 12, 35, 47 });

    }

    static double fft_test(int numPoints, double numPeriods, int[] stupidIndexes)
    {
        Random rnd = new Random();
        double[] x_values = new double[numPoints];
        double[] y_values = new double[numPoints];
        //double[] magnitudes = new double[numPoints];
        double start = 0;
        double end = numPeriods * 2 * Math.PI;
        Complex[] dftme = new Complex[numPoints];
        for (int i = 0; i < numPoints ; i++)
        {
            x_values[i] = (double)(i * (end - start)) / numPoints; 
            y_values[i] = (stupidIndexes != null && stupidIndexes.Contains(i) ?  Math.Sin(rnd.NextDouble() *2 * Math.PI)  :   Math.Sin(x_values[i]));
            dftme[i] = new Complex(y_values[i], 0);
        }

        //MathNet.Numerics.IntegralTransforms.Transform.FourierForward(dftme, MathNet.Numerics.IntegralTransforms.FourierOptions.NoScaling); // is this the right function? or do I have to instantiate DiscreteFourierTransform?
        MathNet.Numerics.IntegralTransforms.Algorithms.DiscreteFourierTransform dft = new MathNet.Numerics.IntegralTransforms.Algorithms.DiscreteFourierTransform();
        dft.BluesteinForward(dftme, MathNet.Numerics.IntegralTransforms.FourierOptions.NoScaling); // what is this FourierOptions thing? I just want the "usual" one

        string res = "";
        for (int i = 0; i < numPoints; i++) { res += y_values[i] + ";" + dftme[i].Real + ";" + dftme[i].Imaginary + ";" + dftme[i].Magnitude + ";" + dftme[i].Phase + "|"; }
        //for (int i = 0; i < numPoints; i++) { magnitudes[i] = dftme[i].Magnitude; }
        //Array.Sort(magnitudes);
        //Array.Reverse(magnitudes);
        int[] topIndices = getPeakIndices(dftme, 5);

        int[] expectedIndices = { (int)numPeriods, (int)(numPoints - numPeriods) };

        double freqScore = (expectedIndices.Contains(topIndices[0]) && expectedIndices.Contains(topIndices[1]) ? 1.0 : 0.0);
        double signalToNoiseScore = 1 - (dftme[topIndices[3]].Magnitude / dftme[topIndices[0]].Magnitude);


        return freqScore * signalToNoiseScore;

    }

    static int[] getPeakIndices(Complex[] dtfme, int numPoints)
    {
        int[] results = new int[numPoints];
        double currentMax = double.MinValue;
        double previousMax = double.MaxValue;
        int currentIndex = -1;
        // this O(n²) search smells like a bug; it is too late at night for clear thinking
        for (int k = 0; k < numPoints; k++)
        {
            for (int i = 0; i < dtfme.Length; i++)
            {
                if ( currentMax < dtfme[i].Magnitude && dtfme[i].Magnitude < previousMax)
                {
                    currentMax = dtfme[i].Magnitude;
                    currentIndex = i;
                }
            }
            results[k] = currentIndex;
            previousMax = currentMax;
            currentMax = double.MinValue;
        }
        return results;
    }



}





New Post: How to use in Visual Studio 2008?

We have a .NET project that is written under Visual Studio 2008. We also needed numerical processing, and we chose Math.NET Numerics as it has all the functionality we need.

But after adding a reference to the redistributable binary MathNet.Numerics.dll (portable version) and compiling the project we get the following error:

Error:
Could not load referenced assembly "...\MathNet.Numerics.dll". Caught a BadImageFormatException saying "Could not load file or assembly '...\MathNet.Numerics.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.".

So we decided to re-compile the source code using VS 2008, but that gives numerous syntax errors due to the newer C# syntax. They cannot be fixed manually, as there are thousands of errors.

We also don't have VS 2012 to re-compile the code and re-target to .NET 2.0 (or .NET 3.5), if this works at all.

Therefore, my question is: how can I get redistributable Math.NET binaries for .NET 2.0 (or at least .NET 3.5), or how can I get source code that is .NET 3.5 compliant?

Thanks for any help in advance.

New Post: using discrete fourier transform to observe pure sinusoidal signal

  • Is the approach correct? Does it make sense?
The approach makes sense, but I think the implementation is flawed (your expectedIndices definition and the following code). Make sure you understand where the frequency peaks should be located in your FFT output (see your next question). Also consider spectral leakage.
  • Is using Complex(mydata, 0) OK for each input point? (ignoring the imaginary value)
Yes, but since you're doing a complex DFT on real data, the output frequencies are symmetrical around n/2, so after your FFT you should only consider the first n/2 values for further processing.
  • Am I right to ignore phase and use only magnitude in the Complex[] result?
Yes.
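That symmetry can be checked on a toy example: for a real input of length n, bin k and bin n-k of the DFT have equal magnitude, so only the first n/2 bins carry independent information. A minimal sketch using a direct O(n²) DFT in plain C# (deliberately not Math.NET's FFT, just to make the symmetry visible; the `NaiveDft` name is made up for illustration):

```csharp
using System;
using System.Numerics;

static class NaiveDft
{
    // Direct O(n^2) DFT; returns one Complex bin per frequency 0..n-1.
    public static Complex[] Forward(double[] signal)
    {
        int n = signal.Length;
        var bins = new Complex[n];
        for (int k = 0; k < n; k++)
        {
            Complex sum = Complex.Zero;
            for (int t = 0; t < n; t++)
                sum += signal[t] * Complex.Exp(new Complex(0, -2 * Math.PI * k * t / n));
            bins[k] = sum;
        }
        return bins;
    }
}
```

Feeding it 16 samples of sin(2π·3t/16) puts the peak at bin 3 (and its mirror at bin 13), which is where expectedIndices should point for a 3-period capture of 16 samples.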


Additionally, your getPeakIndices method isn't correct. Why not just use Array.Sort:
class MagnitudeComparer : IComparer<Complex>
{
    public int Compare(Complex a, Complex b)
    {
        return b.Magnitude.CompareTo(a.Magnitude);
    }
}

static int[] getPeakIndices(Complex[] dtfme, int numPoints)
{
    var indices = Enumerable.Range(0, dtfme.Length).ToArray();

    // Considering only the first half of the FFT data.
    Array.Sort(dtfme, indices, 0, dtfme.Length / 2, new MagnitudeComparer());

    return indices.Take(numPoints).ToArray();
}

New Post: using discrete fourier transform to observe pure sinusoidal signal

Here's a version of getPeakIndices which will work. The above version will also sort the complex values, so the indices won't point to the right locations anymore.
static int[] getPeakIndices(Complex[] dtfme, int numPoints)
{
    int n = dtfme.Length / 2;

    var magnitudes = dtfme.Take(n).Select(z => z.Magnitude).ToArray();
    var indices = Enumerable.Range(0, n).ToArray();

    Array.Sort(magnitudes, indices);

    return indices.Reverse().Take(numPoints).ToArray();
}
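The trick this version relies on is that Array.Sort(keys, items) reorders the items array in lock-step with the keys, so sorting a copy of the magnitudes against an index array keeps the index-to-bin mapping intact while leaving the spectrum untouched. A self-contained sketch of the same approach, runnable outside the thread's program:

```csharp
using System;
using System.Linq;
using System.Numerics;

static class Peaks
{
    // Returns the indices of the numPeaks largest-magnitude bins among the
    // first half of the spectrum (the independent half for a real signal).
    public static int[] GetPeakIndices(Complex[] bins, int numPeaks)
    {
        int n = bins.Length / 2;
        var magnitudes = bins.Take(n).Select(z => z.Magnitude).ToArray();
        var indices = Enumerable.Range(0, n).ToArray();

        // Array.Sort reorders 'indices' in lock-step with 'magnitudes',
        // so each surviving entry still names its original bin.
        Array.Sort(magnitudes, indices);

        return indices.Reverse().Take(numPeaks).ToArray();
    }
}
```

Since only a copy of the magnitudes is sorted, the caller's Complex[] is left in its original order, avoiding the pitfall pointed out above.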

New Post: Quality of XML comments.

Hi, I'm new to this library. I have my own SpatialVector class but I need to rotate around an axis so before beginning to write my own matrix type, I thought I'd look at MathNet and possibly replacing my vector with one from this lib, if its better.

The problem I have hit immediately is that I can't even instantiate one. The XML documentation/comments on SparseVector (sparse? I guess this is the right choice for vectors in Euclidean space) are useless.

You can look at this comment as critical feedback. In my own APIs, and from reading and studying framework design, a smooth and gradual on-ramp is the key to adoption and success. This means massive user-empathy, choosing self-explanatory variable names and going over the top with XML comments. The result is falling into the pit of success, i.e. make it hard to do the wrong thing.

You might think I'm being harsh, considering the community effort put in for no financial reward, but this is my point. If you put all that effort in, surely you want people to take to it immediately.

Most internet people have no patience and just move on. They're in "trying to get something done" mode. They don't care about MathNet, unless it saves the day. They didn't pay for it, so they have no incentive to invest time learning it.

So, specifically, the constructor overloads on Double.SparseVector have these signature and comment combinations:

IList<double> array: "The array to create this vector from."

It's not an array and what should it contain? i, j, k unit vectors? How many? Can I make 11-dimension vectors?

int size: "the size of the vector."

Like, the magnitude? Size is meaningless. Is this the dimensions it has?

value: "the value to set each element to."

So maybe the size is the dimension count, but how does that work when there's only one value variable?

What about some common scenario constructors or subclasses, most people will be using 2D and 3D vectors.

~Luke

New Post: How to hook up to a native provider

Hi,

I am trying to hook up to the free ACML provider, but can't find the Intel Fortran v4.4 build for download on the AMD page. There's one GNU version built for 64-bit Windows. Do you have a DLL for a box with a 32-bit Intel CPU?

Are you planning to wrap free native providers?

Thanks,
Candy

New Post: How to hook up to a native provider

Hi,

AMD no longer provides a 32-bit version of ACML, so we dropped the ACML wrapper.
Are you planning to wrap free native providers?
There are issues building OpenBLAS and ATLAS on Windows. ATLAS is really close, but there is a problem with building the full LAPACK library. Once it can, we'll wrap it. We could build LAPACK separately, but I'd prefer the one-line ATLAS build (lower support "cost"). Anyone is free to pursue that route.

Regards,
Marcus

New Post: EigenValue Decomposition - method failing

An update: I added EVD to the native wrapper in my fork, but the managed code needs to be updated to work with a 1-D array instead of a jagged array. I will finish it when I get some free time.

New Post: Use with Visual Studio 2008?

We officially do not support .NET below 4.0 right now, but it should be possible. In fact, most of the missing pieces are already there for the portable library build (where we do not have System.Numerics and full TPL either).

I myself have no time (nor personal interest, to be honest, as I don't see the point of using .NET 3.5 in 2013) to work on a special build for .NET 3.5, but if someone finds the time to make it work in a simple way, I'd be more than happy to integrate it into mainline and start producing official .NET 3.5 builds in the future.

New Post: Quality of XML comments.

No worries, you're preaching to the choir. We're trying to follow the very same API design approach (which is actually unusual for math libraries), although it seems there is still quite some room for improvement. Such feedback is therefore very welcome!

Concerning Double.SparseVector: I agree, the XML comments on the constructors are not really helping, and the name "size" is unfortunate. We should work on that.

Then there also seems to be a context mismatch here: It seems to me you're looking for spatial/geometric/mechanical vectors and (transformation) matrices, while our matrix and vector types are really about linear algebra (that's why they are in the LinearAlgebra namespace). The usage scenarios, expectations and even the "language" in these two contexts are quite different, but also the way they would be implemented. Maybe we should point this out more clearly. While I'd love to go into the spatial world in the future (think "Math.NET Spatial"), our current matrix and vector types are really focusing on the linear algebra scenario and are not well suited for geometry (they don't even provide a cross-product, there's no notion of affine transformations or a quaternion, they would perform badly, etc.)

As such, in the linear algebra context:
  • Vector and matrix dimensions are arbitrary, so providing an array or list would be an obvious and expected choice for construction.
  • Vectors of size 2 or 3 would not be especially dominant. Nevertheless, we could still consider adding a constructor accepting a params array for simpler usage.
  • I'd consider any usage of sparse vectors with a dimension less than 50 as a bug (dense vectors should be used instead).
  • I'd expect the API to be optimized for common linear algebra scenarios, including matrix decompositions, working with linear systems and generic least squares problems.
While on the other hand, in the spatial context:
  • Dimensions should be hard-coded to 2 or 3, maybe also 4. No arrays would be used at all.
  • All operations would be unfolded and heavily optimized for these dimensions. No loops would be needed.
  • I'd expect explicit x-y-z style constructors, but also e.g. by length and direction, maybe even using angles.
  • I'd expect the API to be optimized for geometry (and maybe mechanics).
  • There could also be some basic support for coordinate systems, mapping, physics or gaming scenarios.
Thanks,
Christoph


New Post: How to hook up to a native provider

It has been a while since I've tried (I don't recall the instructions I followed). I'll try these and see how it goes.

