
Mailing List Archive: Python: Python

numpy (matrix solver) - python vs. matlab

 

 



newsboost at gmail

May 1, 2012, 11:03 PM

Post #26 of 47 (2578 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/02/2012 01:38 AM, Russ P. wrote:
> On May 1, 4:05 pm, Paul Rubin<no.em...@nospam.invalid> wrote:
>> someone<newsbo...@gmail.com> writes:
>>> Actually I know some... I just didn't think so much about, before
>>> writing the question this as I should, I know theres also something
>>> like singular value decomposition that I think can help solve
>>> otherwise illposed problems,
>>
>> You will probably get better advice if you are able to describe what
>> problem (ill-posed or otherwise) you are actually trying to solve. SVD
>> just separates out the orthogonal and scaling parts of the
>> transformation induced by a matrix. Whether that is of any use to you
>> is unclear since you don't say what you're trying to do.
>
> I agree with the first sentence, but I take slight issue with the word
> "just" in the second. The "orthogonal" part of the transformation is
> non-distorting, but the "scaling" part essentially distorts the space.
> At least that's how I think about it. The larger the ratio between the
> largest and smallest singular value, the more distortion there is. SVD
> may or may not be the best choice for the final algorithm, but it is
> useful for visualizing the transformation you are applying. It can
> provide clues about the quality of the selection of independent
> variables, state variables, or inputs.

I would like to hear more! :-)

I would really appreciate it if anyone could post a simple SVD
example and explain what the vectors from the SVD represent geometrically
/ visually, because I don't understand it well enough and I'm sure it's
very important when it comes to solving matrix systems...

--
http://mail.python.org/mailman/listinfo/python-list


russ.paielli at gmail

May 1, 2012, 11:36 PM

Post #27 of 47 (2585 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On May 1, 11:03 pm, someone <newsbo...@gmail.com> wrote:
> On 05/02/2012 01:38 AM, Russ P. wrote:
>
> > On May 1, 4:05 pm, Paul Rubin<no.em...@nospam.invalid>  wrote:
> >> someone<newsbo...@gmail.com>  writes:
> >>> Actually I know some... I just didn't think so much about, before
> >>> writing the question this as I should, I know theres also something
> >>> like singular value decomposition that I think can help solve
> >>> otherwise illposed problems,
>
> >> You will probably get better advice if you are able to describe what
> >> problem (ill-posed or otherwise) you are actually trying to solve.  SVD
> >> just separates out the orthogonal and scaling parts of the
> >> transformation induced by a matrix.  Whether that is of any use to you
> >> is unclear since you don't say what you're trying to do.
>
> > I agree with the first sentence, but I take slight issue with the word
> > "just" in the second. The "orthogonal" part of the transformation is
> > non-distorting, but the "scaling" part essentially distorts the space.
> > At least that's how I think about it. The larger the ratio between the
> > largest and smallest singular value, the more distortion there is. SVD
> > may or may not be the best choice for the final algorithm, but it is
> > useful for visualizing the transformation you are applying. It can
> > provide clues about the quality of the selection of independent
> > variables, state variables, or inputs.
>
> I would like to hear more! :-)
>
> I would really appreciate it if anyone could post a simple SVD
> example and explain what the vectors from the SVD represent geometrically
> / visually, because I don't understand it well enough and I'm sure it's
> very important when it comes to solving matrix systems...

SVD is perhaps the ultimate matrix decomposition and the ultimate tool
for linear analysis. Google it and take a look at the excellent
Wikipedia page on it. I would be wasting my time if I tried to compete
with that.

To really appreciate the SVD, you need some background in linear
algebra. In particular, you need to understand orthogonal
transformations. Think about a standard 3D Cartesian coordinate frame.
A rotation of the coordinate frame is an orthogonal transformation of
coordinates. The original frame and the new frame are both orthogonal.
A vector in one frame is converted to the other frame by multiplying
by an orthogonal matrix. The main feature of an orthogonal matrix is
that its transpose is its inverse (hence the inverse is trivial to
compute).

The SVD can be thought of as factoring any linear transformation into
a rotation, then a scaling, followed by another rotation. The scaling
is represented by the middle matrix of the transformation, which is a
diagonal matrix of the same dimensions as the original matrix. The
singular values can be read off of the diagonal. If any of them are
zero, then the original matrix is singular. If the ratio of the
largest to smallest singular value is large, then the original matrix
is said to be poorly conditioned.
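
To make that concrete in numpy terms (since this is cross-posted to the
Python list), here is a minimal sketch; the 3x3 matrix is just an arbitrary
example, not data from this thread:

    import numpy as np

    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 0.0, 2.0]])

    U, s, Vt = np.linalg.svd(A)        # A == U @ diag(s) @ Vt

    print(s)                           # singular values, largest first
    print(s.max() / s.min())           # condition number (2-norm)
    print(np.allclose(U.T @ U, np.eye(3)))        # U is orthogonal: transpose == inverse
    print(np.allclose(U @ np.diag(s) @ Vt, A))    # rotation * scaling * rotation recovers A

If s.min() were zero the matrix would be singular, and a very large
s.max()/s.min() is exactly the poorly conditioned case described above.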

Standard Cartesian coordinate frames are orthogonal. Imagine an x-y
coordinate frame in which the axes are not orthogonal. Such a
coordinate frame is possible, but they are rarely used. If the axes
are parallel, the coordinate frame will be singular and will basically
reduce to one-dimensional. If the x and y axes are nearly parallel,
the coordinate frame could still be used in theory, but it will be
poorly conditioned. You will need large numbers to represent points
fairly close to the origin, and small deviations will translate into
large changes in coordinate values. That can lead to problems due to
numerical roundoff errors and other kinds of errors.

--Russ P.
--
http://mail.python.org/mailman/listinfo/python-list


jpiitula at ling

May 1, 2012, 11:42 PM

Post #28 of 47 (2582 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

"someone" writes:

> except it would be nice to learn some things for future use (for
> instance understanding SVD more - perhaps someone geometrically can
> explain SVD, that'll be really nice, I hope)...

The Wikipedia article looks promising to me:
<http://en.wikipedia.org/wiki/Singular_value_decomposition>

Lots in it, with annotated links to more, including a song.
--
http://mail.python.org/mailman/listinfo/python-list


no.email at nospam

May 2, 2012, 12:56 AM

Post #29 of 47 (2592 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

someone <newsboost [at] gmail> writes:
>> You will probably get better advice if you are able to describe what
>> problem (ill-posed or otherwise) you are actually trying to solve. SVD
> I don't understand what else I should write. I gave the singular
> matrix and that's it. Nothing more is to say about this problem,

You could write what your application is. What do the numbers in that
matrix actually represent?

> except it would be nice to learn some things for future use (for
> instance understanding SVD more

The Wikipedia article about SVD is pretty good.

> Still, I dont think I completely understand SVD. SVD (at least in
> Matlab) returns 3 matrices, one is a diagonal matrix I think.

Yes. Two diagonal matrices and a unitary matrix. The diagonal matrices
have the singular values which are the magnitudes of the matrix's
eigenvalues. A matrix (for simplicity consider it to have nonzero
determinant) represents a linear transformation in space. Think of a
bunch of vectors as arrows sticking out of the origin in different
directions, and consider what happens to them if you use the matrix to
act on all of them at once. You could break the transformation into 3
steps: 1) scale the axes by suitable amounts, so all the vectors stretch
or shrink depending on their directions. 2) Rotate all the vectors
without changing their lengths, through some combination of angles; 3)
undo the scaling transformation so that the vectors get their original
lengths back. Each of these 3 steps can be described by a matrix and
the 3 matrices are what SVD calculates.

I still don't have any idea whether this info is going to do you any
good. There are some quite knowledgeable numerics folks here (I'm not
one) who can probably be of more help, but frankly your questions don't
make a whole lot of sense.

You might get the book "Numerical Recipes" which is old but instructive.
Maybe someone else here has other recommendations.
--
http://mail.python.org/mailman/listinfo/python-list


no.email at nospam

May 2, 2012, 12:57 AM

Post #30 of 47 (2584 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

"Russ P." <russ.paielli [at] gmail> writes:
> The SVD can be thought of as factoring any linear transformation into
> a rotation, then a scaling, followed by another rotation.

Ah yes, my description was backwards, sorry.
--
http://mail.python.org/mailman/listinfo/python-list


kiuhnm03.4t.yahoo.it at mail

May 2, 2012, 4:03 AM

Post #31 of 47 (2591 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 5/2/2012 8:00, someone wrote:
> Still, I dont think I completely understand SVD. SVD (at least in
> Matlab) returns 3 matrices, one is a diagonal matrix I think. I think I
> would better understand it with geometric examples, if one would be so
> kind to maybe write something about that... I can plot 3D vectors in
> matlab, very easily so maybe I better understand SVD if I hear/read the
> geometric explanation (references to textbook/similar is also appreciated).

Russ's post is a very good starting point. I hope you read it.

Kiuhnm
--
http://mail.python.org/mailman/listinfo/python-list


steve+comp.lang.python at pearwood

May 2, 2012, 4:52 AM

Post #32 of 47 (2586 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On Wed, 02 May 2012 08:00:44 +0200, someone wrote:

> On 05/02/2012 01:05 AM, Paul Rubin wrote:
>> someone<newsboost [at] gmail> writes:
>>> Actually I know some... I just didn't think so much about, before
>>> writing the question this as I should, I know theres also something
>>> like singular value decomposition that I think can help solve
>>> otherwise illposed problems,
>>
>> You will probably get better advice if you are able to describe what
>> problem (ill-posed or otherwise) you are actually trying to solve. SVD
>
> I don't understand what else I should write. I gave the singular matrix
> and that's it.

You can't judge what an acceptable condition number is unless you know
what your data is.

http://mathworld.wolfram.com/ConditionNumber.html
http://en.wikipedia.org/wiki/Condition_number

If your condition number is ten, then you should expect to lose one digit
of accuracy in your solution, over and above whatever loss of accuracy
comes from the numeric algorithm. A condition number of 64 will lose six
bits, or about 1.8 decimal digits, of accuracy.

If your data starts off with only 1 or 2 digits of accuracy, as in your
example, then the result is meaningless -- the accuracy will be 2-2
digits, or 0 -- *no* digits in the answer can be trusted to be accurate.
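
A rough numpy sketch of that rule of thumb (the matrix here is an arbitrary,
nearly-singular example, not the OP's data):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [1.0, 2.001]])       # nearly parallel rows

    cond = np.linalg.cond(A)           # 2-norm condition number, ~1e4 here
    digits_lost = np.log10(cond)       # rule-of-thumb digits of accuracy lost
    print(cond, digits_lost)

    # with data good to only 2-3 significant digits, losing ~4 digits
    # means no digit of the solution can be trusted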



--
Steven
--
http://mail.python.org/mailman/listinfo/python-list


slord at mathworks

May 2, 2012, 7:47 AM

Post #33 of 47 (2586 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

"Russ P." <russ.paielli [at] gmail> wrote in message
news:2275231f-405f-4ee3-966a-40c821b7c0a8 [at] 2g2000yqp
> On May 1, 11:52 am, someone <newsbo...@gmail.com> wrote:
>> On 04/30/2012 03:35 AM, Nasser M. Abbasi wrote:
>>
>> > On 04/29/2012 07:59 PM, someone wrote:
>> > I do not use python much myself, but a quick google showed that pyhton
>> > scipy has API for linalg, so use, which is from the documentation, the
>> > following code example
>>
>> > X = scipy.linalg.solve(A, B)
>>
>> > But you still need to check the cond(). If it is too large, not good.
>> > How large and all that, depends on the problem itself. But the rule of
>> > thumb, the lower the better. Less than 100 can be good in general, but
>> > I
>> > really can't give you a fixed number to use, as I am not an expert in
>> > this subjects, others who know more about it might have better
>> > recommendations.
>>
>> Ok, that's a number...
>>
>> Anyone wants to participate and do I hear something better than "less
>> than 100 can be good in general" ?
>>
>> If I don't hear anything better, the limit is now 100...
>>
>> What's the limit in matlab (on the condition number of the matrices), by
>> the way, before it comes up with a warning ???

I'm not going to answer, and the reason why is that saying "the limit is X"
may lead you to believe that "as long as my condition number is less than X,
I'm safe." That's not the case. The threshold above which MATLAB warns is
the fence that separates the tourists from the edge of the cliff of
singularity -- just because you haven't crossed the fence doesn't mean
you're not in danger of tripping and falling into the ravine.

> The threshold of acceptability really depends on the problem you are
> trying to solve.

I agree with this statement, although you generally don't want it to be too
big. The definition of "too big" is somewhat fluid, though.

> I haven't solved linear equations for a long time,
> but off hand, I would say that a condition number over 10 is
> questionable.

That seems pretty low as a general bound IMO.

> A high condition number suggests that the selection of independent
> variables for the linear function you are trying to fit is not quite
> right. For a poorly conditioned matrix, your modeling function will be
> very sensitive to measurement noise and other sources of error, if
> applicable. If the condition number is 100, then any input on one
> particular axis gets magnified 100 times more than other inputs.
> Unless your inputs are very precise, that is probably not what you
> want.
>
> Or something like that.

Russ, you and the OP (and others) may be interested in one of the books that
Cleve Moler has written and made freely available on the MathWorks website:

http://www.mathworks.com/moler/

The chapter Linear Equations in "Numerical Computing with MATLAB" includes a
section (section 2.9, starting on page 14 if I remember correctly) that
discusses norm and condition number and gives a more formal statement of
what you described. The code included in the book is written in MATLAB, but
even if you don't use MATLAB (since I know this has been cross-posted to
comp.lang.python) there's plenty of good, crunchy mathematics in that
section.

--
Steve Lord
slord [at] mathworks
To contact Technical Support use the Contact Us link on
http://www.mathworks.com

--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 2, 2012, 1:25 PM

Post #34 of 47 (2589 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/02/2012 08:36 AM, Russ P. wrote:
> On May 1, 11:03 pm, someone<newsbo...@gmail.com> wrote:
>> On 05/02/2012 01:38 AM, Russ P. wrote:
..
>>> On May 1, 4:05 pm, Paul Rubin<no.em...@nospam.invalid> wrote:
>> I would really appreciate it if anyone could post a simple SVD
>> example and explain what the vectors from the SVD represent geometrically
>> / visually, because I don't understand it well enough and I'm sure it's
>> very important when it comes to solving matrix systems...
>
> SVD is perhaps the ultimate matrix decomposition and the ultimate tool
> for linear analysis. Google it and take a look at the excellent
> Wikipedia page on it. I would be wasting my time if I tried to compete
> with that.

Ok.

> To really appreciate the SVD, you need some background in linear
> algebra. In particular, you need to understand orthogonal
> transformations. Think about a standard 3D Cartesian coordinate frame.
> A rotation of the coordinate frame is an orthogonal transformation of
> coordinates. The original frame and the new frame are both orthogonal.

Yep.

> A vector in one frame is converted to the other frame by multiplying
> by an orthogonal matrix. The main feature of an orthogonal matrix is
> that its transpose is its inverse (hence the inverse is trivial to
> compute).

As far as I know, you have to replace "orthogonal" with "orthonormal". That
much I at least think I know without even going to Wikipedia first...

> The SVD can be thought of as factoring any linear transformation into
> a rotation, then a scaling, followed by another rotation. The scaling
> is represented by the middle matrix of the transformation, which is a
> diagonal matrix of the same dimensions as the original matrix. The
> singular values can be read off of the diagonal. If any of them are
> zero, then the original matrix is singular. If the ratio of the
> largest to smallest singular value is large, then the original matrix
> is said to be poorly conditioned.

Aah, thank you very much. I can easily recognize some of this...

> Standard Cartesian coordinate frames are orthogonal. Imagine an x-y
> coordinate frame in which the axes are not orthogonal. Such a
> coordinate frame is possible, but they are rarely used. If the axes
> are parallel, the coordinate frame will be singular and will basically
> reduce to one-dimensional. If the x and y axes are nearly parallel,
> the coordinate frame could still be used in theory, but it will be
> poorly conditioned. You will need large numbers to represent points
> fairly close to the origin, and small deviations will translate into
> large changes in coordinate values. That can lead to problems due to
> numerical roundoff errors and other kinds of errors.

Thank you very much for your time. It always helps to get the same
explanation from different people with slightly different ways of
explaining it. Thanks!
--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 2, 2012, 1:26 PM

Post #35 of 47 (2583 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/02/2012 01:03 PM, Kiuhnm wrote:
> On 5/2/2012 8:00, someone wrote:
>> Still, I dont think I completely understand SVD. SVD (at least in
>> Matlab) returns 3 matrices, one is a diagonal matrix I think. I think I
>> would better understand it with geometric examples, if one would be so
>> kind to maybe write something about that... I can plot 3D vectors in
>> matlab, very easily so maybe I better understand SVD if I hear/read the
>> geometric explanation (references to textbook/similar is also
>> appreciated).
>
> Russ's post is a very good starting point. I hope you read it.

Of course I do.


--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 2, 2012, 1:29 PM

Post #36 of 47 (2588 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/02/2012 01:52 PM, Steven D'Aprano wrote:
> On Wed, 02 May 2012 08:00:44 +0200, someone wrote:
>
>> On 05/02/2012 01:05 AM, Paul Rubin wrote:
>>> someone<newsboost [at] gmail> writes:
>>>> Actually I know some... I just didn't think so much about, before
>>>> writing the question this as I should, I know theres also something
>>>> like singular value decomposition that I think can help solve
>>>> otherwise illposed problems,
>>>
>>> You will probably get better advice if you are able to describe what
>>> problem (ill-posed or otherwise) you are actually trying to solve. SVD
>>
>> I don't understand what else I should write. I gave the singular matrix
>> and that's it.
>
> You can't judge what an acceptable condition number is unless you know
> what your data is.
>
> http://mathworld.wolfram.com/ConditionNumber.html
> http://en.wikipedia.org/wiki/Condition_number
>
> If your condition number is ten, then you should expect to lose one digit
> of accuracy in your solution, over and above whatever loss of accuracy
> comes from the numeric algorithm. A condition number of 64 will lose six
> bits, or about 1.8 decimal digits, of accuracy.
>
> If your data starts off with only 1 or 2 digits of accuracy, as in your
> example, then the result is meaningless -- the accuracy will be 2-2
> digits, or 0 -- *no* digits in the answer can be trusted to be accurate.

I just solved a FEM eigenvalue problem where the condition number of the
mass and stiffness matrices was something like 1e6... Result looked good
to me... So I don't understand what you're saying about 10 = 1 or 2
digits. I think my problem was accurate enough, though I don't know what
error with 1e6 in condition number, I should expect. How did you arrive
at 1 or 2 digits for cond(A)=10, if I may ask ?
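
(For the Python side of this cross-post: that computation is a generalized
symmetric eigenproblem, K v = w^2 M v. A minimal scipy sketch, with made-up
2x2 placeholder matrices standing in for my real stiffness and mass matrices,
would look roughly like this:)

    import numpy as np
    from scipy.linalg import eigh

    K = np.array([[ 400.0, -200.0],      # placeholder stiffness matrix
                  [-200.0,  200.0]])
    M = np.array([[2.0, 0.0],            # placeholder mass matrix
                  [0.0, 1.0]])

    w2, modes = eigh(K, M)               # solves K v = w2 * M v
    freqs_hz = np.sqrt(w2) / (2.0 * np.pi)

    print(np.linalg.cond(K), np.linalg.cond(M))   # the condition numbers in question
    print(freqs_hz)                               # natural frequencies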



--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 2, 2012, 1:32 PM

Post #37 of 47 (2595 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/02/2012 04:47 PM, Steven_Lord wrote:
>
> Russ, you and the OP (and others) may be interested in one of the books
> that Cleve Moler has written and made freely available on the MathWorks
> website:
>
> http://www.mathworks.com/moler/
>
> The chapter Linear Equations in "Numerical Computing with MATLAB"
> includes a section (section 2.9, starting on page 14 if I remember
> correctly) that discusses norm and condition number and gives a more
> formal statement of what you described. The code included in the book is
> written in MATLAB, but even if you don't use MATLAB (since I know this
> has been cross-posted to comp.lang.python) there's plenty of good,
> crunchy mathematics in that section.

I use Matlab more than Python. I just want to learn Python, which seems
extremely powerful and easy, but I'm still a Python beginner.

Thank you very much for the reference, Mr. Lord. I'll look closely at
that Moler-book, in about 1/2 hour or so.... Looking forward to learning
more about this...
--
http://mail.python.org/mailman/listinfo/python-list


russ.paielli at gmail

May 2, 2012, 2:45 PM

Post #38 of 47 (2583 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On May 2, 1:29 pm, someone <newsbo...@gmail.com> wrote:

> > If your data starts off with only 1 or 2 digits of accuracy, as in your
> > example, then the result is meaningless -- the accuracy will be 2-2
> > digits, or 0 -- *no* digits in the answer can be trusted to be accurate.
>
> I just solved a FEM eigenvalue problem where the condition number of the
> mass and stiffness matrices was something like 1e6... Result looked good
> to me... So I don't understand what you're saying about 10 = 1 or 2
> digits. I think my problem was accurate enough, though I don't know what
> error with 1e6 in condition number, I should expect. How did you arrive
> at 1 or 2 digits for cond(A)=10, if I may ask ?

As Steven pointed out earlier, it all depends on the precision you are
dealing with. If you are just doing pure mathematical or numerical
work with no real-world measurement error, then a condition number of
1e6 may be fine. But you had better be using "double precision" (64-
bit) floating point numbers (which are the default in Python, of
course). Those have approximately 15-16 significant decimal digits of
precision, so you are in good shape. Single-precision floats only have 6 or 7 digits of
precision, so you'd be in trouble there.
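
A quick numpy sketch of that difference, using machine epsilon (nothing here
is specific to the OP's problem; the 1e6 is just the condition number under
discussion):

    import numpy as np

    print(np.finfo(np.float64).eps)    # ~2.2e-16 -> roughly 15-16 decimal digits
    print(np.finfo(np.float32).eps)    # ~1.2e-07 -> roughly 6-7 decimal digits

    cond = 1e6
    # rule of thumb: usable digits ~ machine digits - log10(cond)
    print(16 - np.log10(cond))         # ~10 digits left in double precision
    print(7 - np.log10(cond))          # ~1 digit left in single precision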

For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable.
--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 3, 2012, 10:30 AM

Post #39 of 47 (2586 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/02/2012 11:45 PM, Russ P. wrote:
> On May 2, 1:29 pm, someone<newsbo...@gmail.com> wrote:
>
>>> If your data starts off with only 1 or 2 digits of accuracy, as in your
>>> example, then the result is meaningless -- the accuracy will be 2-2
>>> digits, or 0 -- *no* digits in the answer can be trusted to be accurate.
>>
>> I just solved a FEM eigenvalue problem where the condition number of the
>> mass and stiffness matrices was something like 1e6... Result looked good
>> to me... So I don't understand what you're saying about 10 = 1 or 2
>> digits. I think my problem was accurate enough, though I don't know what
>> error with 1e6 in condition number, I should expect. How did you arrive
>> at 1 or 2 digits for cond(A)=10, if I may ask ?
>
> As Steven pointed out earlier, it all depends on the precision you are
> dealing with. If you are just doing pure mathematical or numerical
> work with no real-world measurement error, then a condition number of
> 1e6 may be fine. But you had better be using "double precision" (64-
> bit) floating point numbers (which are the default in Python, of
> course). Those have approximately 15-16 significant decimal digits of precision, so you are
> in good shape. Single-precision floats only have 6 or 7 digits of
> precision, so you'd be in trouble there.
>
> For any practical engineering or scientific work, I'd say that a
> condition number of 1e6 is very likely to be completely unacceptable.

So how do you explain that the natural frequencies from FEM (with
condition number ~1e6) generally correlates really good with real
measurements (within approx. 5%), at least for the first 3-4 natural
frequencies?

I would say that the problem lies with the highest natural frequencies,
they for sure cannot be verified - there's too little energy in them.
But the lowest frequencies (the most important ones) are good, I think -
even for high cond number.


--
http://mail.python.org/mailman/listinfo/python-list


russ.paielli at gmail

May 3, 2012, 10:55 AM

Post #40 of 47 (2582 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On May 3, 10:30 am, someone <newsbo...@gmail.com> wrote:
> On 05/02/2012 11:45 PM, Russ P. wrote:
>
>
>
> > On May 2, 1:29 pm, someone<newsbo...@gmail.com>  wrote:
>
> >>> If your data starts off with only 1 or 2 digits of accuracy, as in your
> >>> example, then the result is meaningless -- the accuracy will be 2-2
> >>> digits, or 0 -- *no* digits in the answer can be trusted to be accurate.
>
> >> I just solved a FEM eigenvalue problem where the condition number of the
> >> mass and stiffness matrices was something like 1e6... Result looked good
> >> to me... So I don't understand what you're saying about 10 = 1 or 2
> >> digits. I think my problem was accurate enough, though I don't know what
> >> error with 1e6 in condition number, I should expect. How did you arrive
> >> at 1 or 2 digits for cond(A)=10, if I may ask ?
>
> > As Steven pointed out earlier, it all depends on the precision you are
> > dealing with. If you are just doing pure mathematical or numerical
> > work with no real-world measurement error, then a condition number of
> > 1e6 may be fine. But you had better be using "double precision" (64-
> > bit) floating point numbers (which are the default in Python, of
> > course). Those have approximately 15-16 significant decimal digits of precision, so you are
> > in good shape. Single-precision floats only have 6 or 7 digits of
> > precision, so you'd be in trouble there.
>
> > For any practical engineering or scientific work, I'd say that a
> > condition number of 1e6 is very likely to be completely unacceptable.
>
> So how do you explain that the natural frequencies from FEM (with
> condition number ~1e6) generally correlates really good with real
> measurements (within approx. 5%), at least for the first 3-4 natural
> frequencies?
>
> I would say that the problem lies with the highest natural frequencies,
> they for sure cannot be verified - there's too little energy in them.
> But the lowest frequencies (the most important ones) are good, I think -
> even for high cond number.

Did you mention earlier what "FEM" stands for? If so, I missed it. Is
it finite-element modeling? Whatever the case, note that I said, "If
you are just doing pure mathematical or numerical work with no real-
world measurement error, then a condition number of
1e6 may be fine." I forgot much more than I know about finite-element
modeling, but isn't it a purely numerical method of analysis? If that
is the case, then my comment above is relevant.

By the way, I didn't mean to patronize you with my earlier explanation
of orthogonal transformations. They are fundamental to understanding
the SVD, and I thought it might be interesting to anyone who is not
familiar with the concept.
--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 3, 2012, 3:21 PM

Post #41 of 47 (2582 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/03/2012 07:55 PM, Russ P. wrote:
> On May 3, 10:30 am, someone<newsbo...@gmail.com> wrote:
>> On 05/02/2012 11:45 PM, Russ P. wrote:

>>> For any practical engineering or scientific work, I'd say that a
>>> condition number of 1e6 is very likely to be completely unacceptable.
>>
>> So how do you explain that the natural frequencies from FEM (with
>> condition number ~1e6) generally correlates really good with real
>> measurements (within approx. 5%), at least for the first 3-4 natural
>> frequencies?
>>
>> I would say that the problem lies with the highest natural frequencies,
>> they for sure cannot be verified - there's too little energy in them.
>> But the lowest frequencies (the most important ones) are good, I think -
>> even for high cond number.
>
> Did you mention earlier what "FEM" stands for? If so, I missed it. Is
> it finite-element modeling? Whatever the case, note that I said, "If

Sorry, yes: Finite Element Model.

> you are just doing pure mathematical or numerical work with no real-
> world measurement error, then a condition number of
> 1e6 may be fine." I forgot much more than I know about finite-element
> modeling, but isn't it a purely numerical method of analysis? If that

I'm not sure exactly, what is the definition of a purely numerical
method of analysis? I would guess that the answer is yes, it's a purely
numerical method? But I also think it's a practical engineering or
scientific work...

> is the case, then my comment above is relevant.

Uh, I just don't understand the difference:

1) "For any practical engineering or scientific work, I'd say that a
condition number of 1e6 is very likely to be completely unacceptable."

vs.

2) "If you are just doing pure mathematical or numerical work with no
real-world measurement error, then a condition number of, 1e6 may be fine."

I would think that FEM is a practical engineering work and also pure
numerical work... Or something...

> By the way, I didn't mean to patronize you with my earlier explanation
> of orthogonal transformations. They are fundamental to understanding
> the SVD, and I thought it might be interesting to anyone who is not
> familiar with the concept.

Don't worry, I think it was really good and I don't think anyone
patronized me, on the contrary, people was/is very helpful. SVD isn't my
strongest side and maybe I should've thought a bit more about this
singular matrix and perhaps realized what some people here already
explained, a bit earlier (maybe before I asked). Anyway, it's been good
to hear/read what you've (and others) have written.

Yesterday and earlier today I was at work during the day so
answering/replying took a bit longer than I like, considering the huge
flow of posts in the matlab group. But now I'm home most of the time,
for the next 3 days and will check for followup posts quite frequent, I
think...

--
http://mail.python.org/mailman/listinfo/python-list


russ.paielli at gmail

May 3, 2012, 3:58 PM

Post #42 of 47 (2588 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

Yeah, I realized that I should rephrase my previous statement to
something like this:

For any *empirical* engineering or scientific work, I'd say that a
condition number of 1e6 is likely to be unacceptable.

I'd put finite elements into the category of theoretical and numerical
rather than empirical. Still, a condition number of 1e6 would bother
me, but maybe that's just me.

--Russ P.


On May 3, 3:21 pm, someone <newsbo...@gmail.com> wrote:
> On 05/03/2012 07:55 PM, Russ P. wrote:
>
>
>
> > On May 3, 10:30 am, someone<newsbo...@gmail.com>  wrote:
> >> On 05/02/2012 11:45 PM, Russ P. wrote:
> >>> For any practical engineering or scientific work, I'd say that a
> >>> condition number of 1e6 is very likely to be completely unacceptable.
>
> >> So how do you explain that the natural frequencies from FEM (with
> >> condition number ~1e6) generally correlates really good with real
> >> measurements (within approx. 5%), at least for the first 3-4 natural
> >> frequencies?
>
> >> I would say that the problem lies with the highest natural frequencies,
> >> they for sure cannot be verified - there's too little energy in them.
> >> But the lowest frequencies (the most important ones) are good, I think -
> >> even for high cond number.
>
> > Did you mention earlier what "FEM" stands for? If so, I missed it. Is
> > it finite-element modeling? Whatever the case, note that I said, "If
>
> Sorry, yes: Finite Element Model.
>
> > you are just doing pure mathematical or numerical work with no real-
> > world measurement error, then a condition number of
> > 1e6 may be fine." I forgot much more than I know about finite-element
> > modeling, but isn't it a purely numerical method of analysis? If that
>
> I'm not sure exactly, what is the definition of a purely numerical
> method of analysis? I would guess that the answer is yes, it's a purely
> numerical method? But I also think it's a practical engineering or
> scientific work...
>
> > is the case, then my comment above is relevant.
>
> Uh, I just don't understand the difference:
>
> 1) "For any practical engineering or scientific work, I'd say that a
> condition number of 1e6 is very likely to be completely unacceptable."
>
> vs.
>
> 2) "If you are just doing pure mathematical or numerical work with no
> real-world measurement error, then a condition number of, 1e6 may be fine."
>
> I would think that FEM is a practical engineering work and also pure
> numerical work... Or something...
>
> > By the way, I didn't mean to patronize you with my earlier explanation
> > of orthogonal transformations. They are fundamental to understanding
> > the SVD, and I thought it might be interesting to anyone who is not
> > familiar with the concept.
>
> Don't worry, I think it was really good and I don't think anyone
> patronized me, on the contrary, people was/is very helpful. SVD isn't my
> strongest side and maybe I should've thought a bit more about this
> singular matrix and perhaps realized what some people here already
> explained, a bit earlier (maybe before I asked). Anyway, it's been good
> to hear/read what you've (and others) have written.
>
> Yesterday and earlier today I was at work during the day so
> answering/replying took a bit longer than I like, considering the huge
> flow of posts in the matlab group. But now I'm home most of the time,
> for the next 3 days and will check for followup posts quite frequent, I
> think...

--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 3, 2012, 4:59 PM

Post #43 of 47 (2588 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/04/2012 12:58 AM, Russ P. wrote:
> Yeah, I realized that I should rephrase my previous statement to
> something like this:
>
> For any *empirical* engineering or scientific work, I'd say that a
> condition number of 1e6 is likely to be unacceptable.

Still, I don't understand it. Do you have an example of this kind of
work, if it's not FEM?

> I'd put finite elements into the category of theoretical and numerical
> rather than empirical. Still, a condition number of 1e6 would bother
> me, but maybe that's just me.

Ok, but I just don't understand what's in the "empirical" category, sorry...

Maybe the conclusion is just that if cond(A) > 1e15 or 1e16, then that
problem shouldn't be solved, and maybe this is also approximately where matlab
has its warning threshold (maybe, I'm just guessing here)... So, maybe
I could perhaps use that limit in my future python program (when I find
out how to get the condition number etc, but I assume this can be
googled for with no problems)...
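
(Getting the condition number in numpy turns out to be a one-liner; here is a
minimal sketch of the kind of check I mean. The 1/eps threshold is just my
rough 1e15-1e16 guess above expressed in code, not MATLAB's documented rule:)

    import numpy as np

    def solve_checked(A, b):
        cond = np.linalg.cond(A)
        if cond > 1.0 / np.finfo(A.dtype).eps:   # ~4.5e15 for float64
            print("Warning: matrix is close to singular, cond = %g" % cond)
        return np.linalg.solve(A, b)

    x = solve_checked(np.array([[1.0, 2.0], [3.0, 4.0]]),
                      np.array([1.0, 2.0]))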

--
http://mail.python.org/mailman/listinfo/python-list


steve+comp.lang.python at pearwood

May 3, 2012, 8:52 PM

Post #44 of 47 (2585 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On Thu, 03 May 2012 19:30:35 +0200, someone wrote:

> On 05/02/2012 11:45 PM, Russ P. wrote:
>> On May 2, 1:29 pm, someone<newsbo...@gmail.com> wrote:
>>
>>>> If your data starts off with only 1 or 2 digits of accuracy, as in
>>>> your example, then the result is meaningless -- the accuracy will be
>>>> 2-2 digits, or 0 -- *no* digits in the answer can be trusted to be
>>>> accurate.
>>>
>>> I just solved a FEM eigenvalue problem where the condition number of
>>> the mass and stiffness matrices was something like 1e6... Result
>>> looked good to me... So I don't understand what you're saying about 10
>>> = 1 or 2 digits. I think my problem was accurate enough, though I
>>> don't know what error with 1e6 in condition number, I should expect.
>>> How did you arrive at 1 or 2 digits for cond(A)=10, if I may ask ?
>>
>> As Steven pointed out earlier, it all depends on the precision you are
>> dealing with. If you are just doing pure mathematical or numerical work
>> with no real-world measurement error, then a condition number of 1e6
>> may be fine. But you had better be using "double precision" (64- bit)
>> floating point numbers (which are the default in Python, of course).
>> Those have approximately 15-16 significant decimal digits of precision, so you are in good
>> shape. Single-precision floats only have 6 or 7 digits of precision, so
>> you'd be in trouble there.
>>
>> For any practical engineering or scientific work, I'd say that a
>> condition number of 1e6 is very likely to be completely unacceptable.
>
> So how do you explain that the natural frequencies from FEM (with
> condition number ~1e6) generally correlates really good with real
> measurements (within approx. 5%), at least for the first 3-4 natural
> frequencies?

I would counter your hand-waving ("correlates really good", "within
approx 5%" of *what*?) with hand-waving of my own:

"Sure, that's exactly what I would expect!"

*wink*

By the way, if I didn't say so earlier, I'll say so now: the
interpretation of "how bad the condition number is" will depend on the
underlying physics and/or mathematics of the situation. The
interpretation of loss of digits of precision is a general rule of thumb
that holds in many diverse situations, not a rule of physics that cannot
be broken in this universe.

If you have found a scenario where another interpretation of condition
number applies, good for you. That doesn't change the fact that, under
normal circumstances when trying to solve systems of linear equations, a
condition number of 1e6 is likely to blow away *all* the accuracy in your
measured data. (Very few physical measurements are accurate to more than
six digits.)
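
A small numpy sketch of what "blowing away the accuracy" looks like in
practice (the 2x2 matrix is a made-up example with a condition number around
4e6):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 1.000001]])    # nearly singular
    b = np.array([2.0, 2.000001])

    x = np.linalg.solve(A, b)                                  # ~[1, 1]
    x_noisy = np.linalg.solve(A, b + np.array([0.0, 1e-6]))    # data error ~1e-6

    print(np.linalg.cond(A))    # ~4e6
    print(x)                    # [1, 1]
    print(x_noisy)              # roughly [0, 2]: every digit changed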



--
Steven
--
http://mail.python.org/mailman/listinfo/python-list


russ.paielli at gmail

May 3, 2012, 9:15 PM

Post #45 of 47 (2578 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On May 3, 4:59 pm, someone <newsbo...@gmail.com> wrote:
> On 05/04/2012 12:58 AM, Russ P. wrote:
>
> > Yeah, I realized that I should rephrase my previous statement to
> > something like this:
>
> > For any *empirical* engineering or scientific work, I'd say that a
> > condition number of 1e6 is likely to be unacceptable.
>
> Still, I don't understand it. Do you have an example of this kind of
> work, if it's not FEM?
>
> > I'd put finite elements into the category of theoretical and numerical
> > rather than empirical. Still, a condition number of 1e6 would bother
> > me, but maybe that's just me.
>
> Ok, but I just don't understand what's in the "empirical" category, sorry...

I didn't look it up, but as far as I know, empirical just means based
on experiment, which means based on measured data. Unless I am
mistaken, a finite element analysis is not based on measured data.
Yes, the results can be *compared* with measured data and perhaps
calibrated with measured data, but those are not the same thing.

I agree with Steven D's comment above, and I will reiterate that a
condition number of 1e6 would not inspire confidence in me. If I had a
condition number like that, I would look for a better model. But
that's just a gut reaction, not a hard scientific rule.
--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 4, 2012, 6:30 AM

Post #46 of 47 (2557 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/04/2012 05:52 AM, Steven D'Aprano wrote:
> On Thu, 03 May 2012 19:30:35 +0200, someone wrote:

>> So how do you explain that the natural frequencies from FEM (with
>> condition number ~1e6) generally correlates really good with real
>> measurements (within approx. 5%), at least for the first 3-4 natural
>> frequencies?
>
> I would counter your hand-waving ("correlates really good", "within
> approx 5%" of *what*?) with hand-waving of my own:

Within 5% of experiments of course.
There is not much else to compare with.

> "Sure, that's exactly what I would expect!"
>
> *wink*
>
> By the way, if I didn't say so earlier, I'll say so now: the
> interpretation of "how bad the condition number is" will depend on the
> underlying physics and/or mathematics of the situation. The
> interpretation of loss of digits of precision is a general rule of thumb
> that holds in many diverse situations, not a rule of physics that cannot
> be broken in this universe.
>
> If you have found a scenario where another interpretation of condition
> number applies, good for you. That doesn't change the fact that, under
> normal circumstances when trying to solve systems of linear equations, a
> condition number of 1e6 is likely to blow away *all* the accuracy in your
> measured data. (Very few physical measurements are accurate to more than
> six digits.)

Not true, IMHO.

Eigenfrequencies (I think that is a very typical physical measurement
and I cannot think of something more typical) don't need to be
accurate to 6 digits. I'm happy with below 5% error. So if an
eigenfrequency is measured at 100 Hz, I'm happy if the numerical model
gives a result in the 5% range of 95-105 Hz. This I got with a condition
number of approx. 1e6 and it's good enough for me. I don't think anyone
expects 6-digit accuracy with eigenfrequencies.



--
http://mail.python.org/mailman/listinfo/python-list


newsboost at gmail

May 4, 2012, 6:36 AM

Post #47 of 47 (2562 views)
Permalink
Re: numpy (matrix solver) - python vs. matlab [In reply to]

On 05/04/2012 06:15 AM, Russ P. wrote:
> On May 3, 4:59 pm, someone<newsbo...@gmail.com> wrote:
>> On 05/04/2012 12:58 AM, Russ P. wrote:
>> Ok, but I just don't understand what's in the "empirical" category, sorry...
>
> I didn't look it up, but as far as I know, empirical just means based
> on experiment, which means based on measured data. Unless I am

FEM based on measurement data? Still, I don't understand it, sorry.

> mistaken, a finite element analysis is not based on measured data.

I'm probably a bit narrow-thinking because I just worked with this small
FEM-program (in Matlab), but can you please give an example of a
matrix-problem that is based on measurement data?

> Yes, the results can be *compared* with measured data and perhaps
> calibrated with measured data, but those are not the same thing.

Exactly. That's why I don't understand what a typical example of solving a
matrix system using measurement/empirical data would be...?

> I agree with Steven D's comment above, and I will reiterate that a
> condition number of 1e6 would not inspire confidence in me. If I had a
> condition number like that, I would look for a better model. But
> that's just a gut reaction, not a hard scientific rule.

I don't have any better model and don't know anything better. I still
think that 5% accuracy is good enough and that nobody needs 6 digits of
precision for practical/engineering/empirical work... Maybe quantum
physicists need more than 6 digits of accuracy, but most
practical/engineering problems are OK with an accuracy of 5%, I think,
IMHO... Please tell me if I'm wrong.


--
http://mail.python.org/mailman/listinfo/python-list
