
Blog, by category: mathematics

from the desk of travis johnson.

cplex matlab interface (from 2012/02/01)

Just for my own reference, I'm documenting the interface to CPLEX.

CPLEX expects a problem in the form

\begin{split} \min \qquad & g^T d + \tfrac12 d^T W d \\ \text{subject to} \qquad & c_L \leq A d \leq c_U \\ & d_L \leq d \leq d_U \end{split}

and is called by

cplex = Cplex('test');
cplex.Param.feasopt.tolerance.Cur = 1e-8;
if params.printLevel < 8
    cplex.DisplayFunc = [];          % suppress solver output
end
cplex.Model.sense = 'minimize';
cplex.Param.qpmethod.Cur = 1;        % select the QP algorithm
cplex.addCols(g, [], d_L, d_U);      % variables: objective g, bounds d_L <= d <= d_U
cplex.addRows(c_L, A, c_U);          % constraints: c_L <= A*d <= c_U
cplex.Model.Q = W;                   % quadratic term of the objective
% Equivalently, the model fields can be set directly:
% cplex.Model.obj = g;   cplex.Model.lb  = d_L;  cplex.Model.ub  = d_U;
% cplex.Model.lhs = c_L; cplex.Model.rhs = c_U;
cplex.solve();

a trig problem solved in MATLAB (from 2012/02/01)

[Diagram: the 61 cm × 61 cm box with the two beams]

I came across this post. The basic idea is that the poster wants to maximize L_1 + L_2, where L_i is the length of beam i, inside a 61 cm × 61 cm box. One beam must start 10 cm up from the bottom-right corner, and the beams must meet at a point along the top of the box. I added the further assumption that the other beam must end in the bottom-left corner.

\begin{split} T_1 &= L_1\sin\theta_1 \\ 51 &= L_1\cos\theta_1 \\ T_2 &= L_2\sin\theta_2 \\ 61 &= L_2\cos\theta_2 \end{split}

which follow from basic trigonometry (the 51 is the vertical distance, 61 − 10, from the first beam's anchor point to the top of the box). There's one more equation, which constrains the side length to 61 cm:

 T_1 + T_2 = 61
Next, I squared each pair of equations to get
\begin{split} T_1^2 &= L_1^2\sin^2\theta_1 \\ 51^2 &= L_1^2\cos^2\theta_1 \end{split} \implies L_1^2 = T_1^2 + 51^2

and similarly L_2^2=61^2+T_2^2.

I'm planning on using MATLAB's FMINCON, which means I need to formulate this as a minimization problem. This is accomplished by observing

\max f(x) \iff \min -f(x).

Therefore, the final nonlinear program that I want to solve is

\begin{split} \min \qquad & -L_1 - L_2 \\ \text{subject to} \qquad & L_1^2 - T_1^2 - 51^2 = 0 \\ & L_2^2 - T_2^2 - 61^2 = 0 \\ & T_1 + T_2 - 61 = 0 \end{split}

which can be solved with the matlab program

function xsol = solveproblem()
% Maximize L1 + L2 by minimizing -(L1 + L2), with x = [L1; L2; T1; T2].
f = @(x) -x(1) - x(2);
x0 = [0; 0; 0; 0];                 % starting point
LB = [0; 0; 0; 0];                 % lengths and distances are nonnegative
settings = optimset('TolFun', 1e-8, 'Algorithm', 'interior-point');
xsol = fmincon(f, x0, [], [], [], [], LB, [], @nonlincon, settings);

    function [c, ceq] = nonlincon(x)
        l1 = x(1); l2 = x(2); t1 = x(3); t2 = x(4);
        c = [];                    % no inequality constraints
        ceq = [l1^2 - t1^2 - 51^2; % L1^2 = T1^2 + 51^2
               l2^2 - t2^2 - 61^2; % L2^2 = T2^2 + 61^2
               t1 + t2 - 61];      % beams meet on the top edge
    end
end

When run, this produces a pair of beams with a total length of 140.5 cm. Hooray!
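As a sanity check (my addition, not part of the original post), the same answer falls out of eliminating L_1 and L_2 and scanning the one remaining variable T_1 over its range; the grid-scan approach below is illustrative, not how the post solved it:

```python
import math

# With L1 = sqrt(T1^2 + 51^2), L2 = sqrt(T2^2 + 61^2) and T2 = 61 - T1,
# the objective reduces to a function of T1 alone on [0, 61].
def total_length(t1):
    t2 = 61 - t1
    return math.sqrt(t1 ** 2 + 51 ** 2) + math.sqrt(t2 ** 2 + 61 ** 2)

# Brute-force grid scan over [0, 61].
best_t1 = max((i * 61 / 10000 for i in range(10001)), key=total_length)
print(round(total_length(best_t1), 1))  # -> 140.5, matching fmincon
```

The maximum sits on the boundary at T_1 = 61 (the summed length is convex in T_1, so the extremes are at the endpoints), giving sqrt(61² + 51²) + 61 ≈ 140.5.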

What could you possibly do with mathematics? (from 2009/08/24)

Recently, at a family gathering, after telling someone I had majored in mathematics, I was confronted with the question many a college graduate faces: “But how are you going to make any money at that?”

Now, certainly it's true: graduate students don't make that much. The average stipend for a grad student is roughly on par with (but still less than) unemployment checks. But that's okay: in general, mathematicians know that they could make more money elsewhere, but they choose the subject anyway out of love for it. Not that mathematicians are paid <em>too</em> badly anyway: the average salary in the US is about $50K, with associate professors making more than that. The particularly salary-inclined can pick up other credentials to become actuaries or work in hedge funds.

So perhaps the more interesting, offensive, and pointed questions are 'What could you possibly do with mathematics?' or 'How could mathematics possibly affect my life?' These, however, are even easier to dispatch. I'll stick to improvements made in the last century or so. And keep in mind: all mathematicians do is make definitions and assumptions and logically follow them to their conclusions.

Many modern business and agricultural decisions involve tradeoffs between different capabilities or capacities and their respective payoffs. When these problems can be formulated in a certain format, they can be definitively answered by the mathematical technique of linear programming. Even when they can't, nonlinear, convex, or integer optimization frequently offers some insight into the solution. But in the linear case, Dantzig's simplex algorithm settles many business-related problems outright.
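To make that concrete, here is a toy linear program (the numbers and the brute-force vertex enumeration are my illustrative choices; real solvers use the simplex method, which exploits the same fact that an optimum sits at a vertex):

```python
from itertools import combinations

# Toy LP: maximize profit 3x + 2y subject to
#   x + y  <= 4   (one shared resource)
#   x + 3y <= 6   (another resource)
#   x, y >= 0
# An LP optimum always occurs at a vertex of the feasible region, so for
# two variables we can enumerate pairwise constraint intersections.
cons = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]  # a*x + b*y <= c

def intersect(c1, c2):
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel constraint boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

points = (intersect(c1, c2) for c1, c2 in combinations(cons, 2))
feasible = [p for p in points
            if p is not None and all(a * p[0] + b * p[1] <= c + 1e-9
                                     for a, b, c in cons)]
best = max(feasible, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # the optimal production plan, (4.0, 0.0) here
```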

Nearly everyone these days enjoys technological breakthroughs that rest on mathematics, usually by way of physics. Quantum mechanics has given us transistors (and in turn the computer revolution of the 1970s), lasers, MRI machines, cell phones, and myriad other things we take for granted in our everyday lives. None of this would be possible without a mathematical basis on which to make quantitative predictions, and many of the mathematical tools needed were invented before the physical problems were even posed.

An even more specific example (and a personal favorite topic of mine!) is the algorithmic development of the Fast Fourier Transform. This algorithm (a set of instructions) gives us an easy way to express a signal in terms of either how it changes in time or what frequencies make it up. These techniques make it possible to pick certain signals out from background noise (filtering, ubiquitous in electronics), but they also arise in X-ray diffraction and medical advances like (again) MRI. The FFT is even the critical device that lets music CDs, television, and radios function without relying on heavy, inefficient, and breakage-prone vacuum tubes. Finally, the more general topic of Fourier analysis provides insights into other fields like statistics, enabling us to talk about populations in terms of their aggregates.
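As a tiny illustration (my example, not from the post): the direct discrete Fourier transform below is the O(n²) textbook version of what the FFT computes in O(n log n), and it pulls the frequency of a clean sinusoid straight out of its time-domain samples:

```python
import cmath
import math

def dft(x):
    """Direct discrete Fourier transform, O(n^2)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

n = 32
# A pure cosine making 3 cycles across the 32-sample window.
signal = [math.cos(2 * math.pi * 3 * t / n) for t in range(n)]
spectrum = [abs(c) for c in dft(signal)]
# Search the first half of the spectrum (for real input, the second half
# mirrors the first).
peak = max(range(n // 2), key=lambda k: spectrum[k])
print(peak)  # -> 3: the transform recovers the signal's frequency bin
```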

Finally, to say a few words about the computer revolution: almost everything here is mathematical. Data packets are sent over the Internet by routing protocols that rely on mathematical algorithms and proofs. UPS uses the <em>same</em> algorithms to route its trucks each day. Search giant Google relies on a mathematical foundation of eigenvalues, a linear algebra topic. Amazon uses clever applications of algorithms to suggest books you might enjoy. Anyone who uses a bank (even brick and mortar!) relies on mathematical advances from the field of number theory to guarantee that attackers can't divert their funds. Fluid dynamics predicts how airplanes will fly, according to physical laws and without building the planes.
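The eigenvalue idea behind Google's ranking can be sketched in a few lines (the three-page link graph is a toy of my own invention; the 0.85 damping factor follows the original PageRank paper):

```python
# Power iteration on a tiny hypothetical 3-page link graph.  The ranks
# converge to the dominant eigenvector of the damped link matrix.
links = {0: [1, 2], 1: [2], 2: [0]}  # page -> pages it links to
n, d = 3, 0.85                       # page count and damping factor
rank = [1.0 / n] * n
for _ in range(100):
    new = [(1 - d) / n] * n
    for page, outs in links.items():
        for target in outs:
            new[target] += d * rank[page] / len(outs)
    rank = new
print([round(r, 3) for r in rank])  # the ranks sum to 1
```

Page 2, which collects links from both other pages, ends up with the highest rank, which is exactly the behavior you'd want from a relevance score.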

Fine, you say. We've got all that figured out, and we've got mathematicians to thank. But all that stuff sounds a lot like physics or applied mathematics, so are those guys in pure mathematics wasting their time? And also, what else could there be to think about?

First, advances in a great many applied topics were preceded by an amazing amount of mathematics done without any intention of being used. The number theory advances mentioned above are the prototypical example: what used to be an esoteric topic is now the basis for encryption and error correction, among other uses. Would we even have secure Internet communication if we hadn't had centuries of number theorists thinking about prime numbers and factorization? That's unknowable, but I'd place my bets toward the negative. There are many, many more examples of this sort of thing.

But where is this all heading? I can't speak too far out of my experience here, but mathematical advances are driving real-world advances in everything else, from communication to medicine. One particularly interesting example of the latter is a project called 'Virtual Physiological Human', which aims to create a computer model of a human so exacting that it can be used for drug testing and even to tailor treatments to one particular patient.

To tie all this up: even if you don't understand mathematics, or why anyone might enjoy it, do yourself (and society) a favor and don't discourage those who do. You certainly stand to benefit from their devotion to the field.

Why Do Math and Izhikevich (from 2009/01/06)

Professor Eric Shea-Brown has written up a nice website explaining what we're doing with the computational neuroscience modeling. It's currently on the Why Do Math website at this link: Brain Dynamics: The Mathematics of the Spike.

For my project, we're starting with a simpler model that is similar in behavior but computationally quicker and (somewhat) easier to analyze mathematically, known as the Izhikevich model (after its creator, Eugene Izhikevich). His website has some amazingly cool videos and a lot of papers on what he's been doing: His Website
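For reference, the standard form of the model from Izhikevich's 2003 paper is dv/dt = 0.04v² + 5v + 140 − u + I and du/dt = a(bv − u), with the reset v ← c, u ← u + d whenever v reaches 30 mV. A minimal forward-Euler sketch (the parameters are the paper's "regular spiking" set; the step size and input current are my illustrative choices, not values from our project):

```python
# Izhikevich neuron, forward-Euler integration.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # "regular spiking" parameters
v, u = -65.0, b * (-65.0)            # initial membrane state
dt, I = 0.5, 10.0                    # time step (ms) and input current
spikes = 0
for _ in range(2000):                # 1000 ms of simulated time
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: reset and count
        v, u = c, u + d
        spikes += 1
print(spikes)  # number of spikes fired in one simulated second
```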

Whack it with an X squared! (from 2008/08/07)

David and I were working on our Math 381 model, and I was getting frustrated because the data we collected and the results from the simulation were not lining up properly. We were hoping to see something like this:

[Figure: Number of Logins from Data]

Instead, we were getting stuff distributed like this:

[Figure: Simulated Number of Logins]

I realized that we needed some function to force a bunch of this junk further left. Recalling an old adage from Mr. Cone's AP Chemistry class, I decided it was the right time to whack it with an X squared. This is vaguely appropriate, because rand() has range [0,1), so squaring it pushes a whole bunch of stuff further left, toward zero (i.e., the first half of the range ends up in the first quarter, the first 3/4 ends up in the first 9/16, etc.). Imagine my shock when I saw this:

[Figure: Simulated Number of Logins]

This is almost exactly the picture we were hoping to see! I was expecting something in this direction, but I was not expecting it to work out so perfectly. The burden is now on justifying that choice…
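The effect is easy to check empirically (my sketch, separate from our model): if U is uniform on [0,1), then P(U² < x) = √x, so squaring piles probability mass up near zero, and in particular half of all squared draws should land below 1/4.

```python
import random

# Empirically confirm that squaring uniform draws shifts mass toward zero.
random.seed(0)
samples = [random.random() ** 2 for _ in range(100_000)]
frac_below_quarter = sum(s < 0.25 for s in samples) / len(samples)
print(frac_below_quarter)  # close to 0.5, since P(U^2 < 1/4) = P(U < 1/2)
```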

Dijkstra's Algorithm Paper (from 2008/07/30)

The week before my sister’s wedding, I was tasked with writing a paper on Dijkstra’s Algorithm for my Discrete Mathematical Modeling class. I think I might have missed the mark a little bit, but I had so much fun writing it that I’m posting it here.

I’m almost considering writing some more stuff in this style… anything anyone would like to see written about?

Here’s a link: Dijkstra’s Algorithm

Differential Equations (from 2008/05/12)

I’ve realized that a lot of people are nervous about differential equations. That's understandable, but there are some pretty straightforward ways to solve a fair number of the ones you come across, and I’d really like to write some of it up.

My basic idea is to show a bit about integrating factors, a bit about separation of variables, the characteristic equation, and the method of undetermined coefficients. That covers a lot of physical territory. Then I'd cover reducing order by transforming an n-th order equation into n first-order equations, basic Laplace transforms as a sort of general method, and finally maybe a tiny bit about numerical methods.
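As a taste of the separation-of-variables technique on that list (a standard textbook example, my choice of illustration):

```latex
\frac{dy}{dx} = ky
\;\Longrightarrow\; \int \frac{dy}{y} = \int k\,dx
\;\Longrightarrow\; \ln|y| = kx + C
\;\Longrightarrow\; y = A e^{kx}, \quad A = \pm e^{C}.
```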

I can almost justify doing this before Monday, since it’d be useful on the dynamical systems final. I might give that a shot this weekend to see what I can crank out.
