
Blog, page 0

from the desk of travis johnson.

Ditching the consumer grade wireless router (from 2014/01/24)

My fiancée and I both spend a fair bit of time working from home, so we notice downtime almost immediately. We had a NETGEAR WNDR3400 which periodically disconnected one or the other of us, or just didn't seem very zippy anymore. I was originally considering a top-of-the-line router, such as an Airport Extreme, a NETGEAR Nighthawk AC1900, or an ASUS RT-AC66U.

I had been looking into alternatives for a while, and decided to pull the trigger on the following setup:

  • Purchase a TP-Link 8-port gigabit switch ($30), a Ubiquiti UniFi AP ($69), and an extra Ethernet interface.

  • Retire the WNDR3400.

  • Enable routing, masquerade, firewall, and DHCP on my Linux machine.

This setup does have one downside, compared to the top-end routers: I give up running 802.11ac, at least for now. But on the plus side, it isolates several distinct functionalities, by having simpler equipment/software at each junction. Instead of a combination router, switch, and wireless AP, each component is physically distinct, debuggable, swappable, replaceable, and upgradable. Furthermore, each is also extendable: I can add capacity by purchasing another switch if we ever have more than 8 devices; I can replace the access point if the UniFi becomes flaky or as new wireless technology comes out (or if I decide to upgrade to the UniFi AC AP). I can also extend the wireless network simply by purchasing additional UniFi devices.

The UniFi AP seems to be the absolute best part of the setup. It bills itself as an Enterprise WiFi system; no disagreement from me. It is a great product: cheaper than wireless routers, much more configurable, extremely well thought out. They provide a power over Ethernet injector, which means there is only one cable plugged in to the AP. The AP itself is meant to be mounted on a wall or ceiling, and includes all the hardware required to do so. I now get a strong wireless signal on the opposite side of the apartment, which used to degrade the signal substantially. I do still get some signal degradation, but I haven't mounted the device on the wall, so I think there is still room for improvement. To give a rough idea of the speed differential:

271 KB/s before vs 7.4 MB/s after

I am also considering a few further, future upgrades:

  • Upgrade physical Linux machine.

  • Enable (for example) ESXi virtualization.

  • Set up one virtualized instance as a dedicated router, with either a minimal Linux distribution or one of the *BSD variants (I'm leaning toward pfSense).

  • Set up dnsmasq or similar functionality for DNS names at home.

  • Enable QoS to ensure VoIP and other real-time transmissions are not interrupted by BitTorrent, HTTP, or SSH transfers.

  • (Possibly) bridge the Comcast router.

Language Fluidity (from 2013/12/29)

I have learned a lot of programming languages over the course of my life. So I wanted to do two things:

  1. Discuss a few effective ways to pick up a new language.

  2. Discuss a few references that I've found helpful for some languages.

I should start with a disclaimer: I do enjoy learning new languages, but mostly I enjoy creating things in those languages. I'm not sure whether all the effort I have spent learning many languages might have been better spent learning far fewer languages at a deeper level, but I do feel it is extremely important to learn principles, not languages, and to use the best tool for a particular job. Many times principles are most clearly illustrated by contrast between languages, and many more times a particular tool has been a patently poor choice for a particular job.

That being said, it's important to know your “first tongue” languages in significant depth.

Projects for New Languages

Starting in a language can be bewildering. Formal courses provide an obvious route if you know only one or two languages. These can be courses that teach a particular language, such as An Introduction to Interactive Programming in Python or Functional Programming in Scala, or they might be courses like Computational Investing, which teach investing topics with a strong computational/Python slant. Finally, Codecademy is another great option because it walks you through the basics and gives feedback on actual code you write.

The trouble with those kinds of courses is that they waste a lot of time discussing basics of computation: data types, function calls, conditionals, and looping. While this is great for beginners–and even beginners to a different paradigm like functional programming–it is tedious to relearn for loops for the 30th time. Once I understand the basic nouns and verbs of a particular language, I usually jump right in to some problems.

  1. The “99 Problems” lists are truly fantastic. The original was P-99: Ninety-Nine Prolog Problems; I've seen specialized lists for Lisp (L-99), Scala, and Haskell. You can also find solutions for tons of languages, such as this list of Python solutions.

  2. Another great resource (and in fact, what I used to learn Python) is Project Euler. I also really enjoyed working through some bioinformatics problems on a site called Rosalind.

  3. Translate a (small-ish) project of yours to a new language.

  4. Implement a (small-ish) algorithm in your chosen language: K-Means, Newton's Method, or some numerical linear algebra routines are some great options. The key is not necessarily to write industrial grade code (because you probably can't compete with existing implementations of those algorithms), but that there are industrial grade codes out there to compare your solutions with.
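As a concrete instance of option 4, here's roughly what a first pass at Newton's method might look like in Python (the tolerance, iteration cap, and starting point are arbitrary choices of mine):

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly follow the tangent line of f to a root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# find sqrt(2) as the positive root of x^2 - 2
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)
```

Comparing the result against a library routine (here, math.sqrt) is exactly the kind of industrial-grade check the list item suggests.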

Resources for particular languages

I can't start a list like this without calling out two references in particular:

  1. Structure and Interpretation of Computer Programs: An undeniably fantastic introduction to computation.

  2. Code: The Hidden Language of Computer Hardware and Software: An undeniably fantastic introduction to computers themselves, from the individual wires on up.

I'm going to limit this list to just four languages: C++, Haskell, MATLAB, and Python:

  1. C++: I highly recommend Accelerated C++: Practical Programming by Example, which was my refresher to C++ in about 2010. After reading through that, it's probably worth reading Effective C++, Effective STL, and More Effective C++, all by Scott Meyers. While Accelerated C++ will get you up to speed on the basic syntax, the others will teach you good C++ style and structure. This is critical, because C++ has a lot of pitfalls and it's worth the effort to avoid them.

  2. Haskell: I don't know Haskell nearly as well as the other languages in this section, but I did enjoy working through Real World Haskell and Haskell the Hard Way. I also really enjoyed Write Yourself a Scheme in 48 Hours and JEKOR's Redoing Make (Haskell From Scratch) videos–both were extremely convincing arguments to use Haskell.

  3. MATLAB: Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data by J. Nathan Kutz is hands down the best and most comprehensive MATLAB reference I know of. The hard part about learning MATLAB is that often, you are learning a great deal of mathematics as well. This book steps through a lot of it, taking third-quarter calculus as the baseline.

  4. Python: I learned Python long enough ago that I have forgotten exactly what I read while I was learning it, but two resources I do remember liking a lot were Dive Into Python by Mark Pilgrim and How to Think Like A Computer Scientist by Allen B. Downey, which also has a free PDF version.

square (from 2013/11/16)

I probably should have been working on my thesis, but I had a phone interview with Square. I wanted to write a bit of code to get myself “warmed up” but wanted it to be kinda fun. So I decided to write some code that drew a square.

#include <iostream>
using namespace std;

int main() {

    cout << "+";
    for (int i=0; i<8; i++) cout << "-";
    cout << "+" << endl;

    for (int j=0; j<5; j++) {
        cout << "|";
        for (int i=0; i<8; i++)
            cout << " ";
        cout << "|";
        cout << endl;
    }

    cout << "+";
    for (int i=0; i<8; i++) cout << "-";
    cout << "+" << endl;

    return 0;
}

which yields a nice little square:

+--------+
|        |
|        |
|        |
|        |
|        |
+--------+

(Well, okay, it's not a square, but pipes are taller than they are wide, so I guess we made a square-ish rectangle. Which would not have been as good a name for a company. But I digress.)

Fine and good, but there's so much repetition! We can do better–programming languages have functions!

#include <iostream>
using namespace std;

void top() {
    cout << "+";
    for (int i=0; i<8; i++) cout << "-";
    cout << "+" << endl;
}

void middles() {
    cout << "|";
    for (int i=0; i<8; i++)
        cout << " ";
    cout << "|";
    cout << endl;
}

int main() {
    top();
    for (int j=0; j<5; j++) {
        middles();
    }
    top();

    return 0;
}
Now we've abstracted out some of the logic into functions. This looks a bit better. But even the top and middles functions seem pretty similar; why not just make the first/last and middle characters arguments to a single function?

#include <iostream>
using namespace std;

void row(string first_last, string middle) {
    cout << first_last;
    for (int i=0; i<8; i++) cout << middle;
    cout << first_last << endl;
}

int main() {
    row("+", "-");
    for (int j=0; j<5; j++) {
        row("|", " ");
    }
    row("+", "-");

    return 0;
}
Much better! But wait. We have this weird mix of functions and constants, with functions calling constants. Why do these even need to be functions at all? Why not just encode it all in the data?

One way to pull that off would be to have a tiny little running environment, which could count, print the current element, or do a newline. That might look something like this:

#include <iostream>
using namespace std;

void interp(string program) {
    unsigned int program_counter = 0;
    while (program_counter < program.size()) {
        if ('0' <= program[program_counter] && program[program_counter] <= '9') {
            if (program[program_counter] == '0') {
                // nop.
            } else {
                program[program_counter] = program[program_counter]-1;
                // if repeats remain, back up so the previous character
                // prints again (the counter is re-incremented below).
                if (program[program_counter] != '0')
                    program_counter -= 2;
            }
        } else if (program[program_counter]=='n') {
            cout << endl;
        } else {
            cout << program[program_counter];
        }
        program_counter += 1;
    }
}

int main() {
    string program = "+-8+n| 8|n| 8|n| 8|n| 8|n| 8|n+-8+n";
    interp(program);

    return 0;
}
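If it helps to see the repeat encoding on its own, here's a rough Python sketch of the same idea (my own reconstruction; I'm assuming a digit N means the preceding character appears N times in total):

```python
def interp(program):
    """Tiny 'square language': a digit N repeats the previous character
    so it appears N times total; 'n' emits a newline; anything else
    prints itself."""
    out, last = [], ""
    for ch in program:
        if ch.isdigit():
            out.append(last * (int(ch) - 1))  # one copy was already emitted
        elif ch == "n":
            out.append("\n")
            last = "\n"
        else:
            out.append(ch)
            last = ch
    return "".join(out)

print(interp("+-8+n| 8|n| 8|n| 8|n| 8|n| 8|n+-8+n"))
```

Note that a literal '0' or '1' naturally becomes a no-op here, since it asks for zero extra copies.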

Fine. But now our data seems a bit redundant again, and that's just inexcusable. One way we could deal with this is to permit recursion. I'll handle this in a dumb way: just extracting the substring between parentheses and passing that to another interp instance.

#include <iostream>
using namespace std;

void interp(string program) {
    // current index into string. (think of as a program counter!)
    unsigned int program_counter=0;
    string last_recurse="";

    while (program_counter<program.length()) {
        if ('0' <= program[program_counter] && program[program_counter] <= '9') {
            // 0-9 control looping:
            if ('0' != program[program_counter]) {
                --program[program_counter];
                if ('0' != program[program_counter]) program_counter -= 2;
            }
        } else if ('(' == program[program_counter]) {
            // scan ahead to the matching ')', remembering the body.
            unsigned int openparen = 1, count = 0;
            while (0 != openparen) {
                ++program_counter;
                if ('(' == program[program_counter]) ++openparen;
                if (')' == program[program_counter]) --openparen;
                ++count;
            }
            last_recurse = program.substr(program_counter-count+1, count-1);
            --program_counter;  // let the ')' branch below run the body once.
        } else if (')'==program[program_counter]) {
            interp(last_recurse);
        } else if ('n'==program[program_counter]) {
            cout << endl;
        } else {
            cout << program[program_counter];
        }
        program_counter += 1;
    }
}

int main() {
    string sorig = "+-7+n(| 7|n)5+-7+nnn";
    interp(sorig);

    return 0;
}

Okay, but now we realize that we did the standard engineering thing and made a really ugly square, which looks nothing like the Square logo. Keeping the same interp function, we can simply change the data strings to generate something a bit different:

int main() {
    // square in positive space
    string spos="#8n# 6#n(# 2#2 2#n)2# 6#n#8n";
    interp(spos);
    // square in negative space
    string sneg=" 8n #6 n( #2 2#2 n)2 #6 n 8n";
    interp(sneg);

    return 0;
}


########
#      #
#  ##  #
#  ##  #
#      #
########

 ######
 ##  ##
 ##  ##
 ######

and that's awesome.

There's one more clever/silly thing we can try. Why have comments when you can write self-documenting code?

int main() {
    string spos="square, >0 space:n#8n# 6#n(# 2#2 2#n)2# 6#n#8nnn";
    interp(spos);
    string sneg="square, <0 space:n 8n #6 n( #2 2#2 n)2 #6 n 8n";
    interp(sneg);

    return 0;
}

Code on Github: square-lang.

some setup notes (from 2013/10/27)


zsh

Zsh is an amazing (mostly) drop-in replacement for bash.

On OSX, just run

$ brew install zsh

You also want to install oh-my-zsh.

I went a little crazy with plugins. My list is:

plugins=(autojump brew git git-flow gnu-utils gpg2 osx textmate zsh-syntax-highlighting history-substring-search)


ssh

I ssh a lot. So I have a bunch of host entries in my .ssh/config file. Each looks like:

Host [shortname]
     Hostname [hostname]
     User     [username for hostname]
     IdentityFile [my home directory]/.ssh/[id_rsa file for this host]

You also need identity files for the hosts you use. I tend to use a different one for each host I connect to; some might consider this overly paranoid, and it would probably be better to use one key but rotate it more frequently. In any case, you generate and load a key with

$ ssh-keygen -f .ssh/id_rsa
$ ssh-add .ssh/id_rsa

The second command might need to be issued after every reboot. You should create a new id_rsa file per machine you might SSH from; the corresponding public keys are then added to the .ssh/authorized_keys file of each machine you want to allow SSH into.

Now (using the oh-my-zsh tricks, above), you should be able to tab-complete SSH or SCP commands at will. Pretty sweet, yeah?

overly-ambitious-isqo and the design of numerical codes (from 2013/10/23)

I have finally released the overly-ambitious-isqo project on github!

I wanted to call out two particular design concerns I had.

rich language

My first goal was to try very hard to build up the C++ language to very succinctly express the main algorithm in src/isqo_functor.cpp in extremely rich language. Numerical code is typically implemented with loops like for (int i=0; i<N; i++) and method calls like deltal(xhat, mu). I have found it much easier to reason and think deeply about code with loops like for (int primal_index=0; primal_index < num_primal; primal_index++) and method calls like linear_model_reduction(penalty_iterate, penalty_parameter).

It's a simple thing, but if you want to calculate the linear model reduction of the penalty iterate, spotting the bug between these lines:

  linear_model_reduction(penalty_iterate, penalty_parameter)
  linear_model_reduction(feasibility_iterate, penalty_parameter)

is much easier than spotting the same bug in the equivalent code

  deltal(hatx, mu)
  deltal(barx, mu)

since the latter requires remembering that hat represents “penalty” and bar represents “feasibility.” Rich naming like this is not at all common in numerical codes, and I think that is a huge bummer: the resulting code is incredibly difficult to understand, and it doesn't need to be.

The other advantage of building the interfaces this way is that it is trivial to later add performance “tricks” like caching the last linear model reduction for a particular iterate, since the entire class hierarchy is set up so that such a mechanism could be used in common for all of the computable functions we have added. Pretty neat, right?
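To make the caching idea concrete, here's a hedged Python sketch (the class name and the placeholder arithmetic are mine, not from the isqo code): each computable function remembers its most recent evaluation, so repeated queries at the same iterate cost nothing.

```python
class CachedFunction:
    """Memoize the most recent evaluation of an expensive function."""
    def __init__(self, compute):
        self._compute = compute
        self._last_key = None
        self._last_value = None

    def __call__(self, iterate, penalty_parameter):
        key = (tuple(iterate), penalty_parameter)
        if key != self._last_key:
            # the expensive evaluation happens only on a cache miss
            self._last_value = self._compute(iterate, penalty_parameter)
            self._last_key = key
        return self._last_value

# placeholder stand-in for the real linear model reduction
linear_model_reduction = CachedFunction(
    lambda iterate, mu: sum(iterate) * mu)
print(linear_model_reduction([1.0, 2.0], 0.5))  # computed: 1.5
print(linear_model_reduction([1.0, 2.0], 0.5))  # cache hit: 1.5
```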

strong types

The other big goal was to try to have extremely strong types in the code. Typical numerical codes have functions which accept 20 pointers to double arrays, each of which is critical for the computation.

This seems, to say the least, completely insane to me! There are so many errors that the compiler could catch, simply by grouping the arrays into a structure and passing a pointer or reference to that instead! And it enforces a very clear standard for otherwise arbitrary things. The most obvious convention is simply the order in which the arguments are passed–why should a programmer have to remember or look up 20 items (all of identical types, mind you!) when really they're just trying to pass in one particular structure? It's not just that, though. Another example: some drafts of our document included a matrix equality constraint like

   A*x = b

but some external libraries we linked to specified the same constraint as

   A*x + c = 0.

The original code for the algorithm used the former convention while the external libraries used the latter, which was an endless source of bugs and countless “multiply every element by -1 here” snippets. There's no reason for it! It's much simpler to define a type

EqualityConstraint - a constraint of the type A*x=b
   Matrix A - stores values of matrix
   Value b - stores values of the "right-hand side" of the equality constraint

Don't you agree?
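As a sketch of what that type buys you (in Python rather than C++, and with a residual method I've added purely for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EqualityConstraint:
    """A constraint of the form A*x = b; the sign convention lives here."""
    A: tuple  # matrix, stored as a tuple of rows
    b: tuple  # right-hand side

    def residual(self, x):
        # A*x - b: all zeros exactly when the constraint is satisfied.
        return tuple(
            sum(a_ij * x_j for a_ij, x_j in zip(row, x)) - b_i
            for row, b_i in zip(self.A, self.b))

c = EqualityConstraint(A=((2.0, 0.0), (0.0, 3.0)), b=(4.0, 9.0))
print(c.residual((2.0, 3.0)))  # (0.0, 0.0)
```

A library that insists on the A*x + c = 0 convention can then be handled by one adapter at the boundary instead of scattered sign flips.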

a really simple iterative refinement example. (from 2013/10/17)

I wanted to have a quick example of what iterative refinement is and looks like.

Suppose you wanted to solve the (very simple!) problem of solving 5.1x=16, without dividing by decimal numbers. (Yes, this is a bit contrived–I'm not sure why you would be able to divide by an integer to get a decimal, but not divide by a decimal to get a decimal. Bear with me.)

The standard way would be, of course, to simply evaluate x = (5.1)^{-1} * 16 = 3.137254…. But this is not available–you can't invert multiplication by 5.1!

So instead, you start with some point x_0 = 0. You calculate the residual, r_0 = 16 - 5.1*x_0 = 16. (Multiplying by decimals is allowed. Just not inverses.) Now you solve the simpler problem 5*d_0 = r_0 for d_0, or 5*d_0 = 16. Because now the coefficient is integral, you get d_0 = (5)^{-1}*16 = 3.2. Then we generate a new update for x via x_1 = x_0 + d_0 = 3.2.

This is already somewhat close!

Can we make it better? Well, sure. Just repeat the process:

  • r_1 = 16-5.1*3.2=-.32

  • d_1 = (5)^{-1}r_1 = (5)^{-1}(-.32) = -.064

  • x_2 = x_1 + d_1 = 3.2 - 0.064 = 3.136

Repeating the process a few more times:

iteration  iterate            residual           correction
        0  0.000000000e+00     1.600000000e+01    3.200000000e+00
        1  3.200000000e+00    -3.200000000e-01   -6.400000000e-02
        2  3.136000000e+00     6.400000000e-03    1.280000000e-03
        3  3.137280000e+00    -1.280000000e-04   -2.560000000e-05
        4  3.137254400e+00     2.559999997e-06    5.119999994e-07
        5  3.137254912e+00    -5.119999713e-08   -1.023999943e-08

The general algorithm looks like this. Suppose:

  • you want to solve alpha x = beta,

  • you can't calculate alpha^{-1},

  • you can calculate hat{alpha}^{-1},

  • hat{alpha} is approximately alpha.

Start with x_0 given and let i=0.

  1. Calculate the residual r_i = beta - alpha*x_i.

  2. Calculate d_i=hat{alpha}^{-1} r_i.

  3. Finally, update the iterate: x_{i+1} = x_i + d_i.
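The three steps above fit in a few lines of Python (the numbers below use the 5.1x = 16 example, with hat{alpha} = 5 as the easy-to-invert approximation):

```python
def iterative_refinement(alpha, alpha_hat, beta, x0=0.0, iterations=6):
    """Solve alpha*x = beta using only the easy inverse 1/alpha_hat."""
    x = x0
    for _ in range(iterations):
        r = beta - alpha * x   # 1. residual
        d = r / alpha_hat      # 2. correction via the approximate inverse
        x = x + d              # 3. update the iterate
    return x

print(iterative_refinement(5.1, 5.0, 16.0))
```

Each pass shrinks the error by a factor of |1 - alpha/alpha_hat| = 0.02, matching the table above.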

Now then, here's the punchline. What if alpha is a matrix? Then it turns out that alpha^{-1} isn't impossible to compute, just expensive. And also that hat{alpha}^{-1} is simply much faster to compute. Everything else we've said still applies–we can still use iterative refinement until the problem is solved.

There are some bounds for when we would expect this to work. But I'll address that another day.

C/C++ Values, Pointers, and References (from 2013/09/20)

I had an embarrassing realization: While C does have an address-of operator, it does not include reference variables or pass-by-reference syntax. It's embarrassing because I should have realized that it was a C++ invention; I guess some of my “C” code really has been C++ code.

For future reference, here's the current summary of my understanding:

                var               *var      &var
int xv=3;       value             invalid   memory location of value
int *xp=&xv;    memory location   value     memory location of pointer
int &xr=xv;     value             invalid   memory location of value (all invalid in C)

As one other note, there are three ways you can pass things in C:

void square_value(int x) { x = x*x; }
void square_pointer(int *x) { if (x != NULL) *x = *x * *x; }
void square_reference(int &x) { x = x * x; }

Of course, the first doesn't update x. The next two do, but the pointer version is wordier because we have to check for null, whereas a reference cannot be null in the first place. The point of the pointer and reference versions is that we don't need to create a copy of the object. That doesn't matter for integers (and in fact even makes things slower for integers!) but matters a lot for larger objects.

Numerical Development on OSX (in the command line) (from 2013/08/19)

I've been working on C implementations of my research projects, which can of course be a perilous project. I've found some tools that make it hugely, hugely better.


homebrew

You can't do a list like this without mentioning homebrew. You want homebrew instead of MacPorts or Fink or baling twine and chewing gum or whatever else you were thinking about using. Just do it: You can find the homepage at brew.sh or just install with:

$ ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"


libgmalloc

It's really easy to accidentally use freed memory if you aren't careful with your pointers. These bugs are pretty hard to find, because they induce bugs far away from the actual mistake. One thing that makes it much, much easier is libgmalloc, a library included in OSX. On Linux, I believe that Electric Fence does a similar thing. The point of these libraries is to fail as soon as you dereference a freed pointer or read/write past the end of a buffer, instead of much later, when your program is exhibiting some undefined behavior (because you read/wrote memory important for something other than what you thought).

It is very straightforward to use. You just define an environment variable to load that library instead, and then run your code:

$ DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib ./my_buggy_code

The upside is: If you have a memory bug, the program will crash. The downsides: It will run slower. It also might miss slight overruns.

You can try to catch more of the “slight overruns” with the MALLOC_STRICT_SIZE option.

You can also use it inside of gdb/lldb:

(gdb) set env DYLD_INSERT_LIBRARIES /usr/lib/libgmalloc.dylib
(lldb) settings set target.env-vars DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib

which brings me to my next item:


lldb

I am more of an “add print statements and inspect run logs post-mortem” kind of programmer (and not very proud of that…), but sometimes there is a genuine call to run a debugger and step through execution. For those times, I've always just used gdb. But it turns out that gdb doesn't work as well on OSX–I found I was missing symbol names for a bunch of stuff, which is really annoying. So, lately, I've been using LLVM's lldb. There is extremely good documentation–tutorials, examples, gotchas, and troubleshooting–available at the LLDB homepage.

It's an extremely good idea to run lldb with libgmalloc, because then you can actually home in on what freed variables are being accessed. For example, let's debug the code in the libgmalloc man page. The following commands compile the code, start the debugger, define the environment variable, and run:

$ cc -g3 -o gmalloctest gmalloctest.c
$ lldb ./gmalloctest
(lldb) settings set target.env-vars DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib
(lldb) run
Process 28744 stopped
* thread #1: tid = 0x1c03, 0x0000000100000ec3 gmalloctest`main + 51 at gmalloctest.c:10, stop reason = EXC_BAD_ACCESS (code=1, address=0x1003b3000)
    frame #0: 0x0000000100000ec3 gmalloctest`main + 51 at gmalloctest.c:10
   7   	  unsigned i;
   9   	  for (i = 0; i < 200; i++) {
-> 10  	    buffer[i] = i;
   11  	  }
   13  	  for (i = 0; i < 200; i++) {

(I left out some GuardMalloc and lldb startup messages.)


google-perftools

Google Performance Tools is a complete package for doing performance diagnostics. In particular, they have a CPU profiler and a drop-in replacement malloc library. I haven't used the malloc library (and in fact it's broken on my install), so I'll just discuss the CPU profiler.

Installation is easy, if you followed the first step. Just run

$ brew install google-perftools

Then you need to get the profiler library loaded, in one of two ways:

  • Compile your code with the -lprofiler flag (preferred option!) and run:

$ CPUPROFILE=/tmp/myprofile ./mybuggycode

  • Or insert the library at run time with a command like:

$ CPUPROFILE=/tmp/myprofile DYLD_INSERT_LIBRARIES=/usr/local/lib/libprofiler.dylib ./mybuggycode

Either way, this loads the profiler and tells it where to write the CPU profile.

So, now the big downside: You must recompile your code to work with Google Profiler, at least for modern versions of OSX. The relevant flag is -Wl,-no_pie. This, in my opinion, is a pretty strong reason to stick with Instruments.


valgrind

Valgrind is another useful memory-error-finding tool, and it can find a few more kinds of mistakes. There are a few gotchas to running valgrind on OSX, though: for some reason, it doesn't always pick up the symbols. This is simple to fix:

$ valgrind --dsymutil=yes ./my_buggy_code

There are a ton of other useful options. Instead of rewriting the manual, I'll just list a few of my other favorites:

  • --leak-check=full

  • --show-reachable=yes

  • --num-callers=20

  • --track-fds=yes

Lately, though, I've been trying to use Instruments, a really great profiling and memory-checking utility included with OSX. It's definitely worth looking into as well. But I'll have to save it for another day and another post.

spring-mass visualization (from 2013/07/31)

I'm working on a paper about an algorithm for hotstarting nonlinear program solves; one application of this might be in the realm of nonlinear model predictive control. In these types of models, we first define the physical equations for the system under consideration. They are subject to some control parameters, which are just a mathematical representation of the input we could give the system. We also define an objective–something that we would like to minimize (usually something like time or energy).

The problem we picked to test our code with is a simple one: We have several masses connected by springs, arranged left-to-right. The far-left ball is fixed in position. The masses all start in a horizontal configuration. Gravity then takes over, and the masses fall into place. Since they are connected by springs, it takes a while to settle. We can control this system by moving the far-right ball around. The goal we define is for the system to come to rest as soon as possible (the idea being that we could move that far-right ball around to help it come to rest).

There's a bit more detail to the mathematical formulation, but that's all in the paper. Here, I want to go over the visualization. So, to start, here's the visualization I generated:

Optimal Control of Spring-Connected Masses from Travis Johnson on Vimeo.

The basic flow I used to generate it:

  1. Run the mathematical model to actually generate the optimal controls and resulting point mass positions, and output them to a file.

  2. Run POVray on each file to generate pictures.

  3. String the pictures together using ffmpeg.

Before moving on, I want to note that this is a really inefficient way to do this. POVray can do file reading, but I couldn't get it working properly, and I was trying to crank something out really quickly, so I used this kinda hacky way. Hopefully someday I can update it to do a more reasonable job.

Now for the details:

Position Generation

Again, a lot of the details of the algorithm will be in the paper. The main trick I used here was to have the MATLAB code actually output POVray code. In particular, each NMPC solution generated something like:

#include "spiral2_common.inc"

//  1
object {Make_Spiral2(1.400397, 0.000000,0.000000,0.000000, 0.744091,-1.186355,0.000000, <0,0,0.000000> ) }
object {Make_Spiral2(0.780335, 0.744091,-1.186355,0.000000, 1.499604,-1.381607,0.000000, <0,0,0.000000> ) }
object {Make_Spiral2(0.750502, 1.499604,-1.381607,0.000000, 2.250000,-1.394229,0.000000, <0,0,0.000000> ) }
object {Make_Spiral2(0.750502, 2.250000,-1.394229,0.000000, 3.000396,-1.381607,0.000000, <0,0,0.000000> ) }
object {Make_Spiral2(0.780335, 3.000396,-1.381607,0.000000, 3.755909,-1.186355,0.000000, <0,0,0.000000> ) }
object {Make_Spiral2(1.400397, 3.755909,-1.186355,0.000000, 4.500000,0.000000,0.000000, <0,0,0.000000> ) }

sphere { <0.000000,0.000000,0.000000>, 0.25
        texture{ pigment{ color rgb<0,1,0>}
                 finish { phong 1}}
       }
sphere { <4.500000,0.000000,0.000000>, 0.3
        texture{ pigment{ color rgb<1,0,0>}
                 finish { phong 1}}
       }

Here, the numbers actually come from the simulation. Also note that I needed to generate some 57 different .pov files, one for each control we solved for.

POVray Spirals

So I had all the positions written out into different files, but we still need to define the Make_Spiral2 function.

I searched around and found Friedrich A. Lohmueller's Spiral Tutorial. Basically, he rotates a sphere through three-space, taking the union of the many spheres, yielding a spring or spiral shape. It was pretty close to what I wanted, except that I wanted to specify start and end points for my springs and masses. So I changed the code to automatically rotate the correct amount based on the start/end coordinates given. The full file spiral2_common.inc also needs to define light sources, camera views, and any colors and textures:

// POV-Ray 3.6/3.7 Scene File "spiral0.pov"
// created by Friedrich A. Lohmueller, 2003 / 2010 / Jan-2011
// Modified by traviscj, July 2013.
// Demonstrates the animation of a spiral
#version 3.6; // 3.7;
global_settings{ assumed_gamma 1.0 }
#default{ finish{ ambient 0.1 diffuse 0.9 conserve_energy}}
#include "colors.inc"
#include "textures.inc"
// camera ------------------------------------------------------------------
#declare Camera_0 = camera {angle 25
                            location  <2.5 , -6 ,-50>
                            right     x*image_width/image_height
                            look_at   <2.5 , -6 , 0.0>}
// sun ---------------------------------------------------------------------
light_source{< 1500,2500,-2500> color White}
// sky ---------------------------------------------------------------------
sky_sphere { pigment { color rgb <1.0,1.0,1.0>
                     } // end of pigment
           } //end of skysphere
//---------------------------- objects in scene ----------------------------
#macro Make_Spiral2(Sp_Length, xx1,yy1,zz1, xx2,yy2,zz2, rotrot)
#declare Spiral = union{ //------------------------ the spiral
// #declare Sp_Length = 2.0;
 #local SPIRAL_RADIUS = .1;
 #local N_per_Rev = 500;   // Number of Elements per revolution
 #local N_of_Rev  = 8.00;  // Total number of revolutions
 #local H_per_Ref = Sp_Length / N_of_Rev;// Height per revolution
 #local Nr = 0;                          // start loop
 #while (Nr< N_per_Rev*N_of_Rev)
   sphere{ <0,0,0>,0.025
           translate<SPIRAL_RADIUS, -Nr*H_per_Ref/N_per_Rev, 0>
           rotate<0,  Nr * 360/N_per_Rev,0>
           texture{ Chrome_Metal
                    finish { phong 1}}
         } // end of sphere
 #local Nr = Nr + 1;    // next Nr
 #end // --------------------------------- end of loop

  sphere { <0,0,0>, 0.2
          texture{ pigment{ color rgb<0,0,1>}
                   finish { phong 1}}
         } // end of sphere
} // end of union  -------------------------------- end of spiral
Spiral rotate <0,0, 90+degrees(atan2((yy2-yy1),(xx2-xx1)))> translate <xx1,yy1,zz1>
#end // -------------------------------------------- end of macro

Running POVray

Before compiling the POVray code, obviously, POVray is required. On OSX, Homebrew handles this nicely:

brew install povray

Before running POVray, I created a .ini file for each frame, with Input_File_Name pointing at the corresponding .pov file and Output_File_Name naming the PNG to produce. I think there must be a better way, but it's a start.
With all of those .ini files in place, the most straightforward way to run them all is just

$ for f in spiral2-*.ini; do povray $f; done

This gives me a folder full of PNG files.

Assembling video

This ended up being trickier than I thought it should be. I couldn't find a really solid video conversion command, but I finally found mencoder's image encoding syntax and ran

$ mencoder mf://\*.png -mf w=800:h=600:fps=4:type=png -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi

Note that this downsamples the resolution; I could get a bit better video by not doing that.

As usual, I uploaded everything to github.

python decorators (from 2013/07/30)

I came across something that I think is actually pretty neat. Python supports “decorating” functions, which lets you define behaviors that essentially wrap around a given function. The benefit is that you can write a simple function and decorate it to do something fancier.

I realized I wanted to do this with a simple timing output. Say my function was just something really simple like

def f_plain(x,y):
    print("x + y = {}".format( x+y))
    return x+y
print ("---")
print (f_plain(1,3))

Now, to time it, I could define a function like:

def f_plain_timed(x, y):
    import time
    start = time.time()
    retval = f_plain(x, y)
    end = time.time()
    print("Time elapsed in f_plain: {} seconds".format(end-start))
    return retval

print ("---")
print (f_plain_timed(1,3))

But this isn't a very general way to deal with it–I'd have to redefine f_plain_timed for each function I wanted to time. So instead, I can define a function that takes a function as an argument. There's a bunch of steps here that I should be showing, and am not, but the idea is to get something like this:

def decor(g):
    def decor_inner(*args, **kwargs):
        import time
        start = time.time()
        retval = g(*args, **kwargs)
        end = time.time()
        print("Time elapsed in {}: {} sec".format(g.__name__, end-start))
        return retval
    return decor_inner

@decor
def f_decor(x,y):
    print("x + y = {}".format(x+y))
    return x+y
print ("---")
print (f_decor(1,3))

What's going on here? Several things:

  1. We define a decorator “decor”

  2. Define an inner function “decor_inner” which calls the function we passed originally

  3. Python allows the syntactic sugar “@decor” to mark that “f_decor” should actually be the composition of the function “decor” and the function “f_decor” as written. It returns a function with the call signature defined by “f_decor”.
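One of the steps I skipped deserves a quick sketch: the @ syntax is just ordinary function application plus rebinding the name, so these two spellings are equivalent:

```python
import time

def decor(g):
    def decor_inner(*args, **kwargs):
        start = time.time()
        retval = g(*args, **kwargs)
        end = time.time()
        print("Time elapsed in {}: {} sec".format(g.__name__, end - start))
        return retval
    return decor_inner

def f(x, y):
    return x + y

# what @decor does under the hood: call decor and rebind the name
f = decor(f)
print(f(1, 3))  # prints the timing line, then 4
```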

Finally–our decorators can actually even take parameters. This might be useful for specifying what logfiles a particular function should use. It's also used in flask to define routes for applications, among other things. To define this kind of decorator, we need to wrap one more function:

def decor_msg(message):
    def decor_msg_real(g):
        def decor_msg_real_inner(*args, **kwargs):
            import time
            start = time.time()
            retval = g(*args, **kwargs)
            end = time.time()
            print(message+" Time elapsed in {}: {} sec".format(g.__name__, end-start))
            return retval
        return decor_msg_real_inner
    return decor_msg_real

m = "Hello!!!"
@decor_msg(m)
def f_decor_msg(x,y):
    print("x + y = {}".format( x+y))
    return x+y

print ("---")
print (f_decor_msg(1,3))

I think that about ties it up. Back to work!
