<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Research on traviscj/blog</title>
    <link>https://traviscj.com/blog/tags/research/</link>
    <description>Recent content in Research on traviscj/blog</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Tue, 09 Jan 2018 08:00:00 +0000</lastBuildDate>
    <atom:link href="https://traviscj.com/blog/tags/research/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>profitably wrong</title>
      <link>https://traviscj.com/blog/post/2018-01-09-profitably_wrong/</link>
      <pubDate>Tue, 09 Jan 2018 08:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2018-01-09-profitably_wrong/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve come to realize that my career so far has been built on being &amp;ldquo;profitably wrong.&amp;rdquo;&#xA;I think this is interesting because the usual approaches are being &amp;ldquo;profitably fast&amp;rdquo; (optimizing)&#xA;or &amp;ldquo;profitably better&amp;rdquo; (improving),&#xA;and most people think of any kind of wrongness as a terrible thing.&#xA;But sometimes the best way to optimize or improve is &lt;em&gt;approximating&lt;/em&gt;!&lt;/p&gt;&#xA;&lt;p&gt;The definition of &amp;ldquo;profitably&amp;rdquo; has changed as I&amp;rsquo;ve worked on different things, as has the specific type of &amp;ldquo;wrongness&amp;rdquo;.&#xA;A couple of specific ways in which accepting &amp;ldquo;wrongness&amp;rdquo; has been profitable for me include:&lt;/p&gt;</description>
    </item>
    <item>
      <title>logging</title>
      <link>https://traviscj.com/blog/post/2014-09-26-logging/</link>
      <pubDate>Fri, 26 Sep 2014 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2014-09-26-logging/</guid>
      <description>&lt;p&gt;In grad school, I spent a lot of time writing code that read output from nonlinear optimization solvers and tried&#xA;to do useful things with it.&#xA;A much better way to do that is called &amp;ldquo;structured logging&amp;rdquo;, an idea I experimented with a bit during grad school.&#xA;It has also been coming up in my working life, so I wanted to delve into it a bit deeper.&#xA;For a quick introduction, check out &lt;a href=&#34;http://gregoryszorc.com/blog/category/logging/&#34;&gt;Thoughts on Logging&lt;/a&gt;.&#xA;For a much longer introduction, see &lt;a href=&#34;http://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying&#34;&gt;The Log: What every software engineer should know about real-time data&#39;s unifying abstraction&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>piping for fun and profit</title>
      <link>https://traviscj.com/blog/post/2014-05-29-piping_for_fun_and_profit/</link>
      <pubDate>Thu, 29 May 2014 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2014-05-29-piping_for_fun_and_profit/</guid>
      <description>&lt;p&gt;I recently discovered something pretty cool: Groovy, and in particular groovysh. It lets you do cool stuff like run&#xA;JVM functions:&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code&gt;➜  ~  groovysh&#xA;Groovy Shell (2.3.3, JVM: 1.8.0)&#xA;Type &#39;:help&#39; or &#39;:h&#39; for help.&#xA;-------------------------------------------------------------------------------&#xA;groovy:000&amp;gt; new Random().nextInt()&#xA;===&amp;gt; 909782845&#xA;&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;p&gt;But the sad part is that it seems pretty slow on my machine:&lt;/p&gt;&#xA;&lt;pre&gt;&lt;code&gt;➜  ~  time (echo :q | groovysh)&#xA;Groovy Shell (2.3.3, JVM: 1.8.0)&#xA;Type &#39;:help&#39; or &#39;:h&#39; for help.&#xA;-------------------------------------------------------------------------------&#xA;groovy:000&amp;gt; :q&#xA;( echo :q | groovysh; )  16.56s user 0.31s system 201% cpu 8.384 total&#xA;&lt;/code&gt;&lt;/pre&gt;&#xA;&lt;p&gt;That&amp;rsquo;s more than 8 seconds just to start up and shut down a prompt that I might run only one command in!&lt;/p&gt;</description>
    </item>
    <item>
      <title>overly-ambitious-isqo and the design of numerical codes</title>
      <link>https://traviscj.com/blog/post/2013-10-23-overly-ambitious-isqo_and_the_design_of_numerical_codes/</link>
      <pubDate>Wed, 23 Oct 2013 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2013-10-23-overly-ambitious-isqo_and_the_design_of_numerical_codes/</guid>
      <description>&lt;p&gt;I have finally released the &lt;a href=&#34;https://github.com/traviscj/overly-ambitious-isqo&#34;&gt;overly-ambitious-isqo&lt;/a&gt; project on github!&lt;/p&gt;&#xA;&lt;p&gt;I wanted to call out two particular design concerns I had.&lt;/p&gt;&#xA;&lt;h2 id=&#34;rich-language&#34;&gt;rich language&lt;/h2&gt;&#xA;&lt;p&gt;My first goal was to try very hard to build up the C++ language to very succinctly express the main algorithm in &lt;code&gt;src/isqo_functor.cpp&lt;/code&gt; in extremely rich language. It seems like numerical code is typically implemented with loops like &lt;code&gt;for (int i=0; i&amp;lt;N; i++)&lt;/code&gt; and method calls like &lt;code&gt;deltal(xhat, mu)&lt;/code&gt;. I have found it much easier to reason and think deeply about code like &lt;code&gt;for (int primal_index=0; primal_index &amp;lt; num_primal; primal_index++)&lt;/code&gt; and method calls like &lt;code&gt;linear_model_reduction(penalty_iterate, penalty_parameter)&lt;/code&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>spring-mass visualization</title>
      <link>https://traviscj.com/blog/post/2013-07-31-spring-mass_visualization/</link>
      <pubDate>Wed, 31 Jul 2013 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2013-07-31-spring-mass_visualization/</guid>
      <description>&lt;p&gt;I&amp;rsquo;m working on a paper about an algorithm for hotstarting nonlinear program solves; one application of this might be in the realm of nonlinear model predictive control.&#xA;In these types of models, we first define the physical equations for the system under consideration.&#xA;They are subject to some control parameters, which are just a mathematical representation of the input we could give the system.&#xA;We also define an objective&amp;ndash;something that we would like to minimize (usually something like time or energy).&lt;/p&gt;</description>
    </item>
    <item>
      <title>Perturbation Theory Problems with bvp4c</title>
      <link>https://traviscj.com/blog/post/2012-10-22-perturbation_theory_problems_with_bvp4c/</link>
      <pubDate>Mon, 22 Oct 2012 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2012-10-22-perturbation_theory_problems_with_bvp4c/</guid>
      <description>&lt;p&gt;I have been watching Nathan Kutz&amp;rsquo; lectures on Coursera.&#xA;One change he made to the course since I took AMATH 581 at the University of Washington was introducing the MATLAB function &lt;em&gt;bvp4c&lt;/em&gt;.&#xA;I immediately realized that this would be nice for solving boundary layer problems that arise in asymptotics.&lt;/p&gt;&#xA;&lt;p&gt;Following my life philosophy of doing the dumbest thing that could possibly work, I tried implementing Nathan&amp;rsquo;s code for a single-layer boundary layer problem from Holmes, Chapter 2:&lt;/p&gt;</description>
    </item>
    <item>
      <title>Numerical Recipes &amp; Scientific Libraries</title>
      <link>https://traviscj.com/blog/post/2012-05-26-numerical_recipes_scientific_libraries/</link>
      <pubDate>Sat, 26 May 2012 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2012-05-26-numerical_recipes_scientific_libraries/</guid>
      <description>&lt;p&gt;I attended a talk on how to use &lt;a href=&#34;http://www.it.northwestern.edu/research/adv-research/hpc/quest/index.html&#34;&gt;Quest&lt;/a&gt;, Northwestern University&amp;rsquo;s TOP500 supercomputer (at least as of June 2010). Most of it was a routine introduction to MPI, but one interesting question raised was which routines we should be using in our scientific computing codes. A lot of holdouts were still using Numerical Recipes for their research-level codes, which strikes me as backwards. Numerical Recipes is a starting point, and probably &lt;a href=&#34;http://web.archive.org/web/20021015200910/http://math.jpl.nasa.gov/nr/nr-alt.html&#34;&gt;not the best&lt;/a&gt; thing to use: &lt;a href=&#34;http://mingus.as.arizona.edu/~bjw/software/boycottnr.html&#34;&gt;it has awful licensing&lt;/a&gt; and &lt;a href=&#34;http://www.uwyo.edu/buerkle/misc/wnotnr.html&#34;&gt;might not even be that reliable!&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>spamfunc for optimization in matlab</title>
      <link>https://traviscj.com/blog/post/2012-01-30-spamfunc_for_optimization_in_matlab/</link>
      <pubDate>Mon, 30 Jan 2012 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2012-01-30-spamfunc_for_optimization_in_matlab/</guid>
      <description>&lt;h1 id=&#34;what-spamfunc-is&#34;&gt;what spamfunc is&lt;/h1&gt;&#xA;&lt;p&gt;In developing optimization algorithms, one of the most tedious parts is trying different examples, each of which might have its own starting points, upper or lower bounds, or other information.&#xA;The tedium really starts when your algorithm requires first- or second-order information, which might be tricky to calculate correctly.&#xA;These bugs can be pernicious, because it might be difficult to differentiate between a bug in your algorithm and a bug in your objective or constraint evaluation.&#xA;Handily, Northwestern Professor &lt;a href=&#34;http://users.iems.northwestern.edu/~4er/&#34;&gt;Robert Fourer&lt;/a&gt; wrote a language called &lt;a href=&#34;http://www.ampl.com/&#34;&gt;AMPL&lt;/a&gt;, which takes a programming-language specification of the objective and constraints and calculates derivatives as needed.&#xA;The official amplfunc/spamfunc reference is contained in &lt;a href=&#34;http://ampl.com/REFS/HOOKING/#UsewithMATLAB&#34;&gt;Hooking Your Solver to AMPL&lt;/a&gt;, but I&amp;rsquo;m shooting for a more low-key introduction.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Why Do Math and Izhikevich</title>
      <link>https://traviscj.com/blog/post/2009-01-06-why_do_math_and_izhikevich/</link>
      <pubDate>Tue, 06 Jan 2009 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2009-01-06-why_do_math_and_izhikevich/</guid>
      <description>&lt;p&gt;Professor Eric Shea-Brown has written up a nice website explaining what we&amp;rsquo;re doing with the computational neuroscience modeling. It&amp;rsquo;s currently on the Why Do Math website at this link: &lt;a href=&#34;http://dev.whydomath.org/node/HHneuro/index.html&#34;&gt;Brain Dynamics: The Mathematics of the Spike&lt;/a&gt;.&lt;/p&gt;&#xA;&lt;p&gt;For my project, we&amp;rsquo;re starting with a simpler model that is similar in behavior but quicker computationally and (somewhat) easier to analyze mathematically, known as the Izhikevich model (after its creator, Eugene Izhikevich). His website has some amazingly cool videos and a lot of papers on what he&amp;rsquo;s been doing:&#xA;&lt;a href=&#34;http://vesicle.nsi.edu/users/izhikevich/&#34;&gt;His Website&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Computer Resurrection and Elastic Cloud Experimentation</title>
      <link>https://traviscj.com/blog/post/2008-11-29-computer_ressurection_and_elastic_cloud_experimentation/</link>
      <pubDate>Sat, 29 Nov 2008 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2008-11-29-computer_ressurection_and_elastic_cloud_experimentation/</guid>
      <description>&lt;p&gt;I was home on Thanksgiving Break with Sharvil, and we decided to revive some old computers. Partly I&amp;rsquo;d like to experiment with some clustering stuff without incurring CPU time at the AMATH department or on the Teragrid stuff I&amp;rsquo;m likely gonna be working on soon with Shea-Brown&amp;rsquo;s neuroscience research. So, it turns out I resurrected about 5-6 old computers (the final tally is still waiting on the number of successful Xubuntu installs, among other practical issues (where the hell am I going to put six computers&amp;hellip;?)): the very first computer I built (a P3 450), a P3 700, a dual P2 266, a couple of AMD64 3200&amp;rsquo;s, and a Sony Vaio P3 733. The cool thing is that the neuron spiking models are basically embarrassingly parallel (well, each run isn&amp;rsquo;t necessarily, but from what I&amp;rsquo;ve gathered so far, we&amp;rsquo;re looking for averages over a bunch of them). So, sweet! Again, this would be terrible for actual research, especially against something like TG or even Amazon&amp;rsquo;s EC2&amp;ndash;which is another thing I really need to check out.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Fame and FORTRAN</title>
      <link>https://traviscj.com/blog/post/2008-11-08-fame_and_fortran/</link>
      <pubDate>Sat, 08 Nov 2008 00:00:00 +0000</pubDate>
      <guid>https://traviscj.com/blog/post/2008-11-08-fame_and_fortran/</guid>
      <description>&lt;p&gt;I must be getting more popular on some search engines somewhere. I just got six random comment-spam messages. Awesome. I guess that&amp;rsquo;s why the more important bloggers have come to rely on Bayesian filters and so forth for taming the wild flow of spam. Hopefully that trend doesn&amp;rsquo;t continue.&lt;/p&gt;&#xA;&lt;p&gt;Also, it seems as though I am now learning FORTRAN. I&amp;rsquo;m sort of starting to work with Eric Shea-Brown on some neuroscience research, working with HPC on NSF&amp;rsquo;s Teragrid. It&amp;rsquo;s pretty exciting stuff, and I&amp;rsquo;m really excited about getting moving on it. Anyways, back to FORTRANizing, I suppose.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
