Lambda Expressions – an Octave/Scala Comparison

Posted on December 3, 2012



CC-by-SA

Too bad I wasn’t able to complete Coursera’s Scala and Machine Learning courses in time to receive a final grade, as I was originally planning. Oh well, diapers and work come first.

Hannah-chan with Style

In addition to that, two courses I’ve been waiting on for a while (the University of Illinois’ Heterogeneous Parallel Programming and Stanford’s Algorithms II) have already started. The parallel programming course is already a week underway. Time to catch up and move on to the next plate-spinning challenge.

Nevertheless…

I did get a lot of learning done with the Scala and Machine Learning courses, and they played off each other nicely. The Machine Learning course used GNU Octave, a Matlab-like language for statistical and numerical computation. Given my unfamiliarity with Octave, I ended up prototyping and verifying my algorithms in Scala and then porting them to Octave.

That synergy let me delve into Octave’s basic syntax for functional programming. What better way to illustrate the basic similarities (and differences) between Scala and Octave than with an example?

For Starters, A Scala Appetizer

Suppose we want to implement a function that approximates the derivative of an arbitrary differentiable function. We can do so by simply implementing the symmetric-difference formula that defines the derivative:

f'(x) = \lim_{\epsilon\rightarrow 0}\frac{f(x+\epsilon)-f(x-\epsilon)}{2\epsilon}

Here is one way to do it in Scala using functional programming concepts. The function that computes the gradient approximation receives another function as a first-class parameter:

/*
 * Based on http://www.scala-lang.org/node/4142#comment-16806,
 * we use these two lines to add an implicit exponentiation shortcut
 * to numeric operations
 */
class ExtDouble(d:Double){ def ~^(e:Double) = scala.math.pow(d,e) }
implicit def extDouble(d:Double) = new ExtDouble(d)

// our derivative approximation function
def gradient(f: (Double)=>Double, epsilon: Double) : (Double) => Double =
	(x) => (f(x+epsilon) - f(x-epsilon))/(2*epsilon)

/*
 * let's try to approximate the derivative of 2 * x^4 + 2
 * (which is 8 * x^3) at x = 100 (where its value is 8,000,000)
 */

def J(x:Double)  = 2*(x)~^4 + 2

// calling the actual function
gradient(J,0.0000001)(100) // yields 7999999.523162842

// calling a named lambda
val mylambda = (x:Double) => 2*(x)~^4 + 2
gradient(mylambda,0.0000001)(100) // also yields 7999999.523162842

// calling a lambda
gradient(
	(x) => 2*(x)~^4 + 2,
	0.0000001)(100) // yields 7999999.523162842, too

Simple functional-programming bread and butter: because functions are first-class values, we can pass them around as arguments (and create function literals).
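As an aside (my sketch, not part of the original example), one payoff of returning a function from gradient is that we can feed that function straight back into gradient to approximate a second derivative. The object name and the epsilon values here are my own choices for illustration:

```scala
// A sketch of composing the gradient approximation with itself.
object SecondDerivative {
  def gradient(f: Double => Double, epsilon: Double): Double => Double =
    (x: Double) => (f(x + epsilon) - f(x - epsilon)) / (2 * epsilon)

  def main(args: Array[String]): Unit = {
    val j: Double => Double = x => 2 * math.pow(x, 4) + 2

    // first derivative: approximately 8 * x^3
    val j1 = gradient(j, 1e-4)

    // second derivative: approximately 24 * x^2
    val j2 = gradient(j1, 1e-4)

    println(j1(10)) // approximately 8 * 10^3  = 8000
    println(j2(10)) // approximately 24 * 10^2 = 2400
  }
}
```

Because gradient returns a plain `Double => Double`, nothing distinguishes a "derived" function from a hand-written one, so the composition comes for free.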

Now, Onto Octave

A similar base implementation of our gradient-approximating function can be written in Octave. Before we proceed, a quick note: in Octave, a function declaration names its return value (in this case retval). Don’t let that syntactic quirk throw you off.

# our approximation algorithm in Octave
function retval = gradient(J,theta,epsilon)
  retval = (J(theta+epsilon) - J(theta-epsilon))/(2*epsilon);
endfunction

# and the function whose slope we want to approximate when x=100
function retval = J(x)
  retval = (2 * (x^4)) + 2;
endfunction

# and now, some applications of the gradient function
# similar to the ones in the Scala example.

# using the function itself
gradient(@J, 100, 0.0000001)

# using a lambda expression bound to a variable/symbol
k= @(x) (2 * (x^4)) + 2
gradient(k, 100, 0.0000001)

# using a lambda expression directly
gradient(@(x) (2*(x^4))+2, 100, 0.0000001)

So far so good, but notice what happens in the Octave snippet below when we call gradient with actual parameters. Specifically, notice the @ symbol prefixing the actual parameter J (the function whose derivative we want to approximate):

gradient(@J, 100, 0.0000001)

What’s going on here? In Octave, when we APPLY the gradient function to actual parameters, we need a reference to the target function. That is what Octave’s @ (function handle) operator provides.

An algorithm can be defined to take a function as an argument and use it without any syntactic quirk. But at the application of the function, at the actual call, we need to obtain a handle to the function before passing it as a parameter value.

So, do we always use the @ operator like that? Not really. This is where Octave’s documentation gets tricky to delve into (if this is spelled out anywhere in the Octave docs, I have not found it).

Consider the case where we want to compute the gradient of a lambda bound to a variable name:

k= @(x) (2 * (x^4)) + 2
gradient(k, 100, 0.0000001)

Now, before we proceed, pay close attention to an important subtlety. In the previous case, we applied the @ operator to the function name J when passing it as an argument to the gradient function.

In this other case, we apply the @ operator to the function expression itself before binding it to a variable, in this case k.

Here is the key ingredient to notice: if a function expects one of its formal parameters to be a function, then the actual invocation needs to pass a function handle.

In the first case

gradient(@J, 100, 0.0000001)

the gradient function requires its first actual argument to be a function handle. The expression @J does just that.

In the second case

k= @(x) (2 * (x^4)) + 2
gradient(k, 100, 0.0000001)

the variable k is already a function handle, so we do not need to prefix it with @ when passing it to the gradient function.

Unlike the previous example, the @ operator was applied to the function expression before binding it to k. From that point on, Octave sees k as a function handle and passes it along accordingly.

In the third case

gradient(@(x) (2*(x^4))+2, 100, 0.0000001)

we pass a lambda expression directly to gradient, which requires a function handle as its actual parameter.

So we do the same business: apply the @ operator to the lambda expression itself. And voilà, we have a function handle going into the gradient function.
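For contrast (my aside, with names of my own choosing), Scala does this lifting for you: when a method name appears where a function type is expected, the compiler eta-expands it into a function value, so no explicit handle operator is needed at the call site:

```scala
// A sketch of Scala's implicit method-to-function lifting (eta-expansion).
object EtaExpansionDemo {
  def gradient(f: Double => Double, epsilon: Double): Double => Double =
    (x: Double) => (f(x + epsilon) - f(x - epsilon)) / (2 * epsilon)

  def j(x: Double): Double = 2 * math.pow(x, 4) + 2

  def main(args: Array[String]): Unit = {
    // The method j is eta-expanded to a Double => Double automatically,
    // playing the role Octave's @J plays explicitly.
    println(gradient(j, 0.0000001)(100))

    // The expansion can also be requested explicitly with an underscore.
    println(gradient(j _, 0.0000001)(100))
  }
}
```

Both calls perform the same computation as the original Scala example; Octave simply makes the lifting step visible where Scala hides it.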

The usage of function handles might seem a bit clunky, but hey, all languages have clunky constructs. After all, Scala does not have an exponentiation operator – its design around flexible, user-defined operators poses a problem for adopting ^ or ** as you would find them in other languages.

Nevertheless, function handles are a lot easier and safer than function pointers, so I can live with them if and when I use Octave’s functional programming capabilities.


