Posts: 2266
0 votes RE: Computational Models

Using Monte Carlo to value TSLA puts with a 450 strike price

Fig 1. Using Historical Volatility (0.64) to represent sigma

Fig 2. Using Implied Volatility (0.75) to represent sigma

 

Posts: 2266
0 votes RE: Computational Models

I am now studying Stepanov's work and will be going over his general theory of programming in another thread, but here I will go through some of his generic programming exercises. Generic programming as an idea is very interesting: it is built around making algorithms abstract and general enough to work efficiently across systems, which is pretty desirable in today's programming environment given how common over-optimization of algorithms is. As an added note, Stepanov believes in the necessity of a mathematical foundation for computer science that is explicit instead of implicit, which makes his work interesting to someone like myself who also lives in two worlds. 

All code explored will be in C++.


Generic programming was officially introduced in 1988 in the Musser-Stepanov paper, in which it is stated as follows:

"By generic programming we mean the definition of algorithms and data structures at an abstract or generic level thereby accomplishing many related programming tasks simultaneously. The central notion is that of generic algorithms which are parameterized procedural schemata that are completely independent of the underlying data representation and are derived from concrete efficient algorithms"

which in summary makes this style of programming primarily concerned with generalizations of algorithms. In this sense generic programming functions effectively as the 'abstract algebra' of computational languages: it is the exploration of computational structures that are generalizations of specific ones. Essentially, given a set of algorithms, what makes them structurally similar despite their use cases being fundamentally different? 

Generic programming is therefore very useful in the overall understanding of algorithms and data structures, but to those concerned with computational theory, and more generally with CS as an extension of Applied Mathematics, it potentially serves a much greater role. The connection between generic structures and algebraic structures suggests that generic programming can serve as a definitional foundation for all software. By this I mean that algebraic structures are the literal definitions for all software. 
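To make that concrete, here is a small sketch of my own (Python for brevity, not Stepanov's code): a single generic reduction that works for any type paired with an associative operation and an identity element, which is precisely the monoid structure from algebra.

```python
# One generic algorithm: fold a sequence with an associative operation.
# The structure (repeated application of op) is shared across all uses;
# only the (operation, identity) pair -- the algebraic structure -- changes.
def fold(items, op, identity):
    result = identity
    for x in items:
        result = op(result, x)
    return result

print(fold([1, 2, 3, 4], lambda a, b: a + b, 0))      # 10 (sum)
print(fold([1, 2, 3, 4], lambda a, b: a * b, 1))      # 24 (product)
print(fold(["a", "b", "c"], lambda a, b: a + b, ""))  # abc (concatenation)
```

The same few lines serve integers under addition, integers under multiplication, and strings under concatenation; what the three uses share is exactly an associative operation with an identity.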

 

Associativity is a key principle in algebra, as it is one of the fundamental properties that define a field, the general object that the more fluid algebras all conform to. Associativity is defined as a + (b + c) = (a + b) + c, so it is merely a property some sets of objects have whereby the grouping of their sums does not affect the result. It is this property of fields that inspired Stepanov's search for algebraic structures across software, and as such it is where he begins as an example in the majority of his introductory work. 

So lets talk about multiplication and why associativity is the key structure in efficient multiplication. 

Let a and n be two integers. How would we write a function that returns the intended product n*a? 

You could do something like this, 

int multiply(int n, int a) {
    if (n == 1) return a;
    return multiply(n - 1, a) + a;
}
Here we have two common multiplication equations being converted to code: a special case for the identity of multiplication, and the general case. 

if (n==1) return a; handles identity, that is 1*a = a, and return multiply(n-1, a) + a handles the inductive case, (n + 1)*a = n*a + a, which those who speak CS will recognize as the most basic recursive pattern (induction and recursion are deeply related within type theory). This handling of multiplication is quite common, as it is a simple and adequate solution. However, it is costly compared to other approaches to multiplication such as the Russian Peasant algorithm. 
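To quantify "costly": here is a quick instrumented sketch of my own (Python for convenience) of the same recursion, showing that computing n*a this way performs exactly n - 1 additions.

```python
# Naive recursive multiplication, instrumented to count additions.
# Returns (product, number_of_additions).
def multiply_counted(n, a):
    if n == 1:
        return a, 0                        # identity case: 1*a = a
    prod, adds = multiply_counted(n - 1, a)
    return prod + a, adds + 1              # inductive case: n*a = (n-1)*a + a

print(multiply_counted(41, 3))  # (123, 40): 41*3 costs 40 additions
```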

The Russian Peasant Algorithm can be written as follows, 

bool odd(int n) {return n & 0x1;}
int half(int n) {return n >> 1;}

int multiplyAss(int n, int a) {
   if (n==1) return a;
   int result = multiplyAss(half(n), a + a);
   if (odd(n)) result = result + a;
   return result;
}

The case of identity is no different here; what distinguishes the two approaches is the recursive use of the half function instead of plain induction. On each recursive call the integer a is doubled and the integer n is halved, which reduces the number of additions (we are no longer doing n-1 operations) and forms a more optimized general case of multiplication. The difference in speed is night and day. Where does this speed come from? What is the structure that makes the Russian Peasant Algorithm faster? It takes advantage of associativity: doubling a regroups the sum a + a + ... + a into pairs, (a + a) + (a + a) + ..., which is only valid because addition is associative. 
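The same instrumentation applied to the add-shift version (again a sketch of my own in Python, counting the doubling a + a and the odd-case correction as one addition each) shows the count dropping from linear to roughly logarithmic in n:

```python
def odd(n):
    return n & 0x1

def half(n):
    return n >> 1

# Russian Peasant multiplication, instrumented to count additions.
# Returns (product, number_of_additions).
def multiply_ass_counted(n, a):
    if n == 1:
        return a, 0
    result, adds = multiply_ass_counted(half(n), a + a)
    adds += 1                              # the doubling a + a
    if odd(n):
        result, adds = result + a, adds + 1
    return result, adds

print(multiply_ass_counted(41, 3))  # (123, 7): 7 additions instead of 40
```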

Multiply:

[benchmark image]

MultiplyAss: 

[benchmark image]

The algorithm can be optimized further from here by using multi-accumulation and making the procedure tail-recursive, but associativity still acts as the primary structure. 

Posts: 1319
0 votes RE: Computational Models


Beautiful. This is particularly useful in embedded systems where multiplication is costly. Well done.

last edit on 1/12/2021 5:27:31 AM
Posts: 2266
0 votes RE: Computational Models

Beautiful. This is particularly useful in embedded systems where multiplication is costly. Well done.

 My desire to explore generic programming came out of our discussion of languages. 


I want to touch on motivation here. In conversation with TPG (who has only just begun his CS journey) he asked what '>>' means and why n/2 doesn't suffice. When we are constrained at the systems level (Jim's example), general operators such as '*' or '/' can be costly, and as such we work with bits instead of those operators. This motivates the use of this form of multiplication and points to what '>>' really is: an operator that shifts bits. Hence int half(int n) {return n >> 1;} defines division of n by 2 by shifting bits around instead of using the black-box operation '/', which is defined in a sufficient but less efficient manner. In short, '/' is fine for general abstract programming but does not suffice when every cycle counts. I called the algorithm the Russian Peasant Algorithm, but if we named it in the context of the operations it utilizes we would call it Add-Shift Multiplication, as it is multiplication by means of adding and shifting bits. 
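A quick sanity check on the two operators (a trivial sketch of my own): for non-negative integers, n >> 1 agrees with floor division by 2, and n & 0x1 reads off the lowest bit.

```python
def odd(n):
    return n & 0x1

def half(n):
    return n >> 1

# Shifting right by one drops the lowest bit, which is exactly floor(n / 2);
# the lowest bit itself is the parity of n.
for n in range(16):
    assert half(n) == n // 2
    assert bool(odd(n)) == (n % 2 == 1)

print(half(41), bool(odd(41)))  # 20 True
```

One caveat when carrying the intuition back to C++: right-shifting a negative signed value was implementation-defined before C++20, so the identity is only guaranteed there for non-negative n.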

To continue from the previous example, the associative structure can be optimized via multi-accumulation and tail recursion. 

Multi-accumulation is often seen in signals and systems programming because it cuts out costly function calls when using recursion. In that sense it's not so different from the logic behind using the half(n) function instead of induction, since the general intent is to reduce the cost of the recursive instances of the function. This is achieved by using a third integer r to accumulate the partial products of n*a; by accumulating partial products as we go, no work is left pending after each recursive call. 

bool odd(int n) {return n & 0x1;}
int half(int n) {return n >> 1;}

int multiplyAssAcc(int r, int n, int a) {
    if (n==1) return r + a;
    if (odd(n)) {
        return multiplyAssAcc(r+a, half(n), a+a);
    } else {
        return multiplyAssAcc(r, half(n), a+a);
    }
}

[benchmark image]

By using accumulation we've increased the performance by a factor of 2. 

To increase performance further, although not by much, we can use tail recursion, meaning that recursive calls are only ever made in the return statement. This would drastically speed up the multiplication procedure compared to the original add-shift method, but when accumulation has already been used for optimization the performance increase is not noticeable. However, we can change the structure of our logic as a means of optimization. 

By taking seriously the typical use cases of such an algorithm we can reduce the number of logical comparisons. It is not often that we will use the identity principle of multiplication, so we need to consider the cost of constantly checking the 1*a = a case. In both the add-shift method and the accumulation method the identity case is checked first: we are always checking if n==1, despite it rarely occurring in general and never occurring when n is even. By changing the structure of our logic so that n==1 is only checked when n is odd, we see an increase in performance by another factor of 2. 

int multiply_TR1(int r, int n, int a){
    if(odd(n)){
        r = r+a;
        if(n==1) return r;
    }
    return multiply_TR1(r, half(n), a+a);
}

[benchmark image]

 

All of these approaches to multiplication are valid and have varying levels of performance. Once again, structurally, insofar as the fundamental algebra is concerned, they are all instantiations of associativity. That is, generically speaking, they are all the same algebraic object being manipulated in clever ways. 

Posts: 2266
0 votes RE: Computational Models

So far I've talked about generic programming as a concept and a useful lens for understanding algorithmic structures and their relationship with the algebraic structures that define them. Generic programming is more than a concept or way of perceiving problems, though; it is also a way to actually program. So the focus of this thread is to go over the actual implementation of such programs in C++, but if you prefer more abstract languages such as Java or Python, they too include generic programming capabilities in their standard libraries.

In C++ generic programming is achieved by utilizing templates, which came into fashion after Stepanov and Lee released the Standard Template Library in 1993. The template library includes all the basic functionality you'd expect from any library (operators, iterators, objects, allocators, containers, etc.) but also introduced classes that could be declared with generic types.

Generic types in the context of programming are a means to construct an algorithm without the constraint of committing it to a specific type definition. This captures the fundamental concept of generic programming as a structure-focused way of programming, as opposed to a type-declaration approach. The utility of generic types is that they allow you to declare a single function that can service multiple types, so it only has to be declared once instead of being declared multiple times as type-specific.

If I had a function that I wanted to run multiple data types through, typical type restrictions would require me to create a type-specific version for each type; that is, I would need to write the same function for both. This is common for sorting algorithms, for instance (bubble sort, etc.): we usually use fairly generic structures to sort multiple types of objects. Templates are useful in these instances where I rely on the same structure to serve the same function but for different object types. 

Instead of, 

int function(int a) {
    int b;
    // some procedure on b
    return b;
}

double function(double a) {
    double b;
    // some procedure on b
    return b;
}

You would write, 

template <typename T>
T function(T a) {
    T b;
    // some procedure on b
    return b;
}

The template has a generic type until it is used in relation to an object of a specific type, at which point it takes on the type of that object. So for the template function I could make a a floating-point type, and in doing so the function template will go from generic to type-specific. I could then call the function again with a declared as an integer, and the generic template will then become of type integer. 

Hence, 

int a = 1;
int b = function(a);

will return an object of type integer and, 

double a = 1.0;
double b = function(a);

will return a floating point object.

A question I initially had about templates and the generic approach is how they differ from inheritance, given that both have polymorphic qualities and conceptually seem to serve similar functions. At the type level templates and inheritance are quite different, as inheritance is 'generic' in a hierarchical sense, which is not flexible with types. Furthermore, inheritance is polymorphic at run time while templates are polymorphic at compile time. This latter point is one of the things that distinguishes their use cases beyond type-specific constraints and flexibility: inheritance is costly at run time while templates are costly at compile time. 

My general rule now for my own projects is that whenever the number of object types exceeds the number of structures, I use templates instead of type-specific classes. 

Posts: 2266
0 votes RE: Computational Models

So I am working on a long-term project that will hopefully be a useful guide for others. It is a text that bridges the gap between mathematics and computer science. It will consist of a standard construction of the major branches of mathematics, but using a programming language such as C++ as the working language. That is a long way off though, so I am just messing around with things at the moment to test the waters, if you will. 

This program plays around with the countability of sets. 

It generates two values x and y that represent a range (chosen randomly) and then computes rational values between x and y while indexing each one. Essentially, you are iterating through the density of rationals between x and y. 

import numpy as np

maximum = 10
n = 40

x = np.random.randint(1, maximum)
den_xy = np.random.randint(1, maximum)

print('x = %d/%d = %f' % (x, den_xy, 1.0 * x / den_xy))
print('y = %d/%d = %f' % (x + 1, den_xy, 1.0 * (x + 1) / den_xy))


def density():
    # index the rationals strictly between x/den_xy and (x+1)/den_xy
    # using the common denominator den_xy * (n + 1)
    for i in range(1, n + 1):
        number = x * (n + 1) + i
        density = den_xy * (n + 1)
        print('(' + str(i) + ') ' + str(number) + '/' + str(density) + ' = ' + "%.8f" % (1.0 * number / density))


density()

Result>

x = 7/8 = 0.875000
y = 8/8 = 1.000000
(1) 288/328 = 0.87804878
(2) 289/328 = 0.88109756
(3) 290/328 = 0.88414634
(4) 291/328 = 0.88719512
(5) 292/328 = 0.89024390
(6) 293/328 = 0.89329268
(7) 294/328 = 0.89634146
(8) 295/328 = 0.89939024
(9) 296/328 = 0.90243902
(10) 297/328 = 0.90548780
(11) 298/328 = 0.90853659
(12) 299/328 = 0.91158537
(13) 300/328 = 0.91463415
(14) 301/328 = 0.91768293
(15) 302/328 = 0.92073171
(16) 303/328 = 0.92378049
(17) 304/328 = 0.92682927
(18) 305/328 = 0.92987805
(19) 306/328 = 0.93292683
(20) 307/328 = 0.93597561
(21) 308/328 = 0.93902439
(22) 309/328 = 0.94207317
(23) 310/328 = 0.94512195
(24) 311/328 = 0.94817073
(25) 312/328 = 0.95121951
(26) 313/328 = 0.95426829
(27) 314/328 = 0.95731707
(28) 315/328 = 0.96036585
(29) 316/328 = 0.96341463
(30) 317/328 = 0.96646341
(31) 318/328 = 0.96951220
(32) 319/328 = 0.97256098
(33) 320/328 = 0.97560976
(34) 321/328 = 0.97865854
(35) 322/328 = 0.98170732
(36) 323/328 = 0.98475610
(37) 324/328 = 0.98780488
(38) 325/328 = 0.99085366
(39) 326/328 = 0.99390244
(40) 327/328 = 0.99695122


Posts: 2266
0 votes RE: Computational Models

I am now playing around with groups, rings, and fields. 

Here's a taste: for instance, showing that a certain set has a multiplicative inverse and thereby meets one of the conditions of being a field. This is not efficient, but it explicitly shows the transformations and thereby acts as an automated written proof. 

import sympy as sy

x, y, z2 = sy.symbols('x y z2')
sy.init_printing(use_unicode=True)

r1 = sy.Matrix([[x, y, 1, 0]])
r2 = sy.Matrix([[-y, x, 0, 1]])
print(r1)
print(r2)
print('z2 = x**2 + y**2')


def output():
    print('-------------------------------')
    print(r1)
    print(r2)


def H(row, k):
    # elementary row operation: scale an entire row by k
    return row * k


r2 = r2 + H(r1, y / x)
output()

r2[1] = z2 / x
output()

r2 = r2 * (x / z2)
output()

r1 = r1 + H(r2, -y)
output()

r1[2] = x ** 2 / z2
r1 = r1 * (1 / x)
output()

print('Inverse Matrix = ')
print('(1/z2) . [', r1[2] * z2, ', ', r1[3] * z2, ']')
print('         [', r2[2] * z2, ', ', r2[3] * z2, ']')

Result > 

Matrix([[x, y, 1, 0]])
Matrix([[-y, x, 0, 1]])
z2 = x**2 + y**2
-------------------------------
Matrix([[x, y, 1, 0]])
Matrix([[0, x + y**2/x, y/x, 1]])
-------------------------------
Matrix([[x, y, 1, 0]])
Matrix([[0, z2/x, y/x, 1]])
-------------------------------
Matrix([[x, y, 1, 0]])
Matrix([[0, 1, y/z2, x/z2]])
-------------------------------
Matrix([[x, 0, 1 - y**2/z2, -x*y/z2]])
Matrix([[0, 1, y/z2, x/z2]])
-------------------------------
Matrix([[1, 0, x/z2, -y/z2]])
Matrix([[0, 1, y/z2, x/z2]])
Inverse Matrix =
(1/z2) . [ x , -y ]
 [ y , x ]
