# Today I Learned

Some of the things I've learned every day since Oct 10, 2016

Continuing with learning Haskell, here are some functions you can operate on lists with:

• head returns the first element of a list
• tail returns everything except the first element
• last returns the last element
• init returns everything except the last element

[Note: all 4 of the above functions throw errors when handed empty lists.]

• null xs returns True iff xs is empty (i.e. length xs == 0)
• take n xs returns the sublist of the first n elements of xs. (If length xs < n the function call is still valid and just returns xs itself.)
• drop n xs returns the sublist of xs without the first n elements
• elem x xs or equivalently x `elem` xs returns True iff x is an element of xs
• ranges have convenient notation in Haskell: [1..5] is equivalent to the list [1, 2, 3, 4, 5]. Note that 2 dots are used, not 3, and also that the endpoints of the range are inclusive.
• You can also use step sizes other than the default 1 in ranges. For instance, [1, 4 .. 20] is equivalent to the list [1, 4, 7, 10, 13, 16, 19]. This comes in handy when you want to specify a range in descending order, e.g. [5, 4..1] gives [5, 4, 3, 2, 1].
• Since Haskell evaluates things like ranges lazily, you can also specify finite subsets of infinite ranges, e.g. take 3 [1, 2..] gives [1, 2, 3].
• One possible use of the above is on the infinite list cycle xs, which just concatenates xs onto itself indefinitely.
• List comprehensions have a very set-builder-notation-inspired syntax in Haskell. For instance, the comprehension [2*x | x <- [1..10], 2*x >= 12] is equivalent to the Python list comprehension [2*x for x in range(1, 11) if 2*x >= 12] and gives the list [12, 14, 16, 18, 20].
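All of the above can be tried directly in GHCi; here’s a quick illustrative session (not from the original notes):

```haskell
ghci> head [1, 2, 3]
1
ghci> init [1, 2, 3]
[1,2]
ghci> take 3 (drop 2 [1 .. 10])
[3,4,5]
ghci> 4 `elem` [1 .. 5]
True
ghci> take 5 (cycle [1, 2])
[1,2,1,2,1]
ghci> [2 * x | x <- [1 .. 10], 2 * x >= 12]
[12,14,16,18,20]
```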

Continuing to familiarize myself with the basics of Haskell syntax, so here are some of the oddities thereof I came across today:

• if statements are expressions, and thus must evaluate to a value, and thus require an else clause. Contrast this with imperative languages like Python and Java, where if is a statement rather than an expression and the else clause is optional.
• Lists are homogeneous, which isn’t that weird but is worth noting. For instance, you can’t have ints and strings in the same list. Also [1, 2, 3.0] will evaluate to the list [1.0, 2.0, 3.0]: since the list has to be homogeneous, the ints must be converted to equivalent floats.
• There’s a really nice consistency in the syntax between variable definition (x = 0) and function definition (f x = 2*x). Both x and f are just expressions dependent on 0 or more arguments after all, so Haskell syntax reflects this view that values are just 0-ary functions.
• ++ is used for list concatenation, e.g. [1, 2] ++ [3, 4] evaluates to [1, 2, 3, 4]. You can use sloppy infix notation without parentheses, so that "f" ++ "o" ++ "o" is a valid expression resulting in the string "foo". Concatenating lists s and t with s ++ t takes $O(m)$ time, where $m$ is the length of s. (Haskell walks through the entire first argument to ++.)
• : is the symbol for the cons operator. For instance, 1 : [2, 3] evaluates to the list [1, 2, 3] (and in fact [1, 2, 3] is just syntactic sugar for 1 : 2 : 3 : []).
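The same points in a GHCi session (a quick sketch, not from the original notes):

```haskell
ghci> if null [] then "empty" else "nonempty"  -- else is mandatory
"empty"
ghci> [1, 2, 3.0]                              -- homogeneous: ints become floats
[1.0,2.0,3.0]
ghci> let f x = 2 * x                          -- same shape as let x = 0
ghci> f 3
6
ghci> "f" ++ "o" ++ "o"
"foo"
ghci> 1 : 2 : 3 : []
[1,2,3]
```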

## 204: Some Introductory Haskell Syntax

Picking up Haskell again. I taught myself a little bit of its syntax years ago, but I’d forgotten all of it so I’m starting from scratch. Here’s some of the basics of Haskell syntax I learned today:

• /= is used for ‘not equal’.
• && is used for logical AND, || for logical OR, but not is used for logical negation.
• single quotes denote characters, while double quotes denote strings. For instance, 'h' == "h" results in an error because you can’t compare the types Char and [Char].
• You need to surround a negative number by parentheses (e.g. (-5)), otherwise the compiler thinks you’re trying to apply subtraction.
• 5 + 4.0 is a valid expression because 5 can be treated as a float. However, the converse isn’t true: floats can’t act as ints in the same way.
• Function application is typically in prefix notation and doesn’t require (or allow) parentheses surrounding arguments, as is common in other languages. For instance, min 4 5 is a valid expression and results in the value 4.
• Function evaluations like that of min also have the highest priority in arithmetic/logical expressions. For instance, succ 3 * 10 gives 40 while succ (3 * 10) gives 31.
• For convenience and readability, functions with exactly 2 arguments can be called in infix notation if the function name is sandwiched by backticks. For instance, div 9 4 and 9 `div` 4 both return 2.
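A few of these in action in GHCi (illustrative only):

```haskell
ghci> 5 /= 4
True
ghci> not (True && False)
True
ghci> min 4 5
4
ghci> succ 3 * 10      -- function application binds tightest
40
ghci> succ (3 * 10)
31
ghci> 9 `div` 4        -- backticks turn div into an infix operator
2
```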

## 203: The Unit Type

In type theory, a unit type is a type which can take on only one value. Since its underlying set is a (the) singleton set, a value of this type carries no information within the scope of its own type. (Contrast with the boolean type, which has 2 values, and where the value T does carry information precisely because it is not F.)

This doesn’t mean the type itself can’t carry information in a larger scope, however. For instance, in programming languages it’s common to have a unit type and use its single value to represent the lack of a value. Python has ‘None’, Java has ‘void’/‘null’, and Haskell has ‘()’, to give a few examples. This value is useless within the scope of its own type, but useful when other types are in scope.
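As a concrete illustration in Python (the find function below is a made-up example):

```python
# NoneType is Python's unit-like type: it has exactly one value, None.
x = None
y = None

# All 'values' of NoneType are the same singleton object.
print(type(x).__name__)  # NoneType
print(x is y)            # True

# Within its own type None carries no information, but against
# other types it usefully signals 'no value':
def find(needle, haystack):
    """Return the index of needle in haystack, or None if absent."""
    for i, item in enumerate(haystack):
        if item == needle:
            return i
    return None

print(find(3, [1, 2, 3]))  # 2
print(find(9, [1, 2, 3]))  # None
```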

## 202: The XY Problem

The ambiguously-named XY Problem is a meta-problem at the intersection of communication and problem solving, common in places like technical support and Stack Exchange. Essentially, it’s what can happen when you ask how to implement a chosen solution to a problem rather than ask how to solve the problem itself.

Suppose a person A is trying to solve a problem X, and is attempting to solve it via another problem Y. (An equivalent view is that A is trying to do X by doing Y.) To this end, they ask person B for help with solving problem Y, but do not give B the context of why they want to solve Y, i.e. what they intend to use Y for.

Now suppose solving Y is an inefficient or otherwise bad way of going about solving X, or maybe not a valid way at all. B has no idea of this and will nevertheless waste time and possibly other resources helping A solve Y, which may or may not do any good in the end.

[If this is too abstract, there are a lot of good examples in this post.]

Clearly the better meta-solution here is for A to give the context of why they want to solve Y, thereby allowing B to infer that the real problem is X and instead helping A solve that.

Moral of the story: context is important when asking for help. If you’re the asker you should try and provide it, and if you’re the asked you should ask questions to make sure you’re really dealing with the root problem.

## 201: git add -A vs. git add .

Both of the git commands “git add -A” and “git add .” stage all modified files, including new and deleted ones. The difference between them is in their directory scope. “git add -A” includes files higher than the working directory if they belong to the same git repository, while “git add .” will ignore these and only include those in the working directory.
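A quick way to see the difference, sketched in a throwaway repository (the file names here are hypothetical):

```shell
# Build a tiny repo with one file at the top level and one in a subdirectory.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com && git config user.name demo
echo one > top.txt
mkdir sub && echo two > sub/inner.txt
git add -A && git commit -qm 'initial'

# Modify both files, then stage from inside sub/:
echo one-changed > top.txt
echo two-changed > sub/inner.txt
cd sub

git add .                       # stages only sub/inner.txt
git diff --cached --name-only   # prints: sub/inner.txt

git add -A                      # stages everything in the repo
git diff --cached --name-only   # prints: sub/inner.txt and top.txt
```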

## 200: Constant-Q Transform

The constant-Q transform (CQT), similar to the Fourier transform, is one which transforms a signal into its component frequencies.

I don’t have nearly enough of a background in signal processing to explain its mechanisms in depth, but one important aspect of the CQT which differs from the Fourier transform is in how well-suited it is to audio signals representing musical information.

The CQT has an advantage over the Fourier transform and many others because it mirrors the exponential/logarithmic nature of the human auditory system. The ear hears pairs of tones with the same frequency ratio as being the same musical distance apart, regardless of absolute frequency. For instance, a person will perceive the ‘distance’ between the notes A4 (440 Hz) and A5 (880 Hz) to be the same as that between A5 and A6 (1760 Hz), even though the latter interval spans twice as many Hz as the former. Thus perceived pitch has a roughly logarithmic structure with respect to frequency.

The CQT, as mentioned, mirrors this logarithmic structure, thus doing a better job of processing auditory information “as the ear does”. For this reason it’s a good choice for a variety of music analysis applications, from identifying instruments in a recording via separation by timbre to identifying keys.
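In symbols (a standard formulation of the idea, not from the original notes): the CQT spaces its bin centre frequencies geometrically,

$$f_k = f_0 \cdot 2^{k/b},$$

where $b$ is the number of bins per octave, and chooses each bin’s bandwidth $\delta f_k$ so that the quality factor

$$Q = \frac{f_k}{\delta f_k}$$

is the same for every bin, hence ‘constant Q’. Since $\delta f_k$ then grows in proportion to $f_k$, frequency resolution is distributed logarithmically, matching the pitch perception described above. The Fourier transform, by contrast, spaces its bins linearly.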

## 199: Chromagrams

Chromagrams are similar to spectrograms in that they are representations of variations in frequency of an audio signal over time. Chromagrams differ, however, in that they concern themselves not with the absolute frequency of a signal but rather with its chroma: its perceived pitch.

Chroma/pitch is a circular percept. That is, if you start a tone at middle C (~262 hz) and gradually increase it in frequency, when the frequency is doubled to ~524 hz you will arrive at a note that sounds higher than middle C but somehow “the same”, so this note is also called a ‘C’. It’s this perceived quality of sameness in frequencies differing by a factor of 2 that creates the phenomenon of the octave, and makes possible the vast majority of conventional ‘Western’ music structure. [See this Wikipedia page for a better explanation.]

Because of this circularity in pitch, we can abstract away and put all possible frequencies into a finite number of ‘bins’ (pitch classes), where frequencies in the same bin are those that sound ‘the same’. The way this is conventionally done is to classify all frequencies into the 12 notes A, A#, B, C, …, G, G#.
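This binning can be sketched in a few lines of Python (a rough illustration, using A4 = 440 Hz as the reference and rounding to the nearest semitone):

```python
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def pitch_class(freq_hz, ref_a4=440.0):
    """Map a frequency to one of the 12 pitch-class bins.

    Frequencies an octave apart (a factor of 2) land in the same bin,
    since doubling the frequency adds exactly 12 semitones.
    """
    semitones_from_a4 = round(12 * math.log2(freq_hz / ref_a4))
    return NOTE_NAMES[semitones_from_a4 % 12]

print(pitch_class(440.0))  # A  (A4)
print(pitch_class(880.0))  # A  (A5: same bin, one octave up)
print(pitch_class(262.0))  # C  (roughly middle C)
```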

What a chromagram does is just keep track of which of these bins the sound in an audio signal belongs to as time progresses. This has many uses, including distributional key-finding (which is what I’m currently using them for.)

As an example, below is the chromagram for ‘Rich Girl’ by Hall & Oates, generated using librosa.

The song is in the key of F major, and if you kind of squint you can see that the majority of the signal strength over time belongs to the notes F, G, A, Bb, C, and D, which aligns with this key.

Another, clearer example is the chromagram below for a less noisy piece in D minor:

In this one you can clearly see the majority of notes belonging to the categories D, F, A, and C, which again aligns with the piece’s key.

## 198: Abstract Classes in Python

While Python doesn’t have a default, out-of-the-box option for abstract classes like other languages such as Java, there are several ways to achieve the equivalent: a class which can’t itself be instantiated but has associated methods which subclasses must implement.

In Python 3.4+, you can use the $\texttt{abc}$ (Abstract Base Class) module:

```python
from abc import ABC, abstractmethod

class Abstract(ABC):
    @abstractmethod
    def foo(self):
        pass
```

Inheriting from the $\texttt{ABC}$ class (together with having at least one abstract method) guarantees the class can’t itself be instantiated, and the $\texttt{@abstractmethod}$ decorator requires subclasses to implement the method.
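To see both guarantees in action (a quick check, not part of the original snippet):

```python
from abc import ABC, abstractmethod

class Abstract(ABC):
    @abstractmethod
    def foo(self):
        pass

class Concrete(Abstract):
    def foo(self):
        return "foo"

# Instantiating the abstract class raises a TypeError...
try:
    Abstract()
except TypeError as e:
    print("refused:", e)

# ...while a subclass that implements foo() works fine.
print(Concrete().foo())  # foo
```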

In Python 2 and 3.0+, the $\texttt{abc}$ (Abstract Base Class) module can still be used, but its use in each is a little more complicated:

```python
# Python 3.0+
from abc import ABCMeta, abstractmethod

class Abstract(metaclass=ABCMeta):
    @abstractmethod
    def foo(self):
        pass
```

```python
# Python 2
from abc import ABCMeta, abstractmethod

class Abstract:
    __metaclass__ = ABCMeta

    @abstractmethod
    def foo(self):
        pass
```

An alternative to using $\texttt{abc}$ which is less ‘proper’ in a sense but also less complicated to set up is to do

```python
class Abstract(object):
    def __init__(self):
        raise Exception("Abstract is an abstract class and can't be instantiated")

    def foo(self):
        raise NotImplementedError('subclasses must override foo()!')

class Derived(Abstract):
    def foo(self):
        print('Foo')
```

so that an Exception is raised when you try to directly instantiate the superclass, and if the subclass doesn’t implement the abstract method then the superclass’s implementation will be called, raising a $\texttt{NotImplementedError}$.

[Taken from stackoverflow.]

## 197: Instance, Class, Static Methods in Python

In Python, a method in a class can be either an instance method, class method, or static method.

Instance methods are the most common type of method and require no decorator to mark them as such. (The typical function with no decorator within a class is an instance method.) They require an associated instance of the class, which is passed automatically as the first argument, conventionally named $\texttt{self}$, when the method is called on an instance. Because they ‘belong’ to an instance, they also know their associated class.

Class methods, by contrast, do not have an associated instance but do know their associated class. They are designated with the $\texttt{@classmethod}$ decorator and take their class as the first argument, conventionally named $\texttt{cls}$, which is automatically passed when they are called from the class with dot notation (e.g. $\texttt{MyClass.my\_method()}$).

Static methods not only don’t belong to an instance, but don’t know anything about their associated class. When called from their class (e.g. $\texttt{MyClass.my\_method()}$), no arguments are automatically passed regarding the class. In this sense they’re pretty much just functions that are attached to the class for convenience. Static methods are designated with the $\texttt{@staticmethod}$ decorator.
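A small made-up class showing all three kinds side by side:

```python
class Counter:
    # Instance method: receives the instance as self.
    def __init__(self, start):
        self.value = start

    def increment(self):
        self.value += 1
        return self.value

    # Class method: receives the class as cls, no instance needed.
    # A common use is as an alternative constructor.
    @classmethod
    def from_zero(cls):
        return cls(0)

    # Static method: receives neither instance nor class; it's just
    # a plain function attached to the class for convenience.
    @staticmethod
    def is_valid(n):
        return n >= 0

c = Counter.from_zero()      # class method as an alternative constructor
print(c.increment())         # 1
print(c.increment())         # 2
print(Counter.is_valid(-3))  # False
```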