Today I Learned

Some of the things I've learned every day since Oct 10, 2016


208: The Einstellung Effect

The Einstellung Effect is a cognitive phenomenon wherein an agent attempting to solve a problem fails to think outside the box, where the ‘box’ results from its previous experience solving problems.

More specifically, the agent working on a problem is predisposed to attempt solutions similar to those it has had success with in the past, even though these may be sub-optimal for the problem at hand, or not viable at all, rather than go in with an “open mind” and experiment with other approaches. Previous experience is relied on too heavily, or mistaken for being more useful than it is, so the agent examines an area of the possible solution space that is narrow to a fault.

Perhaps unsurprisingly, studies suggest the effect increases with age in humans, which fits the stereotype of younger minds being more flexible and open to possibilities while older minds have a stronger tendency to “stay on the tracks”.

The Einstellung Effect is one of the obstacles to good problem-solving that can often be avoided simply by periodically reminding the problem solver to take a step back and re-examine things.


202: The XY Problem

The ambiguously-named XY Problem is a meta-problem at the intersection of communication and problem solving, common in places like technical support and Stack Exchange. Essentially, it’s what can happen when you ask how to implement a chosen solution to a problem rather than ask how to solve the problem itself.

Suppose person A is trying to solve a problem X, and is attempting to solve it via another problem Y. (An equivalent view is that A is trying to do X by doing Y.) To this end, they ask person B for help with solving Y, but do not give B the context of why they want to solve Y, i.e. what they intend to use Y for.

Now suppose solving Y is an inefficient or otherwise bad way of going about solving X, or maybe not a valid way at all. B has no idea of this and will nevertheless waste time and possibly other resources helping A solve Y, which may or may not do any good in the end.

[If this is too abstract, there are a lot of good examples in this post.]

Clearly the better meta-solution here is for A to give the context of why they want to solve Y, thereby allowing B to infer that the real problem is X and instead helping A solve that.

Moral of the story: context is important when asking for help. If you’re the asker, try to provide it; if you’re the one being asked, ask questions to make sure you’re really dealing with the root problem.

76: Probabilistic Classifiers

As used in machine learning, the term probabilistic classifier refers to a function f: X \rightarrow \pi_Y, where X is the set of objects to be classified, Y is the set of classes, and \pi_Y is the set of probability distributions over Y. That is, a probabilistic classifier takes an object to be classified and gives the probability of that object belonging to each of the possible classes.

This contrasts with a non-probabilistic classifier, which is instead simply a function g: X \rightarrow Y that assigns a single class to each object. Often such a classifier is obtained from a probabilistic one by simply choosing the class with the highest probability.
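
To make this concrete, here is a tiny hand-rolled sketch in Python; the classes, the scoring rule, and the probabilities are made up purely for illustration and aren't taken from any particular library.

```python
# Toy probabilistic classifier f: X -> pi_Y, where X is the set of
# messages (strings) and Y = {"spam", "ham"}.
# The scoring rule below is invented purely for illustration.

def classify_proba(message: str) -> dict:
    """Return a probability distribution over the classes."""
    spam_words = {"winner", "prize", "free"}
    hits = sum(word in spam_words for word in message.lower().split())
    p_spam = min(0.9, 0.1 + 0.3 * hits)  # crude, hand-tuned score
    return {"spam": p_spam, "ham": 1.0 - p_spam}

def classify(message: str) -> str:
    """Non-probabilistic classifier g: X -> Y: pick the most probable class."""
    proba = classify_proba(message)
    return max(proba, key=proba.get)

print(classify_proba("claim your free prize"))  # spam gets the higher probability
print(classify("lunch at noon?"))               # 'ham'
```

Real libraries expose the same split: a scikit-learn classifier, for example, offers predict_proba for the full distribution and predict for the single most probable class.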