Friday Fun (Thanksgiving 24-Nov-2017 Edition)

  • Can an AI be taught to explain itself? Cliff Kuang, New York Times Magazine
    This is a good account of some of the problems we face with machine learning today. There is a clear disconnect between the results you get from good applications of ML and an understanding of why they work the way they do. I am not convinced, however, that adding a second network on the side to explain the first will really solve the problem: it just raises the question of how we will understand what that second network is doing.
  • Come On Eileen, Dexys Midnight Runners. It’s worth finding different versions of this song and listening, because there are some fun intros and outros you don’t hear on the usual radio mix! See the Wikipedia page for a nice discussion.

Friday Fun (17-Nov-2017 edition)

Friday Fun (10-Nov-2017 edition)

  • Sixty Years of Software Development Life Cycle Models, Ralf Kneuper, IEEE Annals of the History of Computing. The Hegelian character of software development life cycles is apparent to anyone who has been around for more than a decade, or who has worked in different sectors of the industry. What Kneuper brings to the discussion here is not just a simple account of the thesis, antithesis, and synthesis of life cycle models, but interesting facts about their early development: prototypes played a role in life cycle planning much earlier than I think many are aware, as did iterative approaches with feedback loops in general.
  • The Worst Day Since Yesterday, Flogging Molly. It’s been that kind of a week around here. I highly recommend you go out, get a Guinness, and crank up Flogging Molly as loud as your speakers will allow. You can’t go wrong with that on a Friday evening.

Friday Fun (03-Nov-2017 edition)

  • Idea of Order at Kyson Point, Brian Eno. Brian Eno needs no introduction; this is a nice short piece he put out this year.
  • Deep Reinforcement Learning: Pong from Pixels. As promised, here’s a bit of a flashback on reinforcement learning: a neat older result on using reinforcement learning to train a network to play Atari video games. It’s important to recognize in this work, just as with the AlphaGo Zero work, that the resulting network does not understand what it’s doing. It can’t explain the rules and has no abstractions. It’s just very, very, very good at pattern recognition.
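
    To give a feel for what that pattern recognition amounts to, here is a toy sketch of the policy-gradient (REINFORCE) update that the Pong agent is built around, stripped of the pixels and the network. The two-action bandit, the reward values, and all the names here are my own illustration, not anything from the post; the point is just that the same "nudge the logits toward actions that paid off" rule is the whole learning mechanism.

    ```python
    import numpy as np

    # Toy REINFORCE sketch: learn to prefer the better of two actions.
    # Action 0 pays a reward of 1.0; action 1 pays 0.0. The update rule
    # is the basic policy-gradient step: scale (one-hot - probs) by the
    # reward and add it to the logits. No model of the game, no rules,
    # no abstractions -- just reinforced pattern matching.

    rng = np.random.default_rng(0)
    logits = np.zeros(2)      # one logit per action
    learning_rate = 0.1

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    for episode in range(500):
        probs = softmax(logits)
        action = rng.choice(2, p=probs)       # sample from the policy
        reward = 1.0 if action == 0 else 0.0  # environment's feedback

        grad = -probs                          # gradient of log pi(action)
        grad[action] += 1.0                    # ... is (one-hot - probs)
        logits += learning_rate * reward * grad

    print(softmax(logits)[0])  # probability of the rewarded action, near 1
    ```

    After a few hundred episodes the policy all but always picks the rewarded action, yet nothing in it could tell you *why* action 0 is better. Scale the same idea up to a convolutional network over raw frames and you get the Pong player.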