Hello, Science! [Search results for "Cognitive Science"]

  • Gold's Theorem

    After seeing this amazing talk by Josh Tenenbaum on videolectures.net, I started reading up on some very cool stuff at the intersection of machine learning and cognitive science. This brought me to read about Gold's theorem and the poverty of the stimulus. Very roughly, Gold's theorem says that no learner (be it a child or a computer) can "learn" a language from positive examples alone, that is, by only hearing sentences that belong to the language to be learned. Some people use this theorem to make the following argument: a toddler only hears sentences from the language she is learning; she never gets to hear "wrong" sentences (ones that are not in the language). Hence, since by Gold's theorem the toddler cannot learn the language from that input alone, language must be innate: language abilities must be wired into our brains in some way. Gold's Theorem and Cognitive Science, by Kent Johnson, is a very enjoyable read for more background on Gold's theorem and how it applies to the question of language acquisition.
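
    For the formally inclined, here is a rough sketch of the identification-in-the-limit setting behind the theorem. The notation is my own paraphrase, not Johnson's or Gold's exact statement.

      % Identification in the limit from positive data (informal paraphrase).
      % A "text" for a language $L$ is an infinite sequence $s_1, s_2, \dots$
      % that enumerates exactly the sentences of $L$; a learner $\varphi$ maps
      % each finite prefix of a text to a guessed grammar.
      \[
        \varphi \text{ identifies } L \text{ in the limit}
        \;\iff\;
        \forall \text{ texts } s_1, s_2, \dots \text{ for } L,\;
        \exists N \;\forall n \ge N:\;
        \mathcal{L}\bigl(\varphi(s_1,\dots,s_n)\bigr) = L .
      \]
      % Gold's result: no learner can identify, in this sense, every language in
      % a class that contains all finite languages plus at least one infinite one
      % (e.g., the regular languages) when it only ever sees positive examples.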

    Johnson's paper mentions something that I had never thought about: according to Morgan, a child acquires language after hearing about 4 million sentences. Now think about how many sentences we have access to for training our NLP algorithms. That is orders of magnitude more than a person ever gets to hear, and yet I would say we are far from building a computer system that can manipulate language as accurately as humans do. From a Bayesian perspective, this could translate into the assumption that children start learning language from a really good prior. If the Bayesian way is the right way to look at this question, I really wonder how humans acquire this prior: how much of it is wired into our brains, and how much is shaped by our sensory system?
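
    As a toy illustration of that data-efficiency point (and nothing more: the beta-binomial setup and all numbers below are made up for illustration, not a model of language acquisition), here is how much faster a posterior concentrates when the prior already sits near the truth:

      # Toy beta-binomial sketch: a learner with a strong, well-placed prior needs
      # far less data before its posterior concentrates than one starting from a
      # flat prior. All names and numbers here are hypothetical.
      from scipy.stats import beta

      n_obs = 20        # a small number of observations
      successes = 18    # observed "rule applies" outcomes

      priors = {
          "uninformative Beta(1, 1)": (1, 1),   # flat prior
          "informative Beta(45, 5)":  (45, 5),  # prior already centred near 0.9
      }

      for name, (a, b) in priors.items():
          posterior = beta(a + successes, b + n_obs - successes)
          lo, hi = posterior.interval(0.95)     # 95% credible interval
          print(f"{name}: posterior 95% interval = ({lo:.2f}, {hi:.2f})")

    With the same 20 observations, the informative prior gives a noticeably tighter interval, which is the (very loose) sense in which a good prior could stand in for the data a child never gets to see.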

  • NIPS 2008 Accepted Papers Out!

    Hot off the presses, the full list of NIPS 2008 accepted papers is out. I quickly browsed through the papers and can say I am looking forward to these:

    Cognitive Science

    • Analyzing human feature learning as nonparametric Bayesian inference, J. Austerweil, T. Griffiths
    • Depression: an RL formulation and a behavioural test, Q. Huys, J. Vogelstein, P. Dayan
    • Modeling human function learning with Gaussian processes, T. Griffiths, C. Lucas, J. Williams, M. Kalish
    • Modeling the effects of memory on human online sentence processing with particle filters, R. Levy, F. Reali, T. Griffiths

    Neuroscience

    • Characterizing neural dependencies with Poisson copula models, P. Berkes, F. Wood, J. Pillow
    • Dependent Dirichlet Process Spike Sorting, J. Gasthaus, F. Wood, D. Gorur, Y. Teh

    Machine Learning

    • Gates, T. Minka, J. Winn: the next big thing in graphical models?
    • Non-stationary dynamic Bayesian networks, J. Robinson, A. Hartemink
    • Nonparametric Bayesian Learning of Switching Linear Dynamical Systems, E. Fox, E. Sudderth, M. Jordan, A. Willsky: some of this work we already saw at the NPBayes 2008 workshop
    • The Mondrian Process, D. Roy, Y. Teh: some of this work we already saw at the NPBayes 2008 workshop
