What happens when 10,000 of the brightest minds in technology gather in Dublin? This week saw the latest installment of what has become the largest technology event in Europe, the Web Summit. Or, as it has now been rebranded, simply The Summit.
The event gathered hundreds of startups together, pitching concepts, sharing ideas and creating new networks. The atmosphere was simply outstanding, with thousands of people genuinely excited and engaged by their own passionate projects, as well as by those of others.
What struck me as one of the most interesting aspects of the event was how the entire crowd fell into place together, with practically no differentiation or ranking of people, whether they were alpha-stage startup founders or world-class serial entrepreneurs.
The explanation, I suppose, is pretty simple. The startup culture is about potential. These ten thousand people have realized that although someone may not yet have the big breakthrough in their hands, with enough hard work and enough experimentation they are very likely to land on something valuable one day.
As one of the investors I met at the event – who was looking to invest from 50 million upwards – put it, you never know what’s going to be the next Pinterest. That is why even a multi-million-dollar investor will meet the blazing-eyed idea cannon on equal ground.
Another thing I found interesting about the event was the number of great ideas people were developing. It struck me that pretty much every half-sound idea I have heard somebody talk about is already being pursued by somebody else. What this means is that no matter how great an idea you come up with, someone else will already be working on it.
That is why it is supremely important to find an idea you are truly passionate about. This was repeated again and again by the speakers at the Summit, such as Gentry Underwood of Mailbox, Drew Houston of Dropbox and Elon Musk of Tesla and SpaceX. Even the best idea in the world is worth nothing without execution. And executing ideas is some of the hardest work in the world. So you’d better love what you’re doing, or you won’t be doing it for very long.
Altogether, getting a look at the European startup scene was truly exciting, as were the amazing talks, the meetings and, of course, the delightful Irish pubs.
If you have anything to do with the startup scene, I suggest you take a look at what is going down in Dublin at next year’s Summit. And even if you don’t, it might be worthwhile to look at how the next generation is changing the world.
by Lauri Calonius
There is growing interest in the idea that cognitive processes are not confined solely to the head, to be explained simply in terms of brain processes. The type of body we possess and the natural and cultural environment that surrounds us are increasingly taken into account in explanations of cognitive phenomena such as memory and problem solving.
In my thesis It Ain’t All in the Head: Situating Cognition to the Body and the Surrounding World, I look into four different approaches to cognition that conceive of it in this bodily and/or worldly situated way. More specifically, “embodied-embedded cognition”, “enactive cognition”, “extended cognition” and “distributed cognition” are compared and contrasted with each other and with the more orthodox “brain-bound” conception.
In addition, I examine critiques of the more unorthodox positions, which ultimately end up leveling the ground between the unorthodox and orthodox positions, thus highlighting the viability of positions that credit a greater role to the body and the world in explaining cognitive phenomena.
Finally, the issue of cognitive agency (i.e. which elements of the body and the world may be said to be responsible for a given cognitive property or process) is also examined in the light of these different approaches.
The main goal of the thesis, then, is to elucidate the different positions that depart from the traditional brain-centered conception of cognition and to draw out the similarities as well as the differences between these approaches.
Moreover, even if these approaches remain distinct, without a clear unified conception of cognition, they could be said to kindle an emerging paradigm that could be applied to other interesting philosophical questions, such as the issue of cognitive agency.
The take-home message from the thesis is that even if the liberation of cognition from the confines of the head is a complex issue, being open to this kind of possibility will nevertheless bring forth new and interesting ways of understanding cognitive phenomena.
You can read the entire thesis here.
Extended Mind is not just a nice brand name for a task manager or another note-taking app. We are very serious about the fact that certain mental operations, such as workflow management and declarative memory, can be externalized. This will change the way you think.
The original argument stems from Andy Clark and David Chalmers’s awesome 1998 paper called – that’s right – “The Extended Mind”. In that paper the philosophers Clark and Chalmers asked a simple question: if an activity typically thought of as mental, such as recollection, involves an external component such as a notebook, is the external bit then part of the mental activity? Clark and Chalmers answer with a resounding yes. We agree.
We still don’t really know what exactly the mind is. But we do know a lot about how it works.
We know that the brain has a lot to do with the way the mind works. Certain areas of the brain light up with certain thoughts and actions. But the thoughts and actions are not in the brain. They just coincide with brain activity. Much like if you move your hand, the movement is not in your brain, even if the motor cortex always lights up about the same way when you move the hand.
So we can show that mental function correlates with the brain. Likewise, we can show that mental function correlates with the tools you use. If you have stuff stored on your smartphone, you simply remember things better than if you didn’t have it. And if you have a task list, you are simply more effective than if you didn’t have one.
But the mind is a funny thing. It’s not just one thing, but at least two.
There is the intuitive, non-conscious System 1 of your mind that does most of the hard work. And then there is the reflective, conscious System 2 that does the thinking part.
The former can process a whopping 11.2 million bits of information per second. No wonder things just pop into your head. The latter, in turn, can only process a meager 40 bits per second – that is, three or four things at a time. Again, no wonder it’s so damn hard sometimes to dig things up from your mind. You just know that you know – but you cannot for the life of you recall what it is exactly that you know.
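Taking the two figures above at face value (they are rough, illustrative numbers rather than precise measurements), a quick back-of-the-envelope calculation shows just how lopsided the gap is:

```python
# Rough bandwidth comparison of the two systems, using the
# illustrative figures quoted above (not precise measurements).
system1_bps = 11_200_000  # intuitive, non-conscious System 1
system2_bps = 40          # reflective, conscious System 2

ratio = system1_bps // system2_bps
print(f"System 1 handles roughly {ratio:,}x more bits per second")
```

By these numbers, the non-conscious system outpaces the conscious one by a factor of about 280,000.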
This we want to change.
There are three development goals we have set for the Extended Mind.
We want it to be simple. The 40 bits of your reflective mind are easily distracted by whatever is on the screen. So we’ll put only what you need on the screen.
We want it to be fast. It’s not much of a substitute for mental function if it takes ages to boot. Access times to whatever is on your Extended Mind should be less than ten seconds – comparable to what it normally takes to refresh your biological memory. (We actually tested this with Trivial Pursuit.)
And we want it to be focused. Most software has so many features that the ones you use most often get lost among them. We’ll include only what you most typically need. This means the Extended Mind will have only about 5% of the features of other available apps. But those 5% we’ll do really well.
Your biological mind is great at being creative, at understanding emotions, at creating connections and at feeling deeply. But it sucks at storing and organizing things.
Digital tools, in turn, are not that awesome with feelings. But they are great at focusing your attention, at organizing whopping amounts of data and at storing stuff so that you can return to it ten years from now.
That is, unless the software you use gets in the way of what you want to get done.
So this is our vision:
Let’s leave the storing to the digital mind so that the biological mind can focus on the feeling and the creating. And let’s have the two work seamlessly together.
Only the essential, with no bells and whistles.
Just your mind, extended.
Sign up for the beta here.
Mr. Estrada argues that my basic position requires a strong differentiation between the technological and the cultural. This is, however, not what I intended to convey. My paper is rather an argument comparing the plausibility of Vernor Vinge’s AI (artificial intelligence) and IA (intelligence amplification) hypotheses. In other words, I do agree with much of what Mr. Estrada writes. We are, in many senses, “tools all the way down”. As for the nature of the mind, it is in a very profound sense extended to begin with. If an AI were forthcoming, it would in many senses contribute as an extended resource to the human mind.
The claim in my paper is intended not so much as a comparison of the intrinsic nature of a biological mind with that of a simulated mind (which, as I think Mr. Estrada rightly points out, cannot justly be separated), but rather as an assessment of whether an IA or an AI explosion will take place sooner – in other words, of where the focal point of the intelligence explosion will be: in the network itself (IA), or in identifiable components of it (AI).
The problem with the plausibility of the AI hypothesis is not that it would be impossible or somehow IA-incompatible. It is rather that we are not very likely to reach it before an IA explosion takes place. In addition to the complexity of the nervous system that can be postulated on the grounds of the Stanford experiment, the integration of nervous and extended processes is of a far higher order than in a simple sensory coupling or a feedback loop. The nervous system is dynamic to a far greater degree than any known computational system, as is demonstrated by the massive literature on neuroplasticity. The nervous system does not compute – synaptic connections grow and shrink. The brain is not a machine. It is a garden.
In light of what we now know about brain function, in the nervous system the hardware and the software are intrinsically intertwined. In other words, the brain is not a static processing and memory system where information is stored, but rather a dynamic feedback mechanism that *produces* information by creating complex enough connections. Once you add to this the ability to augment these connections by using the environment, there is a very profound sense in which human intelligence differs dramatically from what has been postulated as machine intelligence. Using Searle as an example was simply meant to show that there are some dramatic difficulties in attributing intelligence to a machine (whereas attributing intelligence to a human-machine coupling is by no means problematic).
Incidentally, this is not to say there could not be an intelligent machine. I do not subscribe to the fundamental Searlean assumption that this would be philosophically impossible. Quite the opposite: if a machine is constructed that for all intents and purposes acts like an intelligent agent, it should be treated as an intelligent agent, even if this behavior came about through complex enough computation. But this is not at all the point I am trying to make in the paper.
What I am arguing is that while an AI explosion may well be on the way, it is not very likely to happen soon. But once real-time networking of human beings is achieved (which should happen within a few years now), the IA explosion will take place. I have no doubt that this will contribute to the AI explosion as well, whatever that will mean by then, which will in turn augment the capacity of the IA system, and so on and so forth.
To sum up, none of this is to say either that human intelligence and machine intelligence should be intrinsically separated, or even that the simulation of intelligence is impossible. It is just to say that an intelligence explosion involving the real-time networking of existing human nervous systems is somewhat likelier to happen sooner than a significant enough advance in computing technology.