— Home of XPL (eXtensible Process Language)
It's like XML, but you can store source code in it.
Google is now telling us that they are behind a project developing cars that drive themselves in traffic.
It's been a "secret" project for a while, according to the report, but they're working with people from a region that has been heavily funded in robotics and unmanned vehicles and has taken home trophies in competitions like the DARPA Grand Challenge.
I'm not familiar with Google's autonomous vehicle technology, not being in their inner circle of secret friends, so I can't tell you anything about how good it really is. And since they're not using HLL (yet, or as far as I know), I'm not going to endorse their effort. :)
I just thought I'd comment on the first response I got after posting their article link to my Facebook page. "No thanks. I prefer to be the one making decisions behind my wheel ;-)"
I'm not picking on the guy who said it; his comment undoubtedly represents the feelings of a lot of people. What follows is just a prediction, the kind of thing I've been quite good at, and probably even more so in my old age.
But this prediction is at least based on straightforward logic. It's pretty easy, really. Autonomous vehicles already outperform humans in a number of ways, and we can expect even more improvement in the future. Most auto companies that have invested in autonomous technology have focused primarily on safety: step one is using sensors to detect and understand the environment and surrounding traffic, and to warn the driver about or avoid dangerous situations.
The more difficult challenge is to accurately predict when autonomous vehicles will be accepted under state law. That acceptance will open the door to mass production and sales, which will in turn draw even more investment into research and development and quicken the pace at which further improvement is realized. I'm optimistic, not just because of what I've said in this paragraph, but even more because of R&D I've been involved with. We can increase the pace of improvement dramatically, and accomplish things that humans cannot. (Yes, the age of AI is upon us, and yes, we're still human. It's the AI you see ...)
"Eventually, it will be illegal for humans to drive," I commented on Facebook.
"By that time it will be illegal for us to think and our humanity will have already been robbed," responded my Facebook friend.
I understand the sentiment (which is why I do indeed take the conversation seriously), but I think the two issues are separate. It will eventually be illegal for humans to drive on public roads and highways because a much safer alternative will be available. Humans driving cars will be considered, relatively speaking, too dangerous. Why should the rest of humanity run the risk of being slaughtered in accidents that are extremely rare when a machine does the driving?
I believe that you'll still be allowed to do most of the thinking, at least for a while. An autonomous vehicle doesn't care whether you go to Aunt Suzie's or the race track on Saturday. But one day, even the arguments about whether it's faster to take the Lincoln or Holland Tunnel will be a thing of the past. Honestly, do you think you'll miss it?