Notes on artificial intelligence, December 2017
Author: Curt Monash.
December 12, 2017
Most of my comments about artificial intelligence from December 2015 still hold true. But there are a few points I’d like to add, reiterate, or amplify.
1. As I wrote back then in a post about the connection between machine learning and the rest of AI,
It is my opinion that most things called “intelligence” — natural and artificial alike — have a great deal to do with pattern recognition and response.
2. Accordingly, it can be reasonable to equate machine learning and AI.
- AI based on machine learning frequently works, on more than a toy level. (Examples: Various projects by Google)
- AI based on knowledge representation usually doesn’t. (Examples: IBM Watson, 1980s expert systems)
- “AI” can be the sexier marketing or fund-raising term.
3. Similarly, it can be reasonable to equate AI and pattern recognition. Glitzy applications of AI include:
- Understanding or translation of language (written or spoken as the case may be).
- Machine vision, e.g. in autonomous vehicles.
- Facial recognition.
- Disease diagnosis via radiology interpretation.
4. The importance of AI and of recent AI advances differs greatly according to application or data category.
- Machine learning and AI have little relevance to most traditional transactional apps.
- Predictive modeling is a huge deal in customer-relationship apps. The most advanced organizations developing and using such apps rely on machine learning; a minimal illustration follows this list. I don’t see an important distinction between machine learning and “artificial intelligence” in this area.
- Voice interaction is already revolutionary in certain niches (e.g. Siri et al. on smartphones). The same will likely hold for other natural-language or virtual/augmented-reality interfaces if and when they go more mainstream. AI seems likely to make a huge impact on user interfaces.
- AI also seems likely to have a huge impact on the understanding and reduction of machine-generated data.
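To make the predictive-modeling point concrete, here is a minimal sketch in the customer-relationship vein, assuming scikit-learn is installed. Everything in it is illustrative rather than real: the synthetic dataset stands in for actual customer history, and logistic regression stands in for whatever model a CRM shop would actually deploy.

```python
# A minimal predictive-modeling sketch in the customer-relationship vein.
# The data is synthetic and the "customer" framing is imaginary; a real
# CRM model would be trained on actual customer history.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for customer features (tenure, spend, support tickets, ...)
# and a churned/stayed label.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 3))
print("churn probability, first holdout customer:",
      round(model.predict_proba(X_test[:1])[0, 1], 3))
```

Swap in gradient-boosted trees or a neural network and the surrounding workflow stays exactly the same, which is part of why the machine-learning/AI boundary is so blurry here.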
5. Right now it seems as if large companies are the runaway leaders in AI commercialization. There are several reasons to think that could last.
- They have deep pockets. Yes, but the same is true in any other area of technology. Small companies commonly out-innovate large ones even so.
- They have access to lots of data for model training. I find this argument persuasive in some specific areas, most notably any kind of language recognition that can be informed by search-engine usage.
- AI technology is sometimes part of a much larger whole. That argument is not obviously persuasive. After all, software can often be developed by one company and included as a module in somebody else’s systems. Machine vision has worked that way for decades.
I’m sure there are many niches in which decision-making, decision implementation and feedback are so tightly integrated that they all need to be developed by the same organization. But every example that remotely comes to mind is indeed the kind of niche that smaller companies are commonly able to address.
6. China and Russia are both vowing to lead the world in artificial intelligence. From a privacy/surveillance standpoint, this is worrisome. China also has a reasonable path to doing so (Russia not so much), in line with the “Lots of data makes models strong” line of argument.
The fiasco of Japan’s 1980s “Fifth-Generation Computing” initiative is only partly reassuring.
7. It seems that “deep learning” and GPUs fit well together for AI/machine-learning uses. I see no natural barriers to that trend, assuming it holds up on its own merits.
- Since silicon clock speeds stopped increasing, chip power improvements have mainly taken the form of increased on-chip parallelism.
- The general move to the cloud is also not a barrier. I have little doubt major cloud providers could do a good job of providing GPU-based capacity, given that:
- They build their own computer systems.
- They showed similar flexibility when they adopted flash storage.
- Several of them are AI research leaders themselves.
Maybe CPU vendors will co-opt GPU functionality. Maybe not. I haven’t looked into that issue. But either way, it should be OK to adopt software that calls for GPU-style parallel computation.
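As one illustration of what adopting “software that calls for GPU-style parallel computation” can look like, here is a minimal sketch, assuming PyTorch is installed; it uses a CUDA-capable GPU if one is present and falls back to the CPU otherwise. A single large matrix multiply fans out across thousands of GPU cores, and matrix multiplies are the bulk of what deep learning computes.

```python
# A quick check of GPU-style parallelism. Assumes PyTorch; uses a CUDA
# GPU if one is available, otherwise falls back to the CPU.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

if device == "cuda":
    torch.cuda.synchronize()  # GPU work is asynchronous; finish setup first
start = time.time()
c = a @ b                     # on a GPU, this one call fans out across thousands of cores
if device == "cuda":
    torch.cuda.synchronize()  # wait for the multiply before reading the clock
print(f"{device}: 4096x4096 matrix multiply in {time.time() - start:.4f}s")
```

Note that the code never mentions cores or threads; the parallelism is the library’s and the hardware’s business. That separation is also why cloud providers can offer GPUs as just another kind of capacity.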
8. Computer chess is in the news, so of course I have to comment. The core claim is something like:
- Google’s AlphaZero technology was trained for four hours playing against itself, with no human heuristic input.
- It then decisively beat Stockfish, previously the strongest computer chess program in the world.
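For a feel of what “trained by playing against itself, with no human heuristic input” means, here is a minimal sketch. To be clear, this is not AlphaZero’s algorithm (which pairs deep neural networks with Monte Carlo tree search); it is tabular learning on single-pile Nim, and the game, hyperparameters, and names are all my own illustrative choices.

```python
# An illustrative self-play loop, NOT AlphaZero's method: tabular learning
# on single-pile Nim. Both "players" share one value table and improve by
# playing against themselves, with no human strategy built in.
import random
from collections import defaultdict

PILE = 10          # starting pile size (illustrative choice)
MOVES = (1, 2, 3)  # remove 1-3 stones per turn; taking the last stone wins

Q = defaultdict(float)   # Q[(pile, move)] -> estimated value for the mover
ALPHA, EPSILON = 0.1, 0.2

def choose(pile, explore=True):
    legal = [m for m in MOVES if m <= pile]
    if explore and random.random() < EPSILON:
        return random.choice(legal)        # occasional random exploration
    return max(legal, key=lambda m: Q[(pile, m)])

def self_play_episode():
    history, pile = [], PILE
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # Whoever moved last won. Credit the outcome backward through the game
    # with alternating signs, so both "sides" learn from the same table.
    reward = 1.0
    for state_move in reversed(history):
        Q[state_move] += ALPHA * (reward - Q[state_move])
        reward = -reward

for _ in range(20_000):
    self_play_episode()

# With enough episodes this tends toward Nim's known optimal policy
# (take pile % 4 stones whenever that is a legal move).
print({p: choose(p, explore=False) for p in range(1, PILE + 1)})
```

The pattern-recognition point survives the simplification: the program ends up with a table of position values, not an explanation of why those positions are good.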
My thoughts on that start:
- AlphaZero actually beat a very crippled version of Stockfish.
- That’s still impressive.
- Google only released a small fraction of the games. But in the ones it did release, about half had a common theme — AlphaZero seemed to place great value on what chess analysts call “space”.
- This all fits my view that recent splashy AI accomplishments are focused on pattern recognition.