Through the looking glass: For most major tech advancements, the more mature a technology gets, the easier it is to understand. Unfortunately, the exact opposite seems to be happening in the world of artificial intelligence, or AI. As the machine learning, neural networks, hardware advancements, and software developments meant to drive AI forward all continue to evolve, the picture they're painting is only getting more confusing.

At a basic level, it's now much less clear what AI realistically can and cannot do at the present moment. Yes, there's plenty of speculation about what AI-driven technologies will eventually be able to do, but several of the things we were led to believe they could do right now are turning out to be a lot less "magical" than they first appeared.

In the case of speech-based digital assistants, for example, numerous stories have been written recently about how the perceived intelligence of personal assistants like Alexa and Google Assistant is really based more on things like prediction branches that were built by humans after listening to thousands of hours of people's personal recordings.

In other words, people analyzed typical conversations based on those recordings, determined the likely steps in the dialog, and then built sophisticated logic branches from that analysis. While I can certainly appreciate that this represents some pretty respectable analysis, and the type of percentage-based prediction that early iterations of machine learning are known for, it's a long way from any type of "intelligence" that actually understands what's being said and responds appropriately. Plus, it clearly raises some serious questions about privacy that I believe have started to negatively impact the usage rates of some of these devices.
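To make that distinction concrete, here's a purely illustrative sketch of what hand-built dialog branches amount to. The intents, trigger phrases, and responses below are hypothetical and not how any real assistant is implemented, but the pattern is the point: the "smarts" are lookups over situations that humans anticipated in advance.

```python
# Purely illustrative sketch of hand-built dialog branches (hypothetical
# intents and phrasings -- not the actual logic of any real assistant).
# The "intelligence" is just patterns and responses written ahead of time.

DIALOG_BRANCHES = {
    "set_timer": {
        "triggers": ["set a timer", "start a timer", "remind me in"],
        "follow_up": "For how long?",
    },
    "play_music": {
        "triggers": ["play some music", "put on a song"],
        "follow_up": "Any particular artist?",
    },
}

def respond(utterance: str) -> str:
    """Match the utterance against pre-built branches; no understanding involved."""
    text = utterance.lower()
    for intent, branch in DIALOG_BRANCHES.items():
        if any(trigger in text for trigger in branch["triggers"]):
            return branch["follow_up"]
    return "Sorry, I didn't catch that."  # fallback when no branch matches

print(respond("Hey, can you set a timer?"))  # -> "For how long?"
```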

On top of that, recent research by IDC on real-world business applications of AI showed failure rates of up to 50% in some of the companies that have already deployed AI in their enterprises. While there are clearly a number of factors potentially at play, it's not hard to see that some of the original promise of AI isn't exactly living up to expectations. Of course, a lot of this is due to the unmet expectations that are almost inevitably part of a technology that's been hyped up to such an enormous degree.

Early discussions around what AI could do implied a degree of sophistication and capability that was clearly beyond what was realistically possible at the time. However, there have been some very impressive implementations of AI that do seem to suggest a more general-purpose intelligence at work. The well-documented examples of systems like AlphaGo, which could beat even the best players in the world at the very sophisticated, multi-layered strategy necessary to win at the ancient Asian game called Go, gave many the impression that AI advances had arrived in a legitimate way.

In addition, just this week, Microsoft pledged $1 billion to a startup called OpenAI LP in an effort to work on creating better artificial general intelligence systems. That's a strong statement about the perceived pace of advancements in these more general-purpose AI applications and not something that a company like Microsoft is going to take lightly.

The problem is, these seemingly contradictory forces, both against and for the more "magical" types of advances in artificial intelligence, leave many people (myself included) unclear as to what the current state of AI really is. Admittedly, I'm oversimplifying to a degree. There is an enormous range of AI-focused efforts and a huge number of variables that go into them, so it's not realistic to expect, much less find, a simple set of reasons for why some AI applications seem so successful and why others are so much less so (or, at the very least, a lot less "advanced" than they first appear). Still, it's not easy to tell how successful many of the early AI efforts have been, nor how much skepticism we should apply to the promises being made.

Interestingly, the problem extends to the early hardware implementations of AI capabilities and the features they enable as well. For example, virtually all premium smartphones released over the last year or two have some level of dedicated AI silicon built into them for accelerating features like on-device face recognition, or other computational photography features that basically help make your pictures look better (such as adding bokeh effects from a single camera lens).

The confusing part here is that the availability of these features generally doesn't depend on whether your phone includes, for example, a Qualcomm Snapdragon 835 or later processor or an Apple A11 or later series chip, but rather on what version of Android or iOS you're running. Phones that don't have dedicated AI accelerators still offer the same functions (in the vast majority of cases) if they're running newer versions of Android or iOS; the tasks are simply handled by the CPU, GPU, or another component inside the phone's SoC (system on a chip). In theory, the tasks are handled slightly faster, slightly more power efficiently, or, in the case of images, with slightly better quality if you have dedicated AI acceleration hardware, but the differences are currently very small and, more importantly, subject to a great deal of variation based on software and software-layer interactions. In other words, even phones without dedicated AI acceleration at the silicon level are still able to take advantage of these features.
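To give a rough sense of how that fallback works, here's a minimal sketch using TensorFlow Lite's Python API (just one possible path, used purely as an example; the model file and delegate library names are hypothetical placeholders). The app asks for a hardware delegate if one is available and otherwise runs the very same model on the CPU.

```python
# Sketch of hardware fallback with TensorFlow Lite.
# "model.tflite" and "libvendor_npu_delegate.so" are hypothetical placeholders.
import tensorflow as tf

def make_interpreter(model_path: str) -> tf.lite.Interpreter:
    try:
        # Ask for a vendor-supplied accelerator delegate if the device ships one...
        delegate = tf.lite.experimental.load_delegate("libvendor_npu_delegate.so")
        return tf.lite.Interpreter(model_path=model_path,
                                   experimental_delegates=[delegate])
    except (ValueError, OSError):
        # ...otherwise the same model simply runs on the CPU.
        return tf.lite.Interpreter(model_path=model_path)

interpreter = make_interpreter("model.tflite")
interpreter.allocate_tensors()
# From here on, the calling code is identical either way -- which is why
# phones without dedicated AI silicon can still offer the same features.
```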

This is due, primarily, to the extremely complicated layers of software necessary to write AI applications (or features). Not surprisingly, writing code for AI is very challenging for most people to do, so companies have developed several different types of software that abstract away the hardware (that is, put more distance between the code that's being written and the specific instructions executed by the silicon inside a device). The most common layer for AI programmers to write to is what's called a framework (e.g., TensorFlow, Caffe, Torch, Theano, etc.). Each of these frameworks provides different structures and sets of commands or functions for writing the software you want to write. Frameworks, in turn, talk to operating systems and translate their commands for whatever hardware happens to be on the device.
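As a small illustration of working at the framework layer, the TensorFlow snippet below (with arbitrary numbers) describes a tiny computation and leaves it to the framework and operating system to decide which piece of hardware actually executes it.

```python
# Working at the framework layer: describe the math, let TensorFlow map it
# onto whatever hardware the runtime finds (CPU, GPU, or other accelerator).
import tensorflow as tf

print("Devices the framework can see:", tf.config.list_physical_devices())

# A tiny, arbitrary computation -- the developer never touches the silicon.
weights = tf.random.normal([4, 2])
inputs = tf.constant([[1.0, 2.0, 3.0, 4.0]])
output = tf.nn.relu(tf.matmul(inputs, weights))
print(output.numpy())  # same code, regardless of which chip executed it
```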

In theory, writing straight to the silicon (often called "the metal") would be more efficient and wouldn't sacrifice any performance to the various layers of translation that currently have to occur. However, very few people have the skills to write AI code straight to the metal. As a result, we currently have a complex development environment for AI applications, which makes it even harder to understand how advanced these applications really are.

Ultimately, there's little doubt that AI is going to have an extremely profound influence on the way we use virtually all of our current computing devices, as well as the even larger range of intelligent devices, from cars to home appliances and beyond, that are still to come. In the short term, however, it certainly seems that the advances we may have been expecting to appear soon still have a way to go.

Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting and market research firm. You can follow him on Twitter. This article was originally published on Tech.pinions.