
AI Infuses the Next Generation of Web Application Development

By James Kobielus


Mainstream Web application development is beginning to incorporate the tools of Artificial Intelligence (AI). Before long, we’ll see programmers of all sorts incorporating AI into even the most mundane browser-based applications.

If you doubt what I just said, take a look at the following recent industry developments:

Citizen Developers are Wielding AI-infused Robotic Process Automation (RPA) Tools

RPA auto-generates source code and other program elements from externally observable application artifacts and behaviors. RPA tools converge augmented programming with Machine Learning (ML), business process orchestration, and Web content management. Essentially, RPA software “robots” infer an application’s underlying logic from its presentation layer, user-interface controls, interaction and messaging flows, and application programming interfaces.

In this regard, the robots rely on ML, Deep Learning (DL), Natural Language Processing (NLP), and computer vision to infer source code from externally accessible program elements. Key RPA capabilities include screen-scraping of UI presentation elements, optical character recognition of on-screen text, auto-sensing of browser-level controls and document object models, recording of human-user keystrokes and clicks, and user-specified flowcharting of UI flows.
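To make one of these capabilities concrete, here is a minimal browser-side sketch of keystroke-and-click recording using only standard DOM event listeners. The RecordedEvent shape and the selector heuristic are illustrative assumptions for this sketch, not the recording format of any particular RPA product.

```typescript
// Minimal sketch: capturing user clicks and keystrokes as a replayable trace.
// The RecordedEvent shape is an illustrative assumption, not a specific RPA tool's format.
interface RecordedEvent {
  kind: "click" | "keydown";
  selector: string;   // rough CSS selector for the target element
  key?: string;       // key pressed, for keydown events
  timestamp: number;
}

const trace: RecordedEvent[] = [];

// Derive a coarse selector for the event target (id if present, else tag name).
function describe(el: Element): string {
  return el.id ? `#${el.id}` : el.tagName.toLowerCase();
}

document.addEventListener("click", (e) => {
  trace.push({ kind: "click", selector: describe(e.target as Element), timestamp: Date.now() });
});

document.addEventListener("keydown", (e) => {
  trace.push({ kind: "keydown", selector: describe(e.target as Element), key: e.key, timestamp: Date.now() });
});

// A downstream RPA engine could mine `trace` to infer the application's UI flow.
```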

Web Developers are Using JavaScript to Compose AI in the Browser

The Web developer ecosystem depends intimately on front-end JavaScript programming frameworks such as React, Angular, and Vue, as well as back-end frameworks such as Node.js. Increasingly, there are JavaScript ML, DL, and NLP libraries for use in front-end Web apps. As discussed in this recent blog, available JavaScript libraries for ML, DL, and NLP include Brain.js, Synaptic, Neataptic, ConvNetJS, WebDNN, Deeplearn.js, TensorFlow Deep Playground, Compromise, Neuro.js, ml.js, Mind, and Natural.

Most support interactive visualization of ML/DL/NLP models in the browser. They generally include built-in neural-network architectures such as multi-layer perceptrons, multi-layer long short-term memory (LSTM) networks, liquid state machines, and gated recurrent units, as well as pre-built models for classification, regression, and image recognition. Some compress model data and accelerate execution through browser technologies such as WebAssembly and WebGPU, and also leverage local GPUs through WebGL and other interfaces. They differ in their support for supervised, unsupervised, and reinforcement learning.
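For a taste of what in-browser training looks like, here is a minimal sketch that uses Brain.js’s documented NeuralNetwork class to learn the XOR function. It assumes the library has been loaded via a script tag that exposes the global brain object; the hidden-layer size is an arbitrary choice for illustration.

```typescript
// Minimal sketch: supervised training of a small feed-forward network with Brain.js.
// Assumes brain.js is loaded via a <script> tag, exposing the global `brain` object.
declare const brain: any;

const net = new brain.NeuralNetwork({ hiddenLayers: [3] });

// Train on the XOR truth table.
net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

console.log(net.run([1, 0])); // should print a value close to 1
```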

Microservices Developers are Decoupling AI Microservices for Orchestration on Kubernetes

Beyond the browser and into the Cloud, AI microservices are being distributed across Cloud and edge environments for orchestration. To this end, IBM has just introduced a new open-source framework for running distributed DL microservices over Kubernetes. The new Fabric for Deep Learning (FfDL), which is the foundation of IBM’s recently released Deep Learning as a Service (DLaaS) Cloud offering, reduces the need for tight coupling between DL microservices built in TensorFlow, PyTorch, Caffe2, and other libraries and frameworks.

It does this by keeping each microservice as simple and stateless as possible and exposing its functions through RESTful APIs, which FfDL uses to access multiple DL libraries. This isolates DL component failures and allows each service to be independently developed, tested, deployed, scaled, and upgraded. It supports flexible management of DL hardware resources and training jobs, with monitoring and management across heterogeneous clusters of GPUs and CPUs on top of Kubernetes.
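To illustrate the stateless, REST-driven style, here is a rough sketch of submitting a training job over HTTP from a TypeScript client. The endpoint URL, payload fields, and response shape are hypothetical placeholders for this sketch, not FfDL’s actual API; the real contract is defined in the FfDL documentation.

```typescript
// Hypothetical sketch of submitting a training job to a stateless REST endpoint.
// The URL, payload fields, and response shape are placeholders, not FfDL's real API.
async function submitTrainingJob(manifest: Record<string, unknown>): Promise<string> {
  const response = await fetch("https://ffdl.example.com/v1/training-jobs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(manifest),
  });
  if (!response.ok) {
    throw new Error(`Job submission failed: ${response.status}`);
  }
  const { jobId } = await response.json(); // hypothetical response field
  return jobId;
}

// Because the service is stateless, each request carries everything the job needs.
submitTrainingJob({ framework: "tensorflow", gpus: 2, image: "my-trainer:latest" })
  .then((id) => console.log(`Submitted job ${id}`));
```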

All of this enables the framework to support scalable, resilient, and fault-tolerant execution of distributed DL applications. And it allows DL microservices running on scattered compute nodes to learn from massive amounts of data.

Developers are Accessing AI Cloud Services Through a Framework-agnostic API Abstraction Layer

Web, mobile, desktop, and other applications are beginning to access AI functionality through easy abstractions that are agnostic to the underlying models, algorithms, libraries, and frameworks. To that end, the Linux Foundation has just announced a very important initiative: the Acumos AI Project. Backed by AT&T, Tech Mahindra, and others, Acumos defines APIs, an open-source framework, and an AI model catalog to facilitate framework-agnostic access to AI apps.

The framework does this by exposing AI frameworks such as TensorFlow through a common API. It will include a visual design editor with drag-and-drop application design and chaining, letting trainers and other end users deploy normally complicated AI apps for training and testing in minutes. It supports such languages as Java, Python, and R. It will package and export production-ready AI applications as Docker files and incorporate code that Baidu is expected to contribute to leverage Kubernetes’ elastic scheduling.
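To show what a framework-agnostic abstraction can look like from the application side, here is a generic sketch of a prediction interface with one possible HTTP-backed implementation. The interface, class, endpoint, and field names are illustrative assumptions and do not reproduce Acumos’s actual APIs.

```typescript
// Generic sketch of a framework-agnostic model interface; names are illustrative
// and do not reproduce Acumos's actual APIs.
interface Model {
  predict(features: number[]): Promise<number[]>;
}

// One possible backing: a TensorFlow-trained model exposed over HTTP.
class RemoteModel implements Model {
  constructor(private endpoint: string) {}

  async predict(features: number[]): Promise<number[]> {
    const res = await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ instances: [features] }), // hypothetical request shape
    });
    const { predictions } = await res.json(); // hypothetical response field
    return predictions[0];
  }
}

// Application code depends only on the Model abstraction, not the framework behind it.
async function classify(model: Model, input: number[]): Promise<number[]> {
  return model.predict(input);
}
```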

Please refer to this recent Wikibon research note for a broader discussion of AI’s role in the larger paradigm of augmented programming.
