Artificial Intelligence | May 30, 2019 | Last updated: June 13, 2019


The last decade has seen unimaginable progress in the area of Artificial Intelligence, with giant leaps in the advancement of machine learning and deep learning algorithms. This has made it possible to build intelligent decision-making into software, something that seemed impossible just ten years ago.

Predictive analytics, NLP chatbots, facial and object recognition, sentiment analysis, and recommendations/content filtering based on content classification all seem easy to do now, with companies and individuals producing state-of-the-art algorithms using their favorite math libraries. On top of that, AI-based startups are already offering products in sectors like Virtual Reality, the Internet of Things, E-Commerce, and Marketing Intelligence.

A few quick Google searches will tell you how many billions have already been invested in AI startups across various geographies. Other searches will tell you what innovations are happening in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning, or walk you through building an NLP chatbot in under 10 steps. What I am interested in is the future of AI. Where will AI take us in the next 10 years? Are today's AI-based products and value propositions ready to handle the demand there will be in 10 years? I don't know that. But what I do know is that three things will be crucial in shaping the future of AI.

Distributed Artificial Intelligence

Before I begin making my point, here is what I want you to think over:

Training a typical machine learning model the traditional way requires you to build a large dataset and keep it on your local machine or in a data center. This is a very centralized approach, and most big companies have been doing it for years: data is gathered on a central server, and machine learning models are trained on it.
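To make that picture concrete, here is a minimal sketch of the centralized workflow using scikit-learn. The file all_data.csv and its label column are hypothetical stand-ins for a real dataset sitting on one machine:

```python
# Centralized training: the entire dataset lives in one place,
# and the model is fit in a single pass over it.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("all_data.csv")            # whole dataset on one machine
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```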

That being said, imagine: what if the data becomes outdated and no longer serves its purpose? What if your model needs to evolve continuously over new data? What if training the model requires a dataset so large that it cannot be kept in one place? How would you scale your cognitive system?

My first argument is that the ability to learn continuously and adapt to changes in the data requires a distributed approach. Continual learning is the paradigm of learning continuously and adaptively, updating our predictive models as data becomes available in real time while still reusing previously gained knowledge. Now how can you possibly scale your cognitive system when data keeps coming in? Distributing the cognitive system is the obvious approach.
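As a rough illustration, here is what the continual-learning part can look like with scikit-learn's partial_fit, which updates an existing model batch by batch instead of retraining from scratch. new_batches() is a hypothetical stand-in for a real-time data stream:

```python
# Continual learning sketch: the model keeps its previously learned
# weights and is updated incrementally as each new batch arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])                  # must be declared up front

def new_batches():
    # stand-in for a real-time data source
    rng = np.random.default_rng(0)
    for _ in range(10):
        X = rng.normal(size=(32, 5))
        y = (X.sum(axis=1) > 0).astype(int)
        yield X, y

for X_batch, y_batch in new_batches():
    model.partial_fit(X_batch, y_batch, classes=classes)  # reuse prior knowledge
```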

My second argument is that with the evolution of AI chipsets, AI is moving away from the cloud to end devices. Mobile devices are becoming exceedingly powerful, with GPU-enabled chipsets. This allows AI to be federated onto edge devices rather than being centralized. In 2017, Google coined the term Federated Learning, in which a model is trained on data residing on edge devices like mobile phones, which also tackles the privacy issues of keeping user data on centralized private servers. Federated learning, as I see it, is again a form of a distributed approach to machine learning.
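Here is a toy sketch of the idea behind federated averaging, written in plain NumPy with synthetic data. This is not Google's implementation, just an illustration of how local updates can be combined while the raw data never leaves the devices:

```python
# Federated averaging sketch: each device trains locally on its own
# data; only the weights (never the data) are averaged on the server.
import numpy as np

rng = np.random.default_rng(42)
devices = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(5)]
global_w = np.zeros(3)

for round_ in range(20):
    local_ws = []
    for X, y in devices:                       # happens on each device
        w = global_w.copy()
        for _ in range(5):                     # a few local SGD steps
            grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
            w -= 0.1 * grad
        local_ws.append(w)
    global_w = np.mean(local_ws, axis=0)       # server averages the weights
```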

My third argument is that AI applications require a lot of computing resources, especially when they involve very large datasets. Distributing the processing across a set of autonomous processing nodes speeds up computation, which means using multiprocessor systems and clusters of computers running a multi-agent system.
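As a small illustration of that kind of fan-out, here is a sketch using Python's multiprocessing pool to spread work over data shards. score_shard() is a placeholder for any expensive per-shard computation:

```python
# Parallel processing sketch: an embarrassingly parallel workload is
# split into shards and distributed across a pool of worker processes.
from multiprocessing import Pool

def score_shard(shard):
    return sum(x * x for x in shard)           # placeholder for real work

if __name__ == "__main__":
    shards = [range(i * 10_000, (i + 1) * 10_000) for i in range(8)]
    with Pool() as pool:
        results = pool.map(score_shard, shards)  # fan out across cores
    print(sum(results))
```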

Distributed Artificial Intelligence is a class of technologies that uses multi-agent systems to distribute problem-solving across a set of autonomous processing nodes at very large scale. Decisions are made through interactions and communication between these agents. Each agent has only partial knowledge of the system: it may not know what constitutes the reward function, or have any picture of the overall system dynamics.
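A toy way to see agents with only partial knowledge converging on a shared result is gossip averaging, sketched below. The protocol is purely illustrative, not a specific DAI framework:

```python
# Gossip averaging sketch: no agent ever sees the global state, yet
# repeated pairwise exchanges drive all agents toward the global mean.
import random

values = [random.uniform(0, 100) for _ in range(10)]  # each agent's local view

for _ in range(200):
    i, j = random.sample(range(len(values)), 2)       # two agents communicate
    avg = (values[i] + values[j]) / 2
    values[i] = values[j] = avg                       # exchange and agree

print(values)  # all agents end up close to the true global mean
```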

I am highly convinced that, whatever AI may emerge to become a few years from now, it will be distributed.

If you have any questions or you want to discuss your AI project, just click here.

Comments

  1. Sarah johnes says:

    Artificial Intelligence (AI) is the branch of computer science that emphasizes the development of intelligent machines that think and work like humans, for example in speech recognition, problem-solving, learning, and planning.

  2. MBB says:

    Distributed AI may be the only way for FOSS projects to participate. A lot of AI code is, or starts out, open and is based on public data, but it can only be run on giant computers by large corporations, making the end product private.
    For projects like GIMP and LibreOffice, combining with technologies like BOINC and torrents will be the only way to offer open alternatives for machine translation.
