The Future of AI Will Take a Different, More General Approach

10/09/2021, Santa Clara, CA // PRODIGY: Feature Story //

The California-based startup ORBAI has developed and patented a design for AGI that can learn more like the human brain: by interacting with the world, encoding and storing memories in narratives, dreaming about them to form connections, creating a model of its world, and using that model to predict, plan, and function at a human level in human occupations.

With this technology, ORBAI aims to develop Human AI: fluent conversational speech built on top of this AGI core, providing AI professional services ranging from customer service to legal, medical, and financial advice, delivered online and inexpensively for the whole world. The core of the Legal AI has already been tested in litigation, with great success.

Brent Oster, President/CEO of ORBAI, has helped Fortune 500 companies (and startups) looking to adopt ‘AI’, but consistently found that deep learning architectures and tools fell far short of their expectations for ‘AI’. Brent started ORBAI to develop something better for them.

Today, if we browse the Internet for news on AI, we find that AI has just accomplished something humans already do, only far better. Still, it isn't easy to develop artificial general intelligence (AGI) through human-created algorithms. Do you think AGI may require machines to create their own algorithms? In your view, what is the future of machines that learn to learn?

This is correct. Today, people design deep learning networks by hand, defining the layers and how they connect, but even after a lot of tinkering they can only get each network to do a specific task: CNNs for image recognition, RNNs for speech recognition, or reinforcement learning for simple problem solving like games or mazes. All of these require a very well-defined and constrained problem, plus labelled data or human input to measure success and train on. This limits the effectiveness and breadth of application of each of these specific methods.
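To make that contrast concrete, here is a minimal sketch (in Python, using the common PyTorch library) of what designing a network by hand looks like: every layer, width, and connection is fixed in advance by the designer, and the result handles only one narrow task, in this case image classification. The architecture and sizes are illustrative assumptions, not an ORBAI design.

import torch
import torch.nn as nn

class HandDesignedCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # The human designer picks every layer, kernel size, and width by hand.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Hard-wired to 32x32 RGB inputs; change the input and the design breaks.
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

Nothing in this network transfers to speech or planning; a different task means another round of hand design, which is exactly the limitation described above.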

ORBAI has built a toolset called NeuroCAD (https://www.orbai.ai/neurocad-v4-1.htm) that uses genetic algorithms to evolve more powerful, general-purpose spiking neural networks, shaping them to fill in the desired functionality - so yes, the tools are designing the AI. One example is our SNN autoencoder, which can learn to take in any type of 2D or 3D spatio-temporal input, encode it to a latent, compressed format, and also decode it. The cool part is that you don't have to format or label your data; it learns the encoding automatically. This takes the functionality of CNNs, RNNs, LSTMs, and GANs and combines them into one more powerful, general-purpose analog neural network that can do all these tasks. By itself this is very useful, as the output can be clustered, then the clusters labelled, or associated with other modalities of input, or used to train a conventional predictor pipeline.
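For readers unfamiliar with autoencoders, the sketch below shows the basic encode/decode idea in conventional (non-spiking) form: unlabeled data is compressed to a latent vector and reconstructed, with the input itself serving as the training target, so no labels are needed. This is only a stand-in to illustrate the principle; ORBAI's NeuroCAD version is an evolved spiking network whose internals are not public.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        z = self.encoder(x)        # compressed latent code
        return self.decoder(z), z  # reconstruction plus latent

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)            # a batch of unlabeled data
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # no labels: the target is the input
loss.backward()
opt.step()

The latent codes z are what would then be clustered and labelled, or associated with other modalities of input, as described above.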

But this is just for designing components. There is a second level to NeuroCAD that allows these components to be assembled and connected into structures, and these composite structures can be evolved to do very general tasks. For example, to build a robot controller we might put in two vision autoencoders for stereo vision, a speech-recognition autoencoder for voice commands, and autoencoders for sensors and motion controllers. Then we put an AI decision-making core in the middle that can take in our encoded inputs, store them in memory, learn how sequences of these inputs evolve in time, and store models of what responses are required. Again, all of these autoencoders and components are evolved for their specific area, how they connect is evolved, and so is the decision core in the middle.
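As a hedged illustration of that second level, the sketch below represents a composite controller as a simple genome: a list of components and the wiring between them, both of which a genetic algorithm could mutate. The names and data model here are illustrative assumptions, not NeuroCAD's actual representation.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str        # e.g. "left_vision" (hypothetical component name)
    kind: str        # "autoencoder" or "decision_core"
    latent_dim: int  # size of its encoded output

@dataclass
class ControllerGenome:
    components: list = field(default_factory=list)
    connections: list = field(default_factory=list)  # (source, destination) pairs

robot = ControllerGenome(
    components=[
        Component("left_vision", "autoencoder", 256),
        Component("right_vision", "autoencoder", 256),
        Component("speech_in", "autoencoder", 128),
        Component("motor_sensors", "autoencoder", 64),
        Component("core", "decision_core", 512),
    ],
    connections=[("left_vision", "core"), ("right_vision", "core"),
                 ("speech_in", "core"), ("motor_sensors", "core")],
)

Evolving the structure then amounts to mutating this genome: adding or removing components, resizing latents, and rewiring connections.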

To get this to work, we have to make some guesses about how to design this artificial decision core, the brain in the middle, and seed the genetic algorithms with a couple of decent designs, so it will process the sensory input, store it, and build relationships between the memories, building narratives from inputs and actions with progressively more advanced models that make the robot better able to understand what to do given specific instructions and the state of its world. Once we have an initial guess, we start evolving the components, how they connect to each other, and the architecture of the decision-making core.

So the short answer is yes, we will have evolutionary genetic algorithms design our AI, from the components, to the way they connect, to how they solve problems, starting with small 'brains' and working up, like biological evolution did.
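A minimal sketch of such an evolutionary loop appears below: seed the population with a few decent designs, score each candidate, keep the best, and fill the next generation with their mutants. The callables evaluate_fitness and random_variation_of are placeholders for NeuroCAD's actual evaluation and mutation steps, which are not public.

import random

def evolve(seed_designs, evaluate_fitness, random_variation_of,
           population_size=50, generations=100, elite_fraction=0.2):
    # Start from hand-seeded guesses rather than from scratch.
    population = list(seed_designs)
    while len(population) < population_size:
        population.append(random_variation_of(random.choice(seed_designs)))

    for _ in range(generations):
        # Rank the population and keep the top performers as elites.
        ranked = sorted(population, key=evaluate_fitness, reverse=True)
        elites = ranked[:max(1, int(elite_fraction * population_size))]
        # Refill the population with random variations of the elites.
        population = elites + [random_variation_of(random.choice(elites))
                               for _ in range(population_size - len(elites))]
    return max(population, key=evaluate_fitness)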

For details, see the ORBAI Patents and NVIDIA GTC presentations listed at the bottom of our AGI page: https://www.orbai.ai/artificial-general-intelligence.htm

Many experts, including computer scientists and engineers, predict that artificial general intelligence (AGI) is possible in the near future. But ORBAI suggests it is coming even sooner than we might have anticipated. Could you please shed some light on the project and tell us more about the 3D characters?

What is usually meant by AGI is superhuman AGI, which is the apex of this process, but there are degrees and flavors of artificial general intelligence along the way:

- Having more general neural nets that combine the functionality of CNNs, RNNs, RL, and other Gen 2 AI components into one neural net architecture that is more general and more powerful - one year

- Building an artificial intelligence that can take in sensory inputs, form memories and associations between them, and plan and make decisions with them, at the level of an insect - two years; a rodent - three years

- Human-like conversational speech and general-purpose decision making, but trained only in a specific vocation - four years for a first implementation, six years to make it really work. Some vocations like law and medicine have constrained spaces of information and decisions, so they are easier than building a general human

- These vocational AIs can be trained independently, then later migrated to a common architecture and combined to form a multi-skilled AGI. It would not be a general human AI, but it would have superhuman capability in areas of each profession, deeper and wider knowledge, and the ability to model the future, plan, and predict better than humans.

- Perfecting AGI, making a completely conversational, human-level general AI that is indistinguishable from us and can pass a Turing Test, will most likely require building a synthetic AGI that is much more powerful than a human and can then use all that power to emulate or mimic a human being, if that is what we want it to do.

What most people talk about as AGI is actually superhuman artificial general intelligence. But how do we measure "superhuman"? Deep learning AI is already superhuman in some very specific areas, and with advances like those ORBAI is making, it will become superhuman in broader professional areas in analysis, planning, and prediction. We will have better conversational speech, and we might pass the Turing test in 4-6 years, but how can speech become superhuman after that? Mastering 8 languages or more? Hm, this gets a bit muddier. I think superhuman is when AGI can solve most problems and predict the future far better than we can.

We base our AGI curve on Moore's Law. Unlike current Gen 2 DNN-based AI, we are using analog neural net computers that scale proportionally with existing hardware, evolve to become more efficient, and gain greater capability with time.
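As a back-of-the-envelope illustration of that scaling assumption: if capability grows proportionally with hardware, and hardware doubles roughly every two years per Moore's Law, then capability at year t is baseline * 2^(t/2). The doubling period and baseline below are illustrative assumptions, not ORBAI projections.

def projected_capability(baseline: float, years: float,
                         doubling_years: float = 2.0) -> float:
    # Exponential growth: one doubling per `doubling_years`.
    return baseline * 2 ** (years / doubling_years)

for year in (0, 2, 4, 6, 8):
    print(year, projected_capability(1.0, year))  # 1x, 2x, 4x, 8x, 16x baseline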

So, in summary, what ORBAI is building is an AGI that can take in and analyze large amounts of input data of arbitrary format and type, build models of how its perceived world works, make predictions and plan using those models, and then apply that to specific fields like law, medicine, finance, enterprise planning, administration, agriculture, and others. Because human speech fits this concept of modelling a bi-modal sequence of events, it will be a feature, with the speech anchored to the rest of the memories and world data to give it context and relevance.

From ordering groceries through Alexa to writing an email with Siri, AI has been transforming many aspects of our lives. In your view, how will ORBAI's 3D characters help people transform their lives and drive change?

I have personally used the Alexa, Google, and Siri voice interfaces in my home and have done my best to integrate them into my life and make use of them, but I still find them difficult and awkward, always feeling like there is an easier way to do the same task on a mobile screen. I think this is because these voice interfaces are the equivalent of the DOS-era command-line interfaces, where you state a command and then a set of parameters, and they have to be properly formatted and correct, like "Alexa, what is the weather in Seattle tomorrow", and the speech has to be crisply enunciated in an abnormal, staccato pattern. ORBAI did a lot of work in 2019 testing many speech APIs in the home and at conferences with holographic character kiosks, and found that most ordinary people cannot figure out how to talk to them properly, don't know how to cue the device to listen, and tend to launch into long, rambling monologues, so voice interfaces just don't work for them.

By creating a more advanced conversational AI that uses our core technology to parse speech, understand the flow of human speech, and tie it to memories of real concepts, events, and contexts, we can achieve a more natural back-and-forth flow of conversation between the person and the AI that is much more relevant and grounded, and the AI can direct the conversation to get specific information from the user. Having a 3D character onscreen is mostly to get the person to look at the device and speak clearly into the microphone, and so the AI can watch the person speak, pick up facial expressions, and even lip-read to augment the speech recognition. The characters also have a cool factor, will make our products recognizable, and make for excellent branding. There are already attorneys that fear Justine Falcon, our Legal AI.
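One simple way to combine the audio and visual channels described above is late fusion: weight together the word probabilities from a speech recognizer and a lip-reading model, so the visual channel can disambiguate noisy audio. The sketch below is a generic illustration of that idea; the models, vocabulary size, and weighting are assumptions, not ORBAI's actual pipeline.

import torch

def fuse_predictions(audio_logits: torch.Tensor,
                     visual_logits: torch.Tensor,
                     audio_weight: float = 0.7) -> torch.Tensor:
    # Weighted sum of log-probabilities from the two modalities.
    audio_logp = torch.log_softmax(audio_logits, dim=-1)
    visual_logp = torch.log_softmax(visual_logits, dim=-1)
    return audio_weight * audio_logp + (1 - audio_weight) * visual_logp

# Example: one time step over a hypothetical 10-word vocabulary.
fused = fuse_predictions(torch.randn(1, 10), torch.randn(1, 10))
best_word = fused.argmax(dim=-1)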

Having inexpensive online access to professional services like legal, medicine, finance, education, and other information professions via AI would greatly improve many people's lives, even if it initially just allows them to do a Q&A session with the AI that leads them through better defining the issues they are having, then refers them to a professional, like a doctor, with a concise and professionally written report in the correct language and format. This would reduce office visit time and determine whether coming in is even necessary. Speaking with a lawyer about an issue is difficult for an average person because law is so precisely defined, and its language differs greatly from plain language and concepts, so the AI would act as a bridge to translate between them. For the developing world, extending the AI capability to doing medical diagnosis, and actually acting in a professional capacity in other fields, would provide people with their first and only access to these kinds of services, both changing and saving lives. With the advent of more advanced AGI for doing diagnosis, litigation, and financial planning, the sky becomes the limit.

AI has already proven successful at automating a lot of tasks. Still, most laypeople have a universal question: can AI completely replace flesh-and-blood professionals in the future, or will it just act as a support system to ease the pressure of work?

The two hardest professions to replace will be housekeeper and handyman, simply because these professions require great manual dexterity, the ability to solve a wide variety of unstructured spatial problems using various tools to accomplish an almost unlimited variety of tasks, and a robot body that is strong and dexterous enough, with enough power, to work all day at these tasks.

The simpler professions to automate with AGI will be the information professions, where the problem is constrained by a large amount of knowledge, mental models built on that knowledge, and a limited scope of actions or outcomes. That is why we picked an AI lawyer and an AI doctor as the first candidates for AGI - they are both structured information professions.

We have seen how, in many situations, AI and automation augment professions. ATMs and online banking reduced the work that bank tellers personally have to do, but mostly offloaded the mundane and repetitive work. Most likely this trend of AI augmenting humans will continue.

We have been told that ORBAI is launching an equity fundraising campaign. Could you please tell us how people can invest in the future and, in turn, what benefits they will be getting?

Yes, ORBAI launched an equity crowdfunding campaign on 24 Sept 2021 on www.startengine.com/orbai. The details of the offering are on our campaign page, but the SEC rules prevent us from communicating any specifics to the public, as it would be solicitation. StartEngine also has a great deal of general information about equity crowdfunding at www.startengine.com.

Media Contact:
ORBAI Technologies, Inc.
Brent Oster
+1 408-963-8671
brent.oster@orbai.com



Source: Prodigy.press

Release ID: 42149
