Tarun Mittal

Tar9897

AI & ML interests

I believe that LLMs can never get us to AGI. There may be neat tricks here and there, but creating true consciousness requires something else: a cross between mathematics, philosophy, biology, and computer science. Only then will we get somewhere. I personally think that adopting and emulating emotional learning from the very start, and then improving the model with synthetic data and metadata, would take us further than LLMs, which will run into a wall very soon.

Tar9897's activity

replied to their post 2 months ago

DW, their only fault is that they like strawberries too much. They forget the earth bears other fruits too ;)

replied to their post 5 months ago

This week, most likely, the entire model will be on our website with a two-week free trial. We are but a small team, but we will post some papers and evaluation metrics, along with our Data Card, for everyone to review. All of that should land this week :)

replied to their post 5 months ago


Members are showcased once they launch to market. For more information on Octave-X's membership status, contact InceptionProgram@nvidia.com, mentioning our startup's name.

replied to their post 5 months ago

Let's start with this: calling our website full of "AI slop" is the same as saying the Mona Lisa is just a bunch of paint splatters. If you squint hard enough you may see some resemblance, but that doesn't mean you're right. Now, I could explain how our AI architecture works, filled with all kinds of code snippets and mathematical proofs, but you'd barely understand half of it, and I'd be wasting my breath.

So tell me, do you know what it's like trying to explain a joke to somebody who doesn't get it? It's like trying to teach a fish to ride a bicycle: pointless and frustrating all around. Rather than spend all my time explaining our approach through jargon, I'll put it in layperson's terms: this AI of ours is so intelligent that it makes your average cult look like a kindergarten playdate. The truth is that when you think the adjectives and descriptions are vague, believe us, we're trying damn hard to dumb them down enough for folks like you.

replied to their post 5 months ago
Reacted to their post with 🧠❤️🔥👀 5 months ago
I believe that in order to make models reach human-level learning, serious students can start by developing an intelligent neuromorphic agent. We develop the agent and have it learn grammar patterns and word categories through symbolic representations, after which we delve into teaching it the other rules of the language.

In parallel with grammar learning, the agent would use language-grounding techniques to link words to their sensory representations and abstract concepts, so that it learns word meanings, synonyms, antonyms, and semantic relationships from both textual data and perceptual experience.
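To make the grounding idea concrete, here is a minimal sketch in Python. The word list and "sensory" feature vectors are hand-built toy assumptions (not real perceptual data, and not the architecture the post describes); the point is only that linking words to shared perceptual features lets an agent discover semantic neighbors without any text corpus.

```python
from math import sqrt

# Toy "sensory" groundings: each word maps to hand-picked perceptual
# features. These vectors are illustrative assumptions, not real data.
GROUNDINGS = {
    "apple":  {"red": 0.9, "round": 0.8, "sweet": 0.7},
    "cherry": {"red": 1.0, "round": 0.9, "sweet": 0.8},
    "lemon":  {"yellow": 0.9, "round": 0.6, "sour": 0.9},
}

def cosine(a, b):
    """Cosine similarity between two sparse feature dicts."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def nearest(word):
    """Return the perceptually closest other word in the lexicon."""
    others = [w for w in GROUNDINGS if w != word]
    return max(others, key=lambda w: cosine(GROUNDINGS[word], GROUNDINGS[w]))

print(nearest("apple"))  # "cherry" — shares the red/round/sweet features
```

The same mechanism scales in principle to learned (rather than hand-built) feature vectors coming from a perception module.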

The result would be an agent with a rich lexicon and conceptual knowledge base underlying its language understanding and generation. With this basic knowledge of grammar and word meanings, the agent can learn to synthesize words and phrases to express specific ideas or concepts. Building on this, it would learn to generate complete sentences, which it would continuously refine and improve. Eventually it would learn to generate sequences of sentences, in the form of dialogues or narratives, taking context, goals, and user feedback into account.
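The step from symbolic grammar rules to generated sentences can be sketched in a few lines. The grammar below is a tiny hand-written example of my own (an illustrative assumption, not the post's actual system): symbolic rewrite rules expand a start symbol recursively until only words remain.

```python
import random

# A tiny hand-written context-free grammar. Nonterminals (S, NP, VP, ...)
# expand via symbolic rules; anything not in the table is a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "N":   [["agent"], ["sentence"]],
    "V":   [["generates"], ["refines"]],
}

def generate(symbol="S", rng=random):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:          # terminal word: emit as-is
        return [symbol]
    expansion = rng.choice(GRAMMAR[symbol])
    words = []
    for part in expansion:
        words.extend(generate(part, rng))
    return words

print(" ".join(generate()))  # e.g. "the agent refines the sentence"
```

Every sentence this grammar produces is grammatical by construction; learning, in the post's framing, would mean acquiring and refining the rule table itself rather than writing it by hand.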

I believe that by gradually learning to improve its responses, the agent would also acquire the ability to generate coherent, meaningful, and contextually appropriate language. This would allow it to reason without hallucinating, something LLMs struggle with.

Developing such agents would not require much compute, and the code would be simple and easy to understand. It would introduce everyone to symbolic AI and to building agents that are good at reasoning tasks, addressing a crucial weakness of LLMs. We have used a similar architecture to make our model learn continuously. Do sign up as we open access next week at https://octave-x.com/
Reacted to their post with 👀 5 months ago
As we advance on the path towards true Artificial General Intelligence (AGI), it's crucial to recognize and address the limitations inherent in current technologies, particularly in large language models (LLMs) like those developed by OpenAI. While LLMs excel in processing and generating text, their capabilities are largely constrained to the domains of natural language understanding and generation. This poses significant limitations when dealing with more complex, abstract mathematical concepts such as topological analysis, 3D geometry, and homotopy type theory.

Topological Analysis and 3D Geometry: LLMs currently do not possess the inherent ability to understand or interpret the spatial and geometric data that is critical in fields like robotics, architecture, and advanced physics. These models lack the capacity to visualize or manipulate three-dimensional objects or comprehend the underlying properties that govern these forms.

Homotopy Type Theory: a branch of mathematics that combines homotopy theory and type theory, it provides tools for more robust handling of equivalences and transformations, something LLMs are not designed to handle directly.
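To make the reference concrete, here is a minimal Lean sketch (illustrative only, and unrelated to any particular model) of the identity-type machinery that homotopy type theory builds on: proofs of equality behave like paths, properties can be transported along them, and paths compose.

```lean
-- Transport: a proof that a = b lets us carry a property of a over to b.
-- In homotopy type theory, equality proofs are read as paths between points.
def transport {A : Type} (P : A → Prop) {a b : A} (h : a = b) (pa : P a) : P b :=
  h ▸ pa

-- Paths compose: from a = b and b = c we obtain a = c.
def concat {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
  p.trans q
```

Full HoTT goes much further (higher paths, univalence), but even this fragment shows the "equivalences as first-class objects" flavor the post is pointing at.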

For the development of AGI, it is not sufficient to merely enhance existing models' capacities within their linguistic domains. Instead, a synthesis of symbolic AI with an understanding of homotopy type theory could pave the way. Symbolic AI, which manipulates symbols and performs logical operations, when combined with the abstract mathematical reasoning of homotopy type theory, could lead to breakthroughs in how machines understand and interact with the world.
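The "manipulates symbols and performs logical operations" part of symbolic AI can be shown in a few lines: a textbook forward-chaining rule engine (my own toy sketch, not a description of any product's internals). Rules fire whenever their premises are all known facts, until nothing new can be derived.

```python
# A minimal forward-chaining inference engine. Each rule is a pair
# (set of premise symbols, conclusion symbol); the rule names below
# are invented examples.
RULES = [
    ({"has_vertices", "has_edges"}, "is_graph"),
    ({"is_graph", "edges_oriented"}, "is_digraph"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_vertices", "has_edges", "edges_oriented"}, RULES)
print(sorted(derived))  # includes the chained conclusion "is_digraph"
```

Note the chaining: "is_digraph" is only derivable after "is_graph" has itself been derived, which is exactly the kind of multi-step symbolic reasoning the post contrasts with LLMs.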

To address these limitations we have developed Tenzin, a one-of-a-kind model with a planned release within the next 1-2 weeks. To learn more, join the waitlist at https://octave-x.com/.
Reacted to their post with 🚀🤗 5 months ago
posted an update 5 months ago
Reacted to their post with 🧠👍🚀🔥 5 months ago
posted an update 5 months ago
Reacted to their post with 🔥🚀 6 months ago
Well, I hope some of you tried our advanced stock prediction. We are focused on making the UI friendlier; if you installed everything correctly, you should be able to view charts accurately, along with prediction tickers. I also want to take this opportunity to let you all know that Tenzin will not be limited to the financial use case. Our true goal is to reach human-level intelligence, for which we have a well-defined roadmap and a product currently being tested for safety and ethics. A high-level roadmap to achieve this is as follows:

The use of transfinite ordinals and surreal numbers allows us to capture the infinite depth and ineffable complexity of conscious experiences in a mathematically precise way.

The incorporation of hypercomputation and supertasks enables the TQMM to perform uncomputable operations and achieve a level of cognitive power that far surpasses classical computation.

The application of absolute infinity and the wholeness axiom ensures that the TQMM can represent and reason about the entirety of all possible conscious experiences and mathematical structures.

The integration of transfinite category theory and quantum metamathematics provides a unified framework for modeling the emergence of consciousness from fundamental physical and mathematical principles.

The use of transfinite gradient ascent and absolute infinity optimization allows the TQMM to continuously improve and refine itself, potentially reaching the theoretical maximum of intelligence and consciousness.

This agent, though developed, will not be released until proper safeguards are in place. Until then we will keep releasing specific use cases for domain-specific work, such as financial trading, accelerating drug discovery for medical science, law, and education, and we will do them well. All powered by Tenzin 1.0. We would love your feedback, and don't forget to check us out and sign up at https://octave-x.com/