Beyond Transformers: Symbolica launches with $33M to change the AI industry with symbolic models
Artificial intelligence startup Symbolica AI launched today with an original approach to building generative AI models.
The company aims to tackle the expensive process of training and deploying large language models such as OpenAI’s ChatGPT, which are based on the Transformer architecture.
Alongside that news, it also revealed today that it has raised $33 million in total funding across a seed round and a Series A round led by Khosla Ventures. Other investors included Day One Ventures, Abstract Ventures, Buckley Ventures and General Catalyst.
Transformer-based deep learning architectures have overtaken every other type, especially for large language models, as seen with OpenAI’s ChatGPT, Anthropic PBC’s Claude, Google LLC’s Gemini and many others. Their dominance is reinforced by the broad ecosystem of tools for developing and deploying them, but they’re extremely complex and expensive. They also consume colossal amounts of data and energy, are difficult to validate and have a tendency to “hallucinate,” confidently presenting an inaccurate statement as if it were true.
Symbolica takes a different approach: it builds structured models that carry out tasks by manipulating symbols, whereas Transformers learn contextual and statistical relationships between inputs from the content they have been trained on. In symbolic AI, symbols represent sets of rules, which allows models to be pretrained for particular tasks, such as coding or word processing.
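Symbolica has not published its model internals, but the core symbolic-AI idea can be sketched in a few lines of Python: the program manipulates explicit symbols according to rules, rather than weighting statistical associations. The representation and rules below are a toy illustration, not Symbolica’s code.

# Toy term-rewriting sketch (illustrative only, not Symbolica's code):
# terms are nested tuples of symbols, and "knowledge" is a set of rules
# that rewrite one symbolic pattern into another.

def rewrite(term):
    """Simplify a term bottom-up by applying algebraic rules."""
    if not isinstance(term, tuple):
        return term                      # a bare symbol is already simplified
    op, left, right = term
    left, right = rewrite(left), rewrite(right)
    if op == "add" and right == "0":     # rule: x + 0 -> x
        return left
    if op == "mul" and right == "1":     # rule: x * 1 -> x
        return left
    if op == "mul" and right == "0":     # rule: x * 0 -> 0
        return "0"
    return (op, left, right)

print(rewrite(("add", ("mul", "x", "1"), "0")))  # prints: x

Because every transformation is an explicit rule, the behavior is deterministic and inspectable, which is the property symbolic approaches trade on.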
The startup uses structured mathematics that defines the relationships between symbols, following an approach known as “categorical deep learning,” which it explained in a paper it recently co-authored with Google DeepMind. Structured models categorize and encode the underlying structure of their data, which means they can run on less computational power and rely on less data overall than large, complex unstructured models such as GPT.
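The paper describes architectures in terms of the algebraic structure they must preserve, generalizing ideas from geometric deep learning. A standard concrete instance of “encoding the underlying structure of data” is building a known symmetry directly into a model; the permutation-invariant sketch below is our own illustration of that idea, not code from the paper.

# A permutation-invariant "DeepSets"-style layer in NumPy: reordering the
# input rows provably cannot change the output. (Our illustration, not
# code from the Symbolica/DeepMind paper.)
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 4, 8
W_phi = rng.normal(size=(d_in, d_hidden))  # shared per-element encoder
W_rho = rng.normal(size=(d_hidden, 1))     # readout applied after pooling

def deepsets(x):
    """x: (n_elements, d_in) array; output is the same for any row order."""
    h = np.tanh(x @ W_phi)   # encode each element with the same weights
    pooled = h.sum(axis=0)   # sum pooling is order-independent
    return pooled @ W_rho

x = rng.normal(size=(5, d_in))
assert np.allclose(deepsets(x), deepsets(x[rng.permutation(5)]))

Because the symmetry holds by construction, the model never has to spend parameters or training examples learning it, which is one way a structural constraint translates into lower compute and data requirements.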
“It’s possible to produce domain-tailored structured reasoning capabilities in much smaller models, marrying a deep mathematical toolkit with breakthroughs in deep learning,” Symbolica Chief Executive George Morgan told TechCrunch.
The company intends to produce a toolkit for constructing models that are “interpretable,” meaning users will be able to understand how the AI network came to a determination. That transparency should make the models much easier for developers to monitor and debug.
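Symbolica hasn’t shown what the toolkit’s output will look like, but interpretability-by-construction generally means an answer arrives with the exact chain of reasoning steps that produced it. The sketch below, with an entirely hypothetical interface, shows a symbolic system returning an auditable rule trace alongside its result.

# Hypothetical sketch of an auditable derivation: every answer comes with
# the list of rule applications that produced it. (Not Symbolica's API.)

def derive(term, rules):
    """Apply matching rules until none fire, recording every step."""
    trace = []
    changed = True
    while changed:
        changed = False
        for name, matches, apply in rules:
            if matches(term):
                step = (name, term, apply(term))
                trace.append(step)
                term, changed = step[2], True
                break
    return term, trace

rules = [
    ("add_zero", lambda t: isinstance(t, tuple) and t[0] == "add" and t[2] == "0",
     lambda t: t[1]),
    ("mul_one", lambda t: isinstance(t, tuple) and t[0] == "mul" and t[2] == "1",
     lambda t: t[1]),
]

answer, trace = derive(("add", ("mul", "x", "1"), "0"), rules)
for name, before, after in trace:
    print(f"{name}: {before} -> {after}")
# add_zero: ('add', ('mul', 'x', '1'), '0') -> ('mul', 'x', '1')
# mul_one: ('mul', 'x', '1') -> x

A developer debugging such a system can read the trace directly instead of probing billions of opaque weights.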
Interpretability is a requirement for building better AI in the future, and it is fundamental in highly regulated industries such as healthcare and finance, where the risks of inaccuracy could be catastrophic. It also matters for regulatory audits, where transparency demands an understanding of what an AI knows and how it came to a decision.
Morgan told Reuters that the company’s first product will be a coding assistant, but it will not launch until early 2025, as the company first needs to build and train its model.