Foundation Models
BLOOM
BLOOM is an open-access multilingual language model with 176 billion parameters, trained on vast amounts of text using industrial-scale computational resources. It can follow human instructions and continue text from a given prompt in dozens of languages.
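A minimal sketch of prompting BLOOM through the Hugging Face transformers API. The small bigscience/bloom-560m checkpoint is assumed here only to keep the example runnable on a single machine; the full 176B model exposes the same interface but requires multi-GPU inference.

```python
# Text continuation with a small BLOOM checkpoint (assumed: bigscience/bloom-560m).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```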
Llama 2
Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters...
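A minimal sketch of loading a Llama 2 chat checkpoint with transformers. The meta-llama/Llama-2-7b-chat-hf model ID is an assumption (access requires accepting Meta's license on the Hugging Face Hub), and device_map="auto" assumes the accelerate package is installed.

```python
# Generating a response with a Llama 2 chat model (assumed: meta-llama/Llama-2-7b-chat-hf).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain what a foundation model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```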
GPT-3.5
GPT-3.5 is an advanced conversational artificial intelligence language model developed by OpenAI. It has been trained on a diverse range of internet text to generate human-like responses in natural language conversations...
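A minimal sketch of calling GPT-3.5 through the OpenAI Chat Completions API (openai>=1.0). The example assumes an OPENAI_API_KEY environment variable is set; the system and user messages are placeholders.

```python
# Chat completion request against the gpt-3.5-turbo model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GPT-3.5 is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```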
GPT-J
GPT-J is a 6-billion-parameter, GPT-2-like causal language model trained on the Pile dataset. GPT-J is intended to generate text in a variety of contexts, such as chatbots, language translation, and content creation...
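A minimal sketch of text generation with GPT-J-6B via transformers. The float16 revision of the EleutherAI/gpt-j-6B checkpoint is assumed here to roughly halve memory use; the full-precision weights load the same way without the revision argument.

```python
# Sampling a continuation from GPT-J-6B (assumed: float16 revision of EleutherAI/gpt-j-6B).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16
)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```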
FLAN-T5
FLAN-T5 was released in the paper Scaling Instruction-Finetuned Language Models - it is an enhanced version of T5 that has been fine-tuned on a mixture of tasks. FLAN-T5 is intended to be used as a conversational AI assistant, capable of answering questions, providing explanations, and engaging in interactive dialogue with users...
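A minimal sketch of instruction-following with FLAN-T5 via transformers. The google/flan-t5-base checkpoint is assumed to keep the example small; the larger variants (flan-t5-xl, flan-t5-xxl) use the same sequence-to-sequence API.

```python
# Answering an instruction-style prompt with FLAN-T5 (assumed: google/flan-t5-base).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = "Answer the question: why is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```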
Grounding DINO
An open-set object detector that combines the Transformer-based detector DINO with grounded pre-training. It can detect arbitrary objects from human inputs such as category names or referring expressions, making it useful for zero-shot object detection tasks...
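A minimal sketch of zero-shot detection with the Grounding DINO integration available in recent transformers releases. The IDEA-Research/grounding-dino-tiny checkpoint, the local image file street.jpg, and the threshold values are assumptions for illustration.

```python
# Detecting objects named in a free-form text prompt with Grounding DINO.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
model = AutoModelForZeroShotObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")

image = Image.open("street.jpg")
# Category names are separated by periods in the text prompt.
inputs = processor(images=image, text="a person. a bicycle. a traffic light.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids,
    box_threshold=0.35, text_threshold=0.25,
    target_sizes=[image.size[::-1]],
)
print(results[0]["labels"], results[0]["boxes"])
```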
YOLOv8 Object Detection
Ultralytics YOLOv8 is the latest version of the acclaimed real-time object detection and image segmentation model. YOLOv8 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy...
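A minimal sketch of running YOLOv8 detection with the ultralytics package. The yolov8n.pt nano checkpoint and the local image street.jpg are assumptions; heavier variants (yolov8s.pt through yolov8x.pt) work the same way.

```python
# Single-image object detection with a pretrained YOLOv8 nano model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")     # downloads the pretrained detector on first use
results = model("street.jpg")  # run inference on one image

for box in results[0].boxes:
    print(model.names[int(box.cls)], float(box.conf), box.xyxy.tolist())
```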
YOLOv8 Classification
Ultralytics YOLOv8 is the latest version of the acclaimed real-time object detection and image segmentation model. YOLOv8 is built on cutting-edge advancements in deep learning and computer vision, offering unparalleled performance in terms of speed and accuracy. The YOLOv8 image classification model is designed to detect 1000 pre-defined classes in images in real-time...
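A minimal sketch of YOLOv8 image classification, again via the ultralytics package. The yolov8n-cls.pt checkpoint (pretrained on the 1000 ImageNet classes) and the local file cat.jpg are assumptions.

```python
# Image classification with a pretrained YOLOv8 nano classifier.
from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")
results = model("cat.jpg")

probs = results[0].probs  # per-class probabilities
print(model.names[probs.top1], float(probs.top1conf))  # top-1 class and its confidence
```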
YOLOv7
YOLOv7 is the fastest and most accurate real-time object detection model for computer vision tasks. It was introduced to the YOLO family in July 2022 and set a new benchmark for real-time detection performance. The models are trained entirely on the COCO dataset.
YOLOv6
YOLOv6 is a single-stage object detection framework dedicated to industrial applications. The YOLOv6 Nano model achieves an mAP of 35.6% on the COCO dataset and runs at more than 1200 FPS on an NVIDIA Tesla T4 GPU with a batch size of 32.
Detectron2
Detectron2 is Facebook AI Research's next-generation library that provides state-of-the-art detection and segmentation algorithms. It includes implementations for the following object detection algorithms: Mask R-CNN, RetinaNet, Faster R-CNN, RPN, Fast R-CNN, TensorMask, PointRend, DensePose.
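A minimal sketch of instance segmentation with a Mask R-CNN model from the Detectron2 model zoo. The config path and the local file image.jpg are assumptions; other architectures from the zoo are loaded the same way.

```python
# Running a pretrained Mask R-CNN from the Detectron2 model zoo on one image.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # keep predictions above 50% confidence

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("image.jpg"))
print(outputs["instances"].pred_classes, outputs["instances"].pred_boxes)
```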
SAM (Segment Anything Model) 2D & 3D
SAM (Segment Anything Model) was released by Meta. It is an AI model that can segment any object in an image or video with a single click. SAM is trained on a massive dataset of over 1 billion masks, which allows it to generalize to new types of objects and images. SAM has learned a general understanding of what objects are...
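A minimal sketch of the single-click prompting described above, using Meta's segment_anything package. The ViT-H checkpoint filename sam_vit_h_4b8939.pth, the local image photo.jpg, and the click coordinates are assumptions.

```python
# Prompting SAM with one foreground click to get candidate masks.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive click at pixel (x=500, y=375); label 1 marks a foreground point.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # three candidate binary masks with confidence scores
```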
Meta MMS (Massively Multilingual Speech)
MMS supports speech-to-text (ASR) and text-to-speech for 1,107 languages and language identification for over 4,000 languages. It was released by Meta. The Massively Multilingual Speech models outperform existing models and cover 10 times as many languages.
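A minimal sketch of MMS speech-to-text via transformers. The facebook/mms-1b-all checkpoint, the local file speech.wav, and the French language code "fra" are assumptions; load_adapter switches the language-specific adapter weights.

```python
# Transcribing a 16 kHz audio clip with MMS (assumed checkpoint: facebook/mms-1b-all).
import librosa
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

# Switch tokenizer vocabulary and adapter weights to French; English would be "eng".
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

audio, _ = librosa.load("speech.wav", sr=16_000)  # hypothetical local recording
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```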