FieldDay

Create custom vision AI apps, using just your phone

FieldDay lets anyone create vision AI. Collect a custom dataset based on your unique expertise and perfect the model through iterative training, right on your phone. Once it works, integrate it with your favorite tool or workflow.

Top comment

We believe the camera is one of the most overlooked productivity platforms of our age. Between Google Lens, QR code scanning, mobile AR, and emerging platforms like Apple Vision Pro and the Humane Ai Pin, cameras have proven themselves to be an excellent tool beyond taking photos of loved ones. However, we believe camera computing has been held back by a lack of approachable tools for subject-matter experts, enthusiasts, and hackers to build out the ecosystem of camera apps. That's why we are excited to introduce FieldDay today: the first step on our mission to build the IDE for camera computing.

Through advances in mobile AR and AI, cameras today understand the world in walls, floors, and hands. But getting a camera to do anything truly custom still requires machine learning engineering. To enable anyone to build for camera computing, training custom vision AI should feel like authoring, not machine learning engineering:

→ At the core of FieldDay is data collection: creating a custom dataset is a matter of minutes. FieldDay is mobile-first because we believe iteration is key to building production-grade vision AI, and we needed to tighten the loop between data collection, training, and testing. There is no better way to build these systems than in the field, and we are the first and only tool that lets you do that. We look forward to bringing collaboration to this experience in the near future as well.

→ After you've trained your first model, live feedback appears directly on the viewfinder, and if you spot a mistake, you can interactively correct the model right in place. Within your first 15 minutes you will have trained multiple models and seen your vision AI improve rapidly.

→ Lastly, we currently support a select set of platforms (Snap Lens Studio, WebAR/Niantic 8thWall, and SwiftUI) and all industry-standard formats, to make it as easy as possible to take your model to your favourite platform (see the sketch after this comment for one way that integration can look).

So far, creators in the AR space building for platforms like Snap Lens Studio and Niantic 8thWall have used FieldDay to rapidly build custom experiences. I also built an AI that detects British street furniture design icons from scratch in just one weekend, all in the field.

Our team has been excited about camera computing ever since we launched Anonymous Camera, so we cannot wait to hear your feedback and see what you will build with it. 📸🤖
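
To give a concrete sense of that last integration step, here is a minimal sketch of running an exported model on a camera frame in a Swift app. It assumes the model is exported in Core ML format (one of several industry-standard formats) and added to an Xcode project, where Xcode generates a model class. The class name StreetFurnitureClassifier, the labels, and the confidence threshold are hypothetical illustrations, not FieldDay's actual output or API.

```swift
import CoreGraphics
import CoreML
import Vision

// Hypothetical integration sketch: classify a single camera frame with a
// custom model exported in Core ML format. "StreetFurnitureClassifier" is a
// placeholder for the Xcode-generated class of the exported .mlmodel.
func classify(_ frame: CGImage) throws -> [String] {
    // Wrap the compiled Core ML model for use with the Vision framework.
    let mlModel = try StreetFurnitureClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: mlModel)

    // Build a classification request; Vision scales and crops the frame
    // to the model's expected input size.
    let request = VNCoreMLRequest(model: visionModel)
    request.imageCropAndScaleOption = .centerCrop

    // Run the request on the frame.
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try handler.perform([request])

    // Keep only reasonably confident labels (threshold chosen arbitrarily here).
    let observations = request.results as? [VNClassificationObservation] ?? []
    return observations
        .filter { $0.confidence > 0.7 }
        .map { "\($0.identifier) (\(Int($0.confidence * 100))%)" }
}
```

In a live app, the same request would typically be driven from an AVCaptureSession's frame callback so labels update on the viewfinder, mirroring the in-the-field feedback loop described above.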