A "virtual satellite" to map Earth in unprecedented detail
A new AI model integrates petabytes of Earth observation data, stitching together actual satellite imagery, radar, climate simulations, and more to map Earth’s land and coastal waters.
This is a super cool new model from Google DeepMind!
We’re introducing a new way to analyze the planet. Google’s Satellite Embedding dataset uses the power of AI to pack a year’s worth of multi-source satellite data into every single 10-meter pixel, enabling faster and more powerful geospatial analysis. Welcome to the future of deep learning in Earth Engine.
The model enables a kind of "virtual satellite" that can be used to see what's going on all over the planet, at 10-meter resolution.
This first-of-its-kind dataset was generated using AlphaEarth Foundations, Google DeepMind’s new geospatial AI model that assimilates observations across diverse sources of geospatial information, including optical and thermal imagery from Sentinel-2 and Landsat satellites, radar data that can see through clouds, 3D measurements of surface properties, global elevation models, climate information, gravity fields, and descriptive text. Unlike traditional deep learning models that require users to fine-tune weights and run their own inference on clusters of high-end computers, AlphaEarth Foundations was designed to produce information-rich, 64-dimensional geospatial “embeddings” that are suitable for use with Earth Engine’s built-in machine learning classifiers and other pixel-based analysis.
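To make that last point concrete, here is a minimal sketch of how the embeddings might be used with Earth Engine's built-in classifiers via the Python API. The dataset ID, band count handling, location, and the `users/example/landcover_labels` training asset are illustrative assumptions, not confirmed details from the announcement; check the Earth Engine Data Catalog for the published dataset ID and band names.

```python
import ee

ee.Initialize()

# Load one annual embedding image (dataset ID assumed for illustration)
# and narrow it to a year and an example location.
embeddings = ee.Image(
    ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")  # assumed ID
    .filterDate("2024-01-01", "2025-01-01")
    .filterBounds(ee.Geometry.Point([-122.33, 47.61]))  # example point
    .first()
)

# Hypothetical labeled points with a "landcover" property.
training_points = ee.FeatureCollection("users/example/landcover_labels")

# Sample the 64 embedding bands at the labeled points; the pixels are 10 m.
samples = embeddings.sampleRegions(
    collection=training_points,
    properties=["landcover"],
    scale=10,
)

# Feed the embedding bands straight into a built-in classifier,
# no model fine-tuning or external inference cluster required.
classifier = ee.Classifier.smileRandomForest(50).train(
    features=samples,
    classProperty="landcover",
    inputProperties=embeddings.bandNames(),
)

classified = embeddings.classify(classifier)
```

The appeal of this workflow is that the heavy lifting (fusing optical, radar, elevation, and climate signals into 64 numbers per pixel) has already been done by AlphaEarth Foundations, so the user-facing step reduces to ordinary pixel-based classification inside Earth Engine.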