Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has recently unveiled its latest and most ambitious AI endeavor to date. Named "Gemini", it is described as "the most capable and general model" the company has ever built.

According to Demis Hassabis, CEO and co-founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project in May 2023 at Google I/O. Since then, Gemini has drawn plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language model (MLLM) outperforms comparable models on most academic benchmarks, such as MMLU and GSM8K.

Speaking on the impact Gemini will have on the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company."

Currently, Google is integrating Gemini Pro into many of its products, including Bard and the Google Pixel. Gemini Ultra is available only to selected individuals and experts "for early experimentation and feedback."

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from text prompts such as "a dog driving a car wearing funny glasses". Lumiere can also create videos from existing photos, using text as a guide.

Google also demonstrates the AI's capacity for stylized generation, in which it takes a photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has unveiled its latest and most ambitious AI endeavor yet. Designated "Gemini", it is "the most capable and general model" the company has built.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project in May 2023 at Google I/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language model (MLLM) exceeds the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.

Speaking on the impact Gemini could make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company."

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to selected individuals and experts "for early experimentation and feedback".

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), "Thrilled to announce "Lumiere" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from text prompts such as "a dog driving a car wearing funny glasses". Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrates the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.


Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives", stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex code, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand it to multiple languages.

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated "Gemini", it is "the most capable and general model" the company has built.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project in May 2023 at Google I/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language model (MLLM) outperforms comparable models on most academic benchmarks, such as MMLU and GSM8K.

Speaking on the impact Gemini will have on the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company."

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to select individuals and experts "for early experimentation and feedback".
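For context, benchmarks like GSM8K are typically scored by exact-match accuracy: a final answer is extracted from each model completion and compared with the reference. The sketch below illustrates that scoring convention only; the `extract_final_answer` helper, regex, and sample data are illustrative assumptions, not Google's evaluation code.

```python
import re

def extract_final_answer(completion):
    # GSM8K-style completions conventionally end with "#### <number>".
    match = re.search(r"####\s*(-?[\d,]+(?:\.\d+)?)", completion)
    return match.group(1).replace(",", "") if match else None

def exact_match_accuracy(completions, references):
    # Fraction of items whose extracted final answer equals the reference string.
    hits = sum(extract_final_answer(c) == ref for c, ref in zip(completions, references))
    return hits / len(references)

sample_completions = [
    "48 clips in April and 24 in May, so 48 + 24 = 72. #### 72",
    "The total is #### 11",
]
sample_references = ["72", "10"]
print(exact_match_accuracy(sample_completions, sample_references))  # 0.5
```

Published benchmark numbers additionally depend on prompting choices (few-shot examples, chain-of-thought), which is why scores for the same model can vary between reports.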














Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be folded into Gemini, Google's most powerful series of AI models to date.

Gemini is a series of multimodal large language models (LLMs) released late last year. It was announced in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google released Gemini Pro 1.0 last year; Bard will now be powered by Gemini Ultra 1.0.

This latest iteration of Gemini Ultra is also called Gemini Advanced, and Google claims it is the company's "largest and most capable state-of-the-art AI model".

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives," stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 countries and territories, with plans to expand to more languages.




The rapid advancement of generative AI has raised many safety and ethical concerns. Google has addressed this by stating, "We're also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications". The toolkit includes safety classifiers, a debugging tool, and general guidelines for building responsible AI applications.


A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also turn existing photos into videos, using text as a guideline.

Google also demonstrates the model’s capacity for stylized generation, where it takes any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video-generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has unveiled its latest and most ambitious AI endeavor yet. Named “Gemini”, it is “the most capable and general model” the company has built.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, “Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.” Google first announced the project in May 2023 at Google I/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI’s GPT-4.

According to Hassabis, Gemini “was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.”

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.

Speaking on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, “This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company”.

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to selected individuals and experts “for early experimentation and feedback”.



Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is only available to select individuals and experts “for early experimentation and feedback”.


American tech giant Google has recently unveiled Gemma, a “family of lightweight, state-of-the-art open models”. The models were developed by Google DeepMind with the help of multiple teams at Google.

“Today, we’re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly”, the company stated in a press release.

Gemma is built on the same technology as Gemini, Google’s “largest and most capable AI model”. The models come in two weight sizes, Gemma 2B and Gemma 7B, each with pre-trained and instruction-tuned variants.

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with “ready-to-use Colab and Kaggle notebooks”. The models also offer broad cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and cloud.

See Related: Polygon Teams Up With Google Cloud To Advance Web 3

Google’s Collaboration With NVIDIA

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, part of Google’s collaboration with NVIDIA.

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, “We’re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications”. The toolkit includes safety classifiers, a debugging tool, and general guidelines for building responsible AI applications.

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be incorporated into Gemini, Google’s most powerful series of AI models to date.

Gemini is a series of multimodal large language models (LLMs) released late last year in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard is being integrated with Gemini Ultra 1.0.

This latest iteration of Gemini Ultra is also called Gemini Advanced, and Google claims it is the company’s “largest and most capable state-of-the-art AI model”.

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

“Today we’re launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives”, stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex code, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand to multiple languages.

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), “Thrilled to announce "Lumiere" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also turn existing photos into videos, using text as a guideline.

Google also demonstrates the AI’s capacity for stylized generation, where it uses a photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be folded into Gemini, Google's most powerful family of AI models to date.

Gemini is a family of multimodal large language models (LLMs) released late last year. It was announced in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google released Gemini Pro 1.0 last year; Bard will now run on Gemini Ultra 1.0.

This Ultra-powered experience is called Gemini Advanced, and Google claims it is the company's "largest and most capable state-of-the-art AI model".

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives," stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in more than 150 countries and territories, with plans to expand to more languages.

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also turn existing photos into videos, guided by text.

Google also demonstrates the AI's capability for stylized generation, where it takes a reference photo and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has unveiled its latest and most ambitious AI endeavor yet. Named "Gemini", it is "the most capable and general model" the company has built.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project in May 2023 at Google I/O. Since then, Gemini has drawn plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) outperform similar models on most academic benchmarks, such as MMLU and GSM8K.

Speaking positively about the impact Gemini will have on the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company."

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to select individuals and experts "for early experimentation and feedback".


Google's Latest AI Can Play Video Games With You While Following Your Commands

"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. 'turn left'), object interaction ('climb the ladder'), and menu use ('open the map'). We've trained SIMA to perform simple tasks that can be completed within about 10 seconds," DeepMind noted in its blog.

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA combines pre-trained vision models with a learning system and a memory, producing keyboard and mouse outputs.

SIMA is steadily progressing toward mastering the games it plays and adapting to new ones, and it may eventually even learn to talk, much like AI NPCs.

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be incorporated into Gemini, Google's most powerful series of AI models to date.

Gemini is a series of multimodal large language models (MLLMs) released late last year. It was announced in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google shipped Gemini Pro 1.0 last year; Bard is now being upgraded to Gemini Ultra 1.0.

This latest iteration, powered by Gemini Ultra, is called Gemini Advanced, and Google claims it is the company's "largest and most capable state-of-the-art AI model".

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives", stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand to more languages.

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): "Thrilled to announce "Lumiere" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also turn existing photos into videos, using text prompts as a guideline.

Google also demonstrates the AI's ability to perform stylized generation, using any photo as a reference to create a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has recently unveiled its latest and most ambitious AI endeavor yet. Named "Gemini", it is "the most capable and general model" the company has built.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project in May 2023 at Google I/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.

Speaking on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company".

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to select individuals and experts "for early experimentation and feedback".





Google's Latest AI Can Play Video Games With You While Following Your Commands

Google collaborated with eight game developers, who plugged SIMA into games like No Man's Sky, Teardown, Valheim, and Goat Simulator 3 to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA is unlike models such as ChatGPT and Gemini: although those models are trained on large datasets, they still require human assistance, whereas SIMA is trained to operate on its own.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

SIMA Gaming Skills

"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. "turn left"), object interaction ("climb the ladder"), and menu use ("open the map"). We've trained SIMA to perform simple tasks that can be completed within about 10 seconds", DeepMind mentioned in its blog post.

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs.

SIMA is making steady progress toward mastering the games it knows and adapting to new ones, and it may eventually learn to talk like an AI NPC.

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games alongside you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than a typical AI: it can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\".<\/em><\/p>\n\n\n\n

Google collaborated with eight game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3 to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although those models are trained on large datasets, they still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform nearly 1,500 in-game tasks. The agent pairs pre-trained vision models with a main model that includes a memory, and it interacts with games purely through keyboard and mouse outputs.<\/p>\n\n\n\n
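The design described above amounts to a perception-action loop: screen pixels and a text instruction go in, keyboard and mouse events come out. The sketch below is an illustrative mock, not DeepMind's code; `VisionEncoder`, `PolicyWithMemory`, and the action names are hypothetical stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class VisionEncoder:
    """Hypothetical stand-in for SIMA's pre-trained vision models."""
    def encode(self, frame: bytes) -> list[float]:
        # Toy "embedding": a byte-average instead of a real model.
        return [sum(frame) / max(len(frame), 1)]

@dataclass
class PolicyWithMemory:
    """Hypothetical stand-in for the policy with memory."""
    memory: list = field(default_factory=list)

    def act(self, embedding: list[float], instruction: str) -> str:
        # Remember recent observations so behaviour can depend on history.
        self.memory.append((tuple(embedding), instruction))
        # Toy rule: map instruction keywords to keyboard/mouse actions.
        if "left" in instruction:
            return "press:A"
        if "map" in instruction:
            return "press:M"
        return "mouse:click"

def agent_step(encoder: VisionEncoder, policy: PolicyWithMemory,
               frame: bytes, instruction: str) -> str:
    """One perception-action step: frame + text in, input event out."""
    return policy.act(encoder.encode(frame), instruction)

encoder, policy = VisionEncoder(), PolicyWithMemory()
print(agent_step(encoder, policy, b"\x00\x01", "turn left"))    # press:A
print(agent_step(encoder, policy, b"\x02\x03", "open the map"))  # press:M
```

The point of the sketch is the interface, not the logic: because the agent only sees pixels and emits generic input events, the same loop can be plugged into any game without game-specific APIs.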

SIMA is steadily progressing towards mastering the games it knows and adapting to new ones, and the prospect of it eventually learning to converse, like an AI-driven NPC, remains open.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, with each size available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also offer extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and the cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in three different sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard is being folded into the Gemini brand, with a premium tier built on Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>, stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand it to additional languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for Lumiere, its new AI model designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter), \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the new model\u2019s capabilities. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can create videos from existing photos, using text prompts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s capacity for stylized generation, in which it takes a reference photo and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models because it uses a \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d.<\/p>\n\n\n\n
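The quoted design point, producing every frame of the clip in a single pass rather than generating keyframes and temporally upsampling them, can be illustrated with simple shape bookkeeping. The snippet below is a schematic mock with made-up downsampling factors, not Lumiere's architecture; it only shows how a space-time U-Net encoder compresses both the temporal and spatial axes of a (frames, height, width) volume.

```python
def spacetime_unet_shapes(frames: int, height: int, width: int,
                          levels: int = 3) -> list[tuple[int, int, int]]:
    """Track (frames, height, width) through a toy space-time U-Net encoder.

    Unlike a purely spatial U-Net, each level halves the temporal axis as
    well as the spatial axes, so the bottleneck sees the whole clip at a
    coarse space-time resolution. Factors are illustrative only.
    """
    shapes = [(frames, height, width)]
    for _ in range(levels):
        frames = max(frames // 2, 1)
        height = max(height // 2, 1)
        width = max(width // 2, 1)
        shapes.append((frames, height, width))
    return shapes

# An 80-frame clip at 128x128: the bottleneck still spans the full
# duration at reduced temporal resolution, rather than isolated keyframes.
print(spacetime_unet_shapes(80, 128, 128))
```

Because the bottleneck covers the entire clip at once, the model can reason about global motion, which is the property the paper credits for the coherence of the generated videos.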

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini \u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n
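For context, benchmarks like MMLU are multiple-choice test sets scored by plain accuracy. A minimal sketch of that scoring logic (the questions and model predictions below are invented for illustration; this is not Google's evaluation code or data):

```python
# Toy illustration of multiple-choice benchmark scoring (MMLU-style).
# Real benchmarks contain thousands of items across many subjects;
# these two items and the "model" predictions are made up.
items = [
    {"question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": "B"},
    {"question": "Capital of France?", "choices": ["Paris", "Rome", "Oslo", "Bern"], "answer": "A"},
]
predictions = ["B", "A"]  # option letters a model chose for each item

# Accuracy is simply the fraction of items answered correctly.
correct = sum(pred == item["answer"] for pred, item in zip(predictions, items))
accuracy = correct / len(items)
print(f"accuracy = {accuracy:.0%}")  # prints "accuracy = 100%"
```

The headline benchmark numbers cited for models like Gemini are aggregates of exactly this kind of per-item accuracy.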

Speaking on the impact Gemini could have on the AI industry and the potential it holds, Google CEO Sundar Pichai said, \u201cThis new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is only available to select individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n


On March 13, Google DeepMind<\/a> announced SIMA (Scalable Instructable Multiworld Agent), its latest AI agent, which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with eight game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3 to train the agent and then test its capabilities. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although those models are trained on large datasets, they still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA consists of a learning system that pairs pre-trained vision models with a memory, producing keyboard and mouse outputs.<\/p>\n\n\n\n
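That interface - natural-language instructions in, keyboard-and-mouse actions out - can be caricatured in a few lines. This toy sketch is purely illustrative: SIMA learns its behavior from human gameplay data, not from a hand-written lookup table, and the instruction-to-action mappings below are invented:

```python
from collections import deque

# Toy instruction-following agent in the spirit of SIMA's interface:
# it receives a natural-language instruction, keeps a bounded memory of
# recent instructions, and emits keyboard/mouse actions.
# The mapping table is invented for illustration only.
ACTION_TABLE = {
    "turn left": [("key", "a")],
    "climb the ladder": [("key", "w"), ("key", "space")],
    "open the map": [("key", "m")],
}

class ToyAgent:
    def __init__(self, memory_size=10):
        # Short-term memory of recent instructions, capped at memory_size.
        self.memory = deque(maxlen=memory_size)

    def act(self, instruction):
        self.memory.append(instruction)
        # Unknown instructions fall back to a no-op instead of failing.
        return ACTION_TABLE.get(instruction.lower(), [("noop", None)])

agent = ToyAgent()
print(agent.act("Open the map"))  # [('key', 'm')]
print(agent.act("turn left"))     # [('key', 'a')]
```

The real agent replaces the lookup table with a learned policy conditioned on pixels and language, but the input/output contract is the same.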

SIMA is steadily progressing toward mastering the games it knows and adapting to new ones, and the prospect of it eventually learning to talk, like AI-driven NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, each offered in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also provide extensive cross-device compatibility, working across laptops, desktops, IoT devices, mobile, and the cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, the result of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. It was announced in three sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google released Gemini Pro 1.0 last year; now Bard will be integrated into Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>, stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand to more languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n


\n

\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to apply them. Google agreed to pay French media for using their articles and news in search results. In 2022, Google made a further commitment to offer news publishers a transparent payment proposal within three months of receiving a copyright claim.<\/p>\n\n\n\n

Regulators found that Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini, and failed to give publishers a proper way to object to the use of their content.<\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> to address the identified failings and resolve this long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own without it.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA pairs pre-trained vision models with a main model that includes a memory and outputs keyboard and mouse actions. <\/p>\n\n\n\n
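As a rough, purely illustrative sketch of that interface - an image observation plus a text instruction in, keyboard-and-mouse actions out - such an agent loop might look like the toy below. All names and rules here are hypothetical; this is not DeepMind's implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Observation:
    frame: bytes       # raw screen pixels captured from the game
    instruction: str   # natural-language command, e.g. "open the map"

@dataclass
class Action:
    keys: List[str]                # keyboard presses to emit
    mouse: Tuple[int, int, bool]   # (dx, dy, click)

class ToyAgent:
    """Illustrative only: mirrors the image+text in, keyboard/mouse out
    shape described in the article, not DeepMind's actual system."""

    def __init__(self) -> None:
        self.memory: List[str] = []  # recent instructions act as a crude memory

    def act(self, obs: Observation) -> Action:
        self.memory.append(obs.instruction)
        # A real agent would run vision and language models here; this toy
        # version just keys off a word in the instruction.
        if "map" in obs.instruction.lower():
            return Action(keys=["m"], mouse=(0, 0, False))
        return Action(keys=[], mouse=(0, 0, False))

agent = ToyAgent()
action = agent.act(Observation(frame=b"", instruction="open the map"))
print(action.keys)  # ['m']
```

The point of the sketch is the input/output contract - screen frames and instructions in, low-level keyboard and mouse actions out - which is what lets one agent drive many different games.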

SIMA is steadily progressing toward mastering the games it already plays and adapting to new ones, and it may eventually even learn to talk, like AI-driven NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, each available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma ships with \u201cReady-to-use Colab and Kaggle notebooks\u201d, and the models offer extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and cloud.<\/p>\n\n\n\n
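For a sense of what \u201cinstruction-tuned\u201d means in practice, here is a minimal sketch of the turn-based prompt format Gemma's chat variants expect. The template is reproduced from memory and the helper name is ours; verify both against the official Gemma model card before relying on them.

```python
def build_gemma_prompt(user_message: str) -> str:
    # Gemma's instruction-tuned checkpoints delimit conversation turns
    # with start/end-of-turn markers; the model then completes the
    # "model" turn. Template assumed here - check the official docs.
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_gemma_prompt("Summarize this article in one sentence."))
```

A pre-trained (non-instruction-tuned) checkpoint, by contrast, would simply be given raw text to continue, with no turn markers.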

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) that was released late last year. Gemini was announced in 3 different sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard is being integrated with Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can turn existing photos into videos, using text as guidance.<\/p>\n\n\n\n

Google also demonstrated the AI\u2019s capacity for stylized generation, where it takes any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. It was announced in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard will be integrated into Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter), \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the new model\u2019s capabilities. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Lumiere can also make videos from existing photos, using text as guidance.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a strong competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini \u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

The Competition Authority fined Google for failing to comply with four of the seven binding commitments under decision 22-D-13 of June 21, 2022.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to put them into practice. Google agreed to pay French media outlets for using their articles and news in search results. In 2022, Google made a further commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to give publishers a proper way to object to Google's use of their content.<\/p>\n\n\n\n

In response to the identified failings, Google proposed remedial measures<\/a> to resolve this long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\".<\/em><\/p>\n\n\n\n

Google collaborated with eight game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3 to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA differs from AI models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is steadily progressing toward mastering gameplay and adapting to new games, and it may eventually learn to talk, much like AI NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n


The Google France blog stated, \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as a guide.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


French authorities have fined Google $270M (about 250M euros) for breaking its commitment to pay media outlets for the use of their content in search results and references. Reports also indicate that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers, covering almost 450 publications, under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros a year for those rights. <\/p>\n\n\n\n

The Google France blog stated, \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to honor four of the seven binding commitments under Decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was an early testing ground for the rules. Google agreed to pay French media for using their articles and news in search. In 2022, Google made a further commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to give publishers a proper way to object to Google's use of their content. <\/p>\n\n\n\n

In response to the identified failings, Google proposed corrective measures<\/a> to resolve the dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can play games with you while following your commands. SIMA has been trained on a range of gaming skills so that it plays more like a human than a typical AI. It can follow natural-language instructions and perform assigned tasks across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with eight game developers, plugging SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the agent and then test its capabilities. Google DeepMind noted that SIMA differs from models such as ChatGPT and Gemini: although those models are trained on large datasets, they still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA pairs a learning system built on pre-trained vision models with a memory component, and it acts through keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is steadily progressing toward mastering gameplay and adapting to new games, and it may eventually even learn to talk, much like AI-driven NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, each released in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

The company has also released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also offer extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and the cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, a product of Google\u2019s collaboration with the chipmaker.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};




According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270 million (about \u20ac250 million) for breaking its commitments on paying media outlets for the use of their content in search results and references. A report also noted that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers, covering almost 450 publications, under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros each year for the use of copyrighted content.<\/p>\n\n\n\n

The Google France blog stated: \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to comply with four of the seven binding commitments under Decision 22-D-13 of June 21, 2022.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to put them into practice. Google agreed to pay French media for using their articles or news in search results. In 2022, Google made a further commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

According to the Competition Authority, Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini, without giving publishers a proper way to object to the use of their content.<\/p>\n\n\n\n

Google has since proposed corrective measures<\/a> to address the identified failings and resolve the dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI: it can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3, to train the agent and then test its capability. Google DeepMind notes that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA\u2019s ability to perform almost 1,500 in-game tasks. SIMA pairs a learning system built on pre-trained vision models with a memory component, and it produces keyboard and mouse outputs.<\/p>\n\n\n\n
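An agent of this shape maps a natural-language instruction plus recent screen frames to keyboard-and-mouse actions. A toy sketch of that loop, with all class names and skill-to-key mappings hypothetical (this is not DeepMind's API, only an illustration of the instruction-to-action idea):

```python
from dataclasses import dataclass, field

@dataclass
class ToyGameAgent:
    """Toy stand-in for an instruction-following game agent (illustrative only)."""
    memory: list = field(default_factory=list)  # short history of recent frames

    def act(self, instruction: str, frame) -> list:
        # Remember the latest observation, keeping only a short history.
        self.memory.append(frame)
        self.memory = self.memory[-8:]
        # A real agent would run pre-trained vision models and a learned policy;
        # this stub maps a few of the article's example skills to key presses.
        skill_to_keys = {
            "turn left": ["KEY_A"],
            "climb the ladder": ["KEY_W", "KEY_SPACE"],
            "open the map": ["KEY_M"],
        }
        return skill_to_keys.get(instruction, ["NOOP"])

agent = ToyGameAgent()
actions = agent.act("turn left", frame="frame_0")  # -> ["KEY_A"]
```

The point of the sketch is the interface, not the policy: the same `act(instruction, frame)` signature covers all 600 skills, with the learned model replacing the lookup table.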

SIMA is steadily progressing toward mastering game playing and adapting to new games, and the prospect of it eventually learning to talk, like AI-driven NPCs, remains open.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, with each size available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n
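The two weight sizes translate directly into hardware requirements. As a rough rule of thumb (my arithmetic, not a figure from Google), the memory needed just to hold a model's weights is parameter count times bytes per parameter:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate memory to hold the weights alone (fp16 = 2 bytes/param)."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

gemma_2b_fp16 = weight_memory_gb(2)  # roughly 3.7 GB
gemma_7b_fp16 = weight_memory_gb(7)  # roughly 13 GB
```

This is why the 2B variant targets laptops and mobile-class hardware while 7B is more comfortable on a desktop GPU; quantizing to 1 byte per parameter halves these figures.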

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cready-to-use Colab and Kaggle notebooks\u201d, and the models offer extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and in the cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced with 3 different models - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard will be integrated with Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available on both Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions, with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can create videos from existing photos, using text as guidance.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n
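That claim contrasts with pipelines that generate distant keyframes and then fill in the time between them. Processing a clip jointly in space and time can be pictured with simple pooling over a whole (frames, height, width) volume at once; a toy NumPy sketch of such space-time down/upsampling (illustrative only, not Google's architecture):

```python
import numpy as np

def spacetime_downsample(video, t=2, s=2):
    """Average-pool a (T, H, W) video jointly over time and space."""
    T, H, W = video.shape
    v = video[: T - T % t, : H - H % s, : W - W % s]  # crop to whole blocks
    Tc, Hc, Wc = v.shape
    return v.reshape(Tc // t, t, Hc // s, s, Wc // s, s).mean(axis=(1, 3, 5))

def spacetime_upsample(video, t=2, s=2):
    """Nearest-neighbour upsample over time and space."""
    return video.repeat(t, axis=0).repeat(s, axis=1).repeat(s, axis=2)

clip = np.random.rand(8, 32, 32)        # 8 frames of 32x32 pixels
coarse = spacetime_downsample(clip)     # (4, 16, 16): the whole clip, coarsened at once
restored = spacetime_upsample(coarse)   # back to (8, 32, 32)
```

A U-Net stacks learned versions of these down/up stages with convolutions in between; the cited design's distinguishing point is that the temporal axis is pooled and restored inside the network rather than handled frame by frame.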

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d<\/em><\/strong> Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n
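Benchmarks such as MMLU and GSM8K ultimately reduce a model's graded answers to an accuracy score. A minimal scoring sketch over hypothetical multiple-choice answers (not Google's evaluation harness):

```python
def accuracy(predictions, references):
    """Fraction of exact-match answers, the core metric behind MMLU-style scores."""
    assert len(predictions) == len(references)
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical model answers vs. gold answers: 3 of 4 correct.
score = accuracy(["B", "C", "A", "D"], ["B", "C", "D", "D"])
```

Real harnesses add prompt formatting, answer extraction, and per-subject averaging on top, but the headline numbers compared across models are of this form.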

Speaking positively about the impact Gemini will make on the AI industry and the potential it holds, Google CEO Sundar Pichai said, \u201cThis new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Lumiere can also make videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability to perform stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \u201cThis new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is only available to select individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from the preceding year and passes its state to the forecast unit. The forecast LSTM then combines this state with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
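As a rough illustration of this two-stage design, here is a minimal, self-contained sketch. The single-unit LSTM cell, the function names, and all weights are illustrative assumptions for clarity, not Google's actual model.

```python
import math

def lstm_step(x, h, c, w=0.5):
    # Minimal single-unit LSTM cell with one shared scalar weight (toy).
    f = 1 / (1 + math.exp(-(w * x + w * h)))   # forget gate
    i = 1 / (1 + math.exp(-(w * x + w * h)))   # input gate
    g = math.tanh(w * x + w * h)               # candidate cell state
    o = 1 / (1 + math.exp(-(w * x + w * h)))   # output gate
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def run_lstm(series, h=0.0, c=0.0):
    # Digest a whole input series into a final (hidden, cell) state.
    for x in series:
        h, c = lstm_step(x, h, c)
    return h, c

def forecast_streamflow(past_year, weather_next_7d):
    # Hindcast unit: compress ~a year of geophysical history into a state.
    h, c = run_lstm(past_year)
    # Forecast unit: start from the hindcast state and fold in the
    # 7-day weather forecast, emitting one streamflow estimate per day.
    preds = []
    for x in weather_next_7d:
        h, c = lstm_step(x, h, c)
        preds.append(h)
    return preds

preds = forecast_streamflow([0.1] * 365, [0.2] * 7)
print(len(preds))  # prints 7: one estimate per forecast day
```

The key design point mirrored here is that the hindcast network's final state, not its raw inputs, is what the forecast network conditions on.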

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about 250M euros) for breaking its commitment to pay media outlets for using their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

The Google France blog stated, \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to follow four of the seven obligatory commitments under Decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to trial them. Google agreed to pay French media for using their articles and news in searches. In 2022, Google made a new commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. Google also failed to provide publishers with a proper way to object to its use of their content. <\/p>\n\n\n\n

In response, Google has proposed corrective measures<\/a> to address the identified failings and resolve the dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than a typical AI. It can follow natural language instructions and perform assigned tasks across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with eight game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the AI agent and then test its capability. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, while SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n
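To make the instruction-to-action idea concrete, here is a deliberately tiny, hypothetical sketch of the kind of loop such an agent runs. The lookup table stands in for SIMA's learned vision-and-policy stack; every name and mapping below is an illustrative assumption, not DeepMind's implementation.

```python
# Hypothetical sketch: map a natural-language instruction to
# keyboard/mouse actions. A real agent would run pre-trained vision
# models over the screen observation and a learned policy; this toy
# version ignores the observation and just looks the phrase up.
INSTRUCTION_TO_ACTIONS = {
    "turn left": [("key", "a")],
    "climb the ladder": [("key", "w"), ("key", "space")],
    "open the map": [("key", "m")],
}

def act(instruction, observation=None):
    # Unknown instructions fall back to doing nothing.
    return INSTRUCTION_TO_ACTIONS.get(instruction.lower(), [("noop", None)])

print(act("Open the map"))  # prints [('key', 'm')]
```

The point of the sketch is the interface, not the policy: the agent consumes plain language plus an observation and emits only the keyboard and mouse outputs the article describes.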

SIMA is steadily progressing towards mastering existing games and adapting to new ones, and it may eventually even learn to talk, like AI-driven NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, each available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also provide extensive cross-device compatibility, working across laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and sends it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced, and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex code, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for Lumiere, its new AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company released a trailer video showcasing some of the new model\u2019s capabilities. The AI can generate \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can create videos from existing photos, using text prompts as guidelines.<\/p>\n\n\n\n

Google also demonstrated the AI\u2019s ability for stylized generation, where it uses a reference photo to create a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its most ambitious AI endeavor yet. Designated \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};



French authorities have fined Google $270M (about \u20ac250M) for breaking its commitments to pay media outlets for the use of their content in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements, covering 280 French press publishers and almost 450 publications, under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly for the rights to their content. <\/p>\n\n\n\n

Google\u2019s France blog stated, \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to honor four of the seven binding commitments under Decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", giving print media the ability to demand compensation for the use of their content, a framework first put to the test in France. Google agreed to pay French media for using their articles and news in search results. In 2022, Google made a further commitment: to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to provide publishers with a proper mechanism to object to Google's use of their content. <\/p>\n\n\n\n

In response, Google proposed remedial measures<\/a> to address the identified failings and resolve the dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is steadily progressing towards mastering gameplay and adapting to new titles, and it may eventually even learn to talk, like AI-driven NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, with each size available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n
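For developers experimenting with the open weights, the four released variants follow a simple naming scheme on Hugging Face: google/gemma-2b and google/gemma-7b for the pre-trained checkpoints, with an "-it" suffix for the instruction-tuned ones. The helper below is a hypothetical convenience for composing those ids, not part of any Google tooling:

```python
# Illustrative sketch (not an official API): composing Hugging Face
# checkpoint ids for the four released Gemma variants -- 2B / 7B, each
# in a pre-trained and an instruction-tuned ("-it") flavor.

def gemma_checkpoint(size_b: int, instruction_tuned: bool = False) -> str:
    """Return the Hugging Face model id for a Gemma variant."""
    if size_b not in (2, 7):
        raise ValueError("Gemma 1.0 ships in 2B and 7B sizes only")
    suffix = "-it" if instruction_tuned else ""
    return f"google/gemma-{size_b}b{suffix}"

print(gemma_checkpoint(7, instruction_tuned=True))  # google/gemma-7b-it
```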

Additionally, the company has released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also provide extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, the result of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n


According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained on publicly available data such as soil attributes, streamflow gauge readings, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and passes the result to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
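The hindcast-to-forecast handoff described above can be sketched in toy form: one LSTM digests a year of history, then its final hidden and cell state seed a second LSTM that steps through the 7-day weather forecast. All shapes, feature names, and weights here are illustrative assumptions; Google's production model is far larger and trained on real gauge data.

```python
# Toy sketch of the two-stage LSTM handoff (illustrative shapes/weights).
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden size (made up for illustration)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    return np.tanh(c) * o, c

def run_lstm(inputs, h, c, params):
    for x in inputs:
        h, c = lstm_step(x, h, c, *params)
    return h, c

def make_params(in_dim):
    return (rng.normal(size=(4*H, in_dim)) * 0.1,
            rng.normal(size=(4*H, H)) * 0.1,
            np.zeros(4*H))

hindcast_params = make_params(in_dim=5)  # e.g. soil, rainfall, past streamflow
forecast_params = make_params(in_dim=3)  # e.g. forecast precipitation, temperature
readout = rng.normal(size=H) * 0.1       # hidden state -> streamflow value

history = rng.normal(size=(365, 5))      # ~1 year of daily observations
weather = rng.normal(size=(7, 3))        # 7-day weather forecast

# Hindcast unit digests history, then its state seeds the forecast unit,
# which emits one streamflow prediction per forecast day.
h, c = run_lstm(history, np.zeros(H), np.zeros(H), hindcast_params)
predictions = []
for x in weather:
    h, c = lstm_step(x, h, c, *forecast_params)
    predictions.append(float(readout @ h))

print(len(predictions))  # 7
```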

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 countries across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

Google Gemma: Google's New Family of State-of-the-Art Open Models

American tech giant Google has recently unveiled Gemma, a "family of lightweight, state-of-the-art open models". The models were developed by Google DeepMind together with multiple other teams at Google.

"Today, we're excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly," the company stated in a press release.

Gemma is built on the same technology as Gemini, Google's "largest and most capable AI model". The models come in two sizes, Gemma 2B and Gemma 7B, each available in pre-trained and instruction-tuned variants.

Additionally, the company has released several tools to help developers build new AI applications. Gemma ships with "ready-to-use Colab and Kaggle notebooks", and the models run across laptops, desktops, IoT devices, mobile, and the cloud.

See Related: Polygon Teams Up With Google Cloud To Advance Web 3

Google's Collaboration With NVIDIA

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, the result of Google's collaboration with NVIDIA.

The rapid advancement of generative AI has raised many safety and ethical concerns. Google has addressed this by stating, "We're also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications". The toolkit includes safety classifiers, a debugging tool, and general guidelines for building responsible AI applications.

Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be folded into Gemini, Google's most powerful series of AI models to date.

Gemini is a series of multimodal large language models (LLMs) released late last year. It was announced in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google released Gemini Pro 1.0 last year; Bard will now be powered by Gemini Ultra 1.0.

This latest iteration of Gemini Ultra is also called Gemini Advanced, and Google claims it is the company's "largest and most capable state-of-the-art AI model".

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives," stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex coding, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand it to multiple languages.

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also animate existing photos, using text as a guideline.

Google also demonstrates the AI's ability for stylized generation, where it takes a reference photo and creates a video in the same art style.

In the research paper, Google claims its model improves on existing video generation models by using a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.
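To make the "entire temporal duration at once" idea concrete, here is an illustrative numpy sketch (not Google's code) of space-time downsampling: the network sees the full video tensor and compresses the time axis jointly with the spatial axes, instead of processing frames one by one.

```python
import numpy as np

def st_downsample(video, t_factor=2, s_factor=2):
    """Average-pool a video tensor (T, H, W) jointly over time and space,
    the kind of space-time compression a Space-Time U-Net applies."""
    T, H, W = video.shape
    # Trim so every axis divides evenly by its pooling factor.
    v = video[: T - T % t_factor, : H - H % s_factor, : W - W % s_factor]
    t2, h2, w2 = v.shape[0] // t_factor, v.shape[1] // s_factor, v.shape[2] // s_factor
    # Group each axis into (blocks, block_size) and average over the block sizes.
    return v.reshape(t2, t_factor, h2, s_factor, w2, s_factor).mean(axis=(1, 3, 5))

video = np.arange(16 * 32 * 32, dtype=float).reshape(16, 32, 32)
out = st_downsample(video)  # the whole clip is processed in one call
print(out.shape)  # -> (8, 16, 16): time compressed along with space
```

The design point this illustrates is that the temporal axis is just another tensor dimension inside the network, which is how a single pass can produce a globally coherent clip.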

Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has recently unveiled its latest and most ambitious AI endeavor yet. Named "Gemini", it is "the most capable and general model" the company has built.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project in May 2023 at Google I/O. Since then, Gemini has drawn plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of comparable models on most academic benchmarks, such as MMLU and GSM8K.

Speaking positively on the impact Gemini will make on the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company".

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to selected individuals and experts "for early experimentation and feedback".


Google To Use AI In Forecasting Floods Worldwide

American tech giant Google has stepped forward with its initiative to use AI to forecast floods on a global scale. The company published a research paper in the scientific journal Nature highlighting AI's potential for saving lives and limiting damage in flood-affected areas. The AI models were developed by the team at Google Research.

According to the paper, AI-based hydrologic technologies can drastically improve flood forecasting even in areas with limited flood-related data. "We found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live," the paper claimed.

See Related: Bank of England's Journey Towards Better Economic Foresight

AI-based Hydrologic Technology

The hydrologic model was trained on publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks: a hindcast unit and a forecast unit. The hindcast unit analyzes more than a year of past geophysical data and passes its state to the forecast unit. The forecast LSTM then combines that state with the weather forecast for the next seven days to make streamflow predictions.

"Our goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods," Google stated.

As of 2024, Google's hydrologic model covers 80 countries across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.
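The hindcast-to-forecast handoff described above can be sketched with a toy, untrained LSTM in numpy. The feature counts, hidden size, and random weights below are illustrative assumptions, not Google's model; the structure, a past-data LSTM whose final state seeds a seven-step forecast LSTM, is the part that mirrors the paper's description.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W):
    """One step of a standard LSTM cell; W packs the four gate weight blocks."""
    z = W["in"] @ x + W["rec"] @ h + W["b"]
    i, f, g, o = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)  # update the cell memory
    h = sig(o) * np.tanh(c)               # emit the hidden state
    return h, c

def make_weights(n_in, n_hid):
    return {"in": rng.normal(0, 0.1, (4 * n_hid, n_in)),
            "rec": rng.normal(0, 0.1, (4 * n_hid, n_hid)),
            "b": np.zeros(4 * n_hid)}

n_hid = 16
hindcast_W = make_weights(n_in=5, n_hid=n_hid)   # 5 geophysical features/day (assumed)
forecast_W = make_weights(n_in=3, n_hid=n_hid)   # 3 weather features/day (assumed)
head = rng.normal(0, 0.1, n_hid)                 # linear streamflow readout

# Hindcast: run a year of past data and keep only the final state.
h = c = np.zeros(n_hid)
for day in rng.normal(size=(365, 5)):
    h, c = lstm_cell(day, h, c, hindcast_W)

# Forecast: start from the hindcast state, roll 7 days of weather forecasts.
streamflow = []
for day in rng.normal(size=(7, 3)):
    h, c = lstm_cell(day, h, c, forecast_W)
    streamflow.append(head @ h)

print(len(streamflow))  # one streamflow prediction per forecast day -> 7
```

Because the weights are random, the predicted values are meaningless; what the sketch shows is why the design works with sparse data: all catchment history is compressed into one state vector before the short forecast roll-out begins.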

French Regulators Fined Google $270M For Using News Publishers' Data

French authorities have fined Google $270 million (about 250 million euros) for breaking its commitments on paying media outlets for the use of their content in search results and references. Reports also say Google used publishers' data to train Gemini without informing the owners.

Google was the only platform to sign licensing agreements, covering 280 French press publishers and almost 450 publications, under the European Copyright Directive (EUCD), paying them tens of millions of euros yearly to cover the copyrights.

Google's France blog stated, "We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers."

The Competition Authority fined Google for failing to follow four of the seven binding commitments under decision 22-D-13 of June 21, 2022.

See Related: Coinbase Approved As Virtual Asset Provider in France

Neighboring Rights And Commitments

In 2019, the EU introduced "neighboring rights", which allow print media to demand compensation for the use of their content; France was an early testing ground. Google agreed to pay French media for using their articles and news in search. In 2022, Google made a further commitment to give news publishers a transparent payment offer within three months of receiving a copyright claim.

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini, and it failed to give publishers a proper way to object to Google's use of their content.

In response, Google has proposed corrective measures to address the identified failings and settle the long-running dispute.

Google's Latest AI Can Play Video Games With You While Following Your Commands

On March 13, Google DeepMind announced its latest AI agent, "SIMA" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.

Google DeepMind claims this is the first research of its kind: "This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might."

Google collaborated with 8 game developers, who plugged SIMA into games like No Man's Sky, Teardown, Valheim, and Goat Simulator 3, to train the agent and then test its capabilities. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, while SIMA is trained to operate on its own.






Meta Announces "Next Generation" AI Chip A Day After Intel And Google

Meta also indicates that it will continue to improve these chips, stating, "We currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads".


French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was an early testing ground for the rules. Google agreed to pay French media for using their articles and news in search results. In 2022, Google made a further commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to give publishers a proper way to object to Google's use of their content. <\/p>\n\n\n\n

In response to the identified failings, Google proposed corrective measures<\/a> to settle the long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games alongside you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI: it can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the agent and then test its capability. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, while SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA combines pre-trained vision models with a main model that includes memory, and it emits keyboard and mouse outputs to control the game. <\/p>\n\n\n\n
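The loop described above - a pre-trained vision encoder, a short memory, and keyboard-and-mouse outputs selected from a vocabulary of language-named skills - can be sketched as a toy agent. Everything here (the class, the skill table, the feature summary) is illustrative and hypothetical, not DeepMind's implementation:

```python
from collections import deque

# Hypothetical skill vocabulary, echoing the basic skills quoted above.
SKILLS = {
    "turn left":        {"device": "mouse",    "action": "move", "dx": -200, "dy": 0},
    "climb the ladder": {"device": "keyboard", "action": "hold", "key": "w"},
    "open the map":     {"device": "keyboard", "action": "press", "key": "m"},
}

class ToySima:
    """Toy agent: language instruction + screen observation -> key/mouse output."""
    def __init__(self, memory_len=10):
        self.memory = deque(maxlen=memory_len)  # short rollout memory

    def encode(self, frame):
        # Stand-in for a pre-trained vision model: just summarize the frame.
        return sum(frame) / len(frame)

    def act(self, frame, instruction):
        features = self.encode(frame)
        command = SKILLS.get(instruction.lower().strip())
        self.memory.append((features, instruction))  # remember what we saw and were told
        return command or {"device": "keyboard", "action": "noop"}

agent = ToySima()
action = agent.act(frame=[0.1, 0.5, 0.9], instruction="open the map")
print(action["key"])  # m
```

A real agent would replace the lookup table with a learned policy, but the interface - observations and instructions in, device actions out - is the point of the sketch.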

SIMA is steadily progressing toward mastering gameplay and adapting to new games, and it may eventually even learn to talk, much like AI-driven NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B, with each size offering pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also provide extensive cross-device compatibility, working on laptops, desktops, IoT devices, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in 3 different sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year, and Bard will now be integrated with Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n
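Benchmarks such as MMLU are multiple-choice suites scored by exact match against a reference answer, so the headline numbers reduce to simple accuracy. A minimal illustration (the two items below are invented, not real benchmark questions):

```python
# Hypothetical mini-benchmark in the MMLU style: each item has choices
# and the index of the correct one.
items = [
    {"question": "2 + 2 = ?", "choices": ["3", "4", "5"], "answer": 1},
    {"question": "Capital of France?", "choices": ["Paris", "Rome"], "answer": 0},
]

def accuracy(model_answers, items):
    """Fraction of items where the model picked the reference choice index."""
    correct = sum(1 for pred, item in zip(model_answers, items) if pred == item["answer"])
    return correct / len(items)

print(accuracy([1, 0], items))  # 1.0
```

Reported scores for models like Gemini come from far larger suites (MMLU alone has thousands of questions), but the scoring rule is this simple.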

Speaking on the potential Gemini holds and the impact it will make on the AI industry, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes they will pave the way for superior infrastructure and AI experiences. <\/p>\n\n\n\n
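Taking the article's figures at face value, the generational deltas work out to roughly 1.9x the internal memory and 1.7x the clock speed:

```python
# Quick check of the spec deltas, using the numbers as quoted in the article.
mem_new, mem_old = 124, 64        # MB of internal memory
clk_new, clk_old = 1.35, 0.800    # GHz clock speed

mem_ratio = mem_new / mem_old     # ~1.9x memory
clk_ratio = clk_new / clk_old     # ~1.7x clock
print(mem_ratio, clk_ratio)
```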

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to use AI to forecast floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential to save lives and limit damage in flood-affected areas. The AI models were developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and sends it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
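The hindcast-to-forecast handoff can be sketched with a toy pair of LSTM cells in NumPy: the hindcast unit digests the past year of geophysical data, and its final state seeds the forecast unit, which folds in the 7-day weather forecast. The weights and inputs below are random placeholders, not Google's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, N_FEATURES = 8, 4

def lstm_step(x, h, c, W):
    """One LSTM cell step: input/forget/output gates plus candidate state."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    c = f * c + i * np.tanh(g)
    h = o * np.tanh(c)
    return h, c

def make_weights(n_in):
    return rng.normal(scale=0.1, size=(4 * HIDDEN, n_in + HIDDEN))

W_hindcast, W_forecast = make_weights(N_FEATURES), make_weights(N_FEATURES)
W_out = rng.normal(scale=0.1, size=HIDDEN)

def predict_streamflow(past_days, forecast_days):
    # Hindcast unit: digest historical geophysical data into a hidden state.
    h, c = np.zeros(HIDDEN), np.zeros(HIDDEN)
    for x in past_days:
        h, c = lstm_step(x, h, c, W_hindcast)
    # Forecast unit: start from the hindcast state, fold in the weather forecast.
    preds = []
    for x in forecast_days:
        h, c = lstm_step(x, h, c, W_forecast)
        preds.append(float(W_out @ h))
    return preds

past = rng.normal(size=(365, N_FEATURES))   # e.g. soil, gauge, and weather features
future = rng.normal(size=(7, N_FEATURES))   # 7-day weather forecast
preds = predict_streamflow(past, future)
print(len(preds))  # 7 daily streamflow estimates
```

The handoff of the hidden state is the key design choice: it lets the forecast unit condition its 7-day predictions on a full year of context without reprocessing it.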

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes they will pave the way for superior infrastructure and a better AI experience. <\/p>\n\n\n\n
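As a quick sanity check on the figures quoted above, the generation-over-generation improvements can be expressed as simple ratios. The memory and clock values below come from the article; the derived multipliers are our own back-of-envelope arithmetic, not numbers Meta has published.

```python
# Spec values as quoted in the article (previous vs. new MTIA chip).
prev_chip = {"sram_mb": 64, "clock_mhz": 800}
next_chip = {"sram_mb": 124, "clock_mhz": 1350}  # 1.35 GHz

# Derived improvement ratios (our arithmetic, not Meta's figures).
sram_ratio = next_chip["sram_mb"] / prev_chip["sram_mb"]      # ~1.94x memory
clock_ratio = next_chip["clock_mhz"] / prev_chip["clock_mhz"]  # ~1.69x clock

print(f"internal memory: {sram_ratio:.2f}x, clock speed: {clock_ratio:.2f}x")
```

So the new chip carries roughly 1.9x the on-chip memory and about 1.7x the clock speed of its predecessor, consistent with Meta's broader "double the compute and memory bandwidth" claim.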

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential for saving lives and limiting damage in flood-affected areas. The AI models were developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained on publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and passes its state to the forecast unit. The forecast LSTM then combines that state with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
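The hindcast-to-forecast handoff described above can be sketched with a toy model. This is not Google's actual network: the feature count, hidden size, and random weights below are placeholders chosen only to illustrate how a year of past data is compressed into an LSTM state that then seeds seven days of streamflow predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Bare-bones LSTM cell over 1-D feature vectors (illustrative only)."""
    def __init__(self, n_in, n_hidden):
        self.n_hidden = n_hidden
        # Stacked weights for the input, forget, cell, and output gates.
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def run(self, xs, h=None, c=None):
        h = np.zeros(self.n_hidden) if h is None else h
        c = np.zeros(self.n_hidden) if c is None else c
        for x in xs:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return h, c

N_FEATURES, N_HIDDEN = 8, 16  # hypothetical sizes

# Hindcast: digest ~a year of daily basin data (soil, gauges, weather).
hindcast = TinyLSTM(N_FEATURES, N_HIDDEN)
past_year = rng.normal(size=(365, N_FEATURES))
h, c = hindcast.run(past_year)

# Forecast: combine that state with each of the next 7 days of weather
# and emit one streamflow estimate per day via a linear readout.
forecast = TinyLSTM(N_FEATURES, N_HIDDEN)
readout = rng.normal(0, 0.1, N_HIDDEN)
future_weather = rng.normal(size=(7, N_FEATURES))

predictions = []
for day in future_weather:
    h, c = forecast.run([day], h, c)
    predictions.append(float(readout @ h))

print(len(predictions))  # one streamflow prediction per forecast day
```

The key design point is visible in the loop: the forecast unit never sees the raw historical record, only the compact state the hindcast unit hands over, which is what makes seven-day-ahead prediction tractable.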

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about \u20ac250M) for breaking its commitments on paying media outlets for the use of their content in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly to cover copyright fees. <\/p>\n\n\n\n

The Google France blog stated: \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to comply with four of the seven obligatory commitments under Decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first to put them into practice. Google agreed to pay French media for using their articles or news in search results. In 2022, Google made a further commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to provide publishers with a proper mechanism to object to Google's use of their content. <\/p>\n\n\n\n

In response, Google has proposed remedial measures<\/a> to address the identified failings and settle the dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA is unlike AI models such as ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own without it.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA consists of a learning system built on pre-trained vision models and a memory, and it acts through keyboard and mouse outputs. <\/p>\n\n\n\n
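The interface described above, a natural-language instruction in, keyboard and mouse events out, can be illustrated with a toy dispatcher. Real SIMA learns this mapping end-to-end from pixels; the skill table and action encodings below are invented purely for illustration and correspond to nothing in DeepMind's system.

```python
# A few of the ~600 basic skills the article mentions, mapped to
# hypothetical low-level input events (tuples are our own encoding).
SKILLS = {
    "turn left":        [("mouse_move", -200, 0)],
    "climb the ladder": [("key_press", "w"), ("key_hold", "w", 2.0)],
    "open the map":     [("key_press", "m")],
}

def act(instruction):
    """Return the keyboard/mouse events for a natural-language instruction."""
    events = SKILLS.get(instruction.strip().lower())
    if events is None:
        raise ValueError(f"unknown skill: {instruction!r}")
    return events

print(act("Turn left"))
```

The point of the sketch is the contract, not the lookup table: whatever sits between instruction and action (here a dict, in SIMA a learned vision-language policy), the agent's only effectors are the same keyboard and mouse a human player would use.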

SIMA is steadily progressing toward mastering gameplay and adapting to new titles, and it may eventually even learn to talk, like AI-driven NPCs.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B, with each size offering pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also provide extensive cross-device compatibility, working on laptops, desktops, IoT devices, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, a result of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in 3 different sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard will be integrated with Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced, and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex code, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside the research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n
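The "generates the entire temporal duration at once" claim can be made concrete with a toy sketch. A Space-Time U-Net downsamples and upsamples the video in both space and time, treating the whole clip as one volume; the "network" below (average pooling and nearest-neighbour upsampling over a random array) is invented purely to show the shapes involved, and is in no way Lumiere's actual architecture.

```python
import numpy as np

def pool(v, f):
    """Average-pool a (T, H, W) volume by factor f along every axis."""
    t, h, w = (s // f for s in v.shape)
    return v[:t*f, :h*f, :w*f].reshape(t, f, h, f, w, f).mean(axis=(1, 3, 5))

def up(v, f):
    """Nearest-neighbour upsample a (T, H, W) volume by factor f per axis."""
    return v.repeat(f, 0).repeat(f, 1).repeat(f, 2)

clip = np.random.rand(16, 32, 32)    # all 16 frames enter together
bottleneck = pool(pool(clip, 2), 2)  # compressed in space AND time: (4, 8, 8)
out = up(up(bottleneck, 2), 2)       # every frame produced in one pass

print(bottleneck.shape, out.shape)
```

Contrast this with the keyframe-plus-temporal-super-resolution pipelines the paper positions itself against, which generate a few distant frames first and fill the gaps in separate passes; processing the full (time, height, width) volume at once is what the quoted claim is about.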

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d<\/em><\/strong> Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a strong competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \u201cThis new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

The first generation of Meta\u2019s AI chips was revealed last year and was called the Meta Training and Inference Accelerator v1 (MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes they will pave the way for superior infrastructure and better AI experiences. <\/p>\n\n\n\n
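Taken at face value, the figures quoted above imply the following generational gains. This is just back-of-the-envelope arithmetic on the article's numbers, not an official benchmark:

```python
# Back-of-the-envelope comparison using only the figures quoted above.
mtia_v1 = {"sram_mb": 64, "clock_ghz": 0.80}
mtia_next = {"sram_mb": 124, "clock_ghz": 1.35}

sram_gain = mtia_next["sram_mb"] / mtia_v1["sram_mb"]       # ~1.94x internal memory
clock_gain = mtia_next["clock_ghz"] / mtia_v1["clock_ghz"]  # ~1.69x clock speed

print(f"SRAM: {sram_gain:.2f}x, clock: {clock_gain:.2f}x")
```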

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and passes its summary to the forecast unit. The forecast LSTM then combines this summary with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
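The hindcast-then-forecast design described above can be sketched in miniature. This is an illustrative toy, not Google's implementation: the feature sizes, random weights, and the way the hindcast state seeds the forecast unit are all assumptions, with a single-layer NumPy LSTM standing in for the real networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM step; W projects the concatenated [x, h] onto the four gates."""
    z = W @ np.concatenate([x, h])
    i, f, g, o = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)   # update cell state
    h = sig(o) * np.tanh(c)                # emit hidden state
    return h, c

D_IN, D_H = 8, 16                                    # arbitrary feature/state sizes
W_hind = rng.normal(0, 0.1, (4 * D_H, D_IN + D_H))   # hindcast LSTM weights
W_fore = rng.normal(0, 0.1, (4 * D_H, D_IN + D_H))   # forecast LSTM weights
W_out = rng.normal(0, 0.1, (1, D_H))                 # streamflow readout

history = rng.normal(size=(365, D_IN))   # ~1 year of past geophysical data
weather = rng.normal(size=(7, D_IN))     # 7-day weather forecast

h, c = np.zeros(D_H), np.zeros(D_H)
for x in history:                        # hindcast unit summarizes the past...
    h, c = lstm_step(x, h, c, W_hind)

preds = []
for x in weather:                        # ...and hands its state to the forecast unit
    h, c = lstm_step(x, h, c, W_fore)
    preds.append((W_out @ h).item())     # one streamflow prediction per day

print(len(preds))                        # 7 daily predictions
```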

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270 million (about \u20ac250 million) for breaking its commitments on paying media outlets for the use of their content in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements, covering 280 French press publishers and almost 450 publications, under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

The Google France blog stated: \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to comply with four of the seven binding commitments under Decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first to put them into practice. Google agreed to pay French media for using their articles or news in searches. In 2022, Google made a new commitment to offer news publishers a transparent payment proposal within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to provide publishers with a proper mechanism to object to Google's use of their content. <\/p>\n\n\n\n

In response to the identified failings, Google proposed a set of measures<\/a> to resolve the long-running dispute.<\/p>\n\n\n\n

On March 13, Google DeepMind<\/a> announced its latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than a typical AI. It can follow natural language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3, to train the AI agent and then test its capability. Google DeepMind noted that SIMA is unlike models such as ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n
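Abstractly, an agent of this kind runs a perceive-instruct-act loop: encode the current frame, fuse it with the language instruction and a short memory of recent states, and emit a keyboard or mouse action. The skeleton below only illustrates that control flow; the encoder, the policy, and every name in it are invented stand-ins, not DeepMind's actual system.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    """Toy perceive-instruct-act loop (an illustrative stub, not SIMA)."""
    memory: deque = field(default_factory=lambda: deque(maxlen=8))
    actions = ("key:W", "key:A", "mouse:left_click", "key:E")

    def encode(self, frame, instruction):
        # Stand-in for pre-trained vision and language encoders.
        return (sum(frame) % 7, len(instruction.split()))

    def act(self, frame, instruction):
        state = self.encode(frame, instruction)
        self.memory.append(state)  # short-term memory of recent states
        # Stand-in policy: map the fused state to a keyboard/mouse output.
        return self.actions[(state[0] + state[1]) % len(self.actions)]

agent = ToyAgent()
print(agent.act(frame=[3, 1, 4], instruction="climb the ladder"))
```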

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, with each size available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n
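The release thus spans a 2x2 matrix of checkpoints: two sizes, each in a pre-trained and an instruction-tuned flavor. The helper below merely enumerates that matrix; the `google/gemma-*` identifier pattern mirrors the names published on Hugging Face, but treat the exact strings as an assumption rather than an official reference:

```python
def gemma_model_id(size: str, instruction_tuned: bool) -> str:
    """Build a Gemma checkpoint name; the '-it' suffix marks the
    instruction-tuned variant (naming pattern assumed, not official)."""
    if size not in {"2b", "7b"}:
        raise ValueError("Gemma 1.0 ships in 2B and 7B sizes only")
    return f"google/gemma-{size}" + ("-it" if instruction_tuned else "")

variants = [gemma_model_id(s, it) for s in ("2b", "7b") for it in (False, True)]
print(variants)
```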

The company has also released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also provide extensive cross-device compatibility, working across laptops, desktops, IoT, mobile, and cloud. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) that were released late last year. Gemini was announced with 3 different models - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated with Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter): \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using text as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a strong competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Metas\u2019 AI chips was revealed last year and was called Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16 <\/a>of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes this will pave the way for superior infrastructure and AI experience. <\/p>\n\n\n\n

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential for saving lives and limiting damage in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained on publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and passes its summary to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
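The hindcast-to-forecast hand-off described above can be caricatured in a few lines. This is a toy sketch, not Google's model: the real system passes a learned LSTM hidden state between the two networks, whereas here the "state" is just a baseline flow and a least-squares rainfall response, with all names invented for illustration.

```python
# Toy hindcast -> forecast hand-off. The hindcast summarizes past
# observations into a small state; the forecaster combines that state
# with the next seven days of weather to project streamflow.
def hindcast(past_flow, past_rain):
    """Compress historical observations into a state for the forecaster."""
    mean_flow = sum(past_flow) / len(past_flow)
    mean_rain = sum(past_rain) / len(past_rain)
    # least-squares slope: how strongly flow has responded to rainfall
    num = sum((r - mean_rain) * (f - mean_flow)
              for r, f in zip(past_rain, past_flow))
    den = sum((r - mean_rain) ** 2 for r in past_rain) or 1.0
    return {"mean_flow": mean_flow, "mean_rain": mean_rain, "slope": num / den}

def forecast(state, rain_next_7_days):
    """Combine the hindcast state with a 7-day weather forecast."""
    return [state["mean_flow"] + state["slope"] * (r - state["mean_rain"])
            for r in rain_next_7_days]

state = hindcast(past_flow=[10, 12, 11, 13], past_rain=[2, 3, 2, 3])
week = forecast(state, rain_next_7_days=[0, 0, 5, 5, 0, 1, 2])
print(week)  # [6.5, 6.5, 16.5, 16.5, 6.5, 8.5, 10.5]
```

The point of the split is that the expensive summary of a year of history is computed once, and only the cheap forecasting step needs the fresh 7-day weather input.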

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about 250M euros) for breaking its commitments on paying media outlets for the use of their content in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first to put them into practice. Google agreed to pay French media for using their articles or news in searches. In 2022, Google made a new commitment requiring it to give news publishers a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to give publishers a proper mechanism to object to Google's use of their content. <\/p>\n\n\n\n

In response, Google has proposed corrective measures<\/a> to address the identified failings and settle the long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills so that it plays more like a human than a typical AI. It can easily follow natural-language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3 to train this AI agent and then test its capability. Google DeepMind notes that SIMA is not like AI models such as ChatGPT and Gemini: although those models are trained on large datasets, they still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n
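The interface described above (screen pixels and a natural-language instruction in, keyboard and mouse actions out) can be caricatured as a lookup over basic skills. This toy dispatcher is purely illustrative: the skill names come from the quoted blog post, everything else is invented, and the real agent uses learned vision and language models rather than a table.

```python
# Toy caricature of SIMA's in/out contract: a natural-language instruction
# maps to keyboard/mouse actions. The real agent learns this mapping from
# video and text; this lookup table only illustrates the interface shape.
SKILLS = {
    "turn left":        [("key", "a")],
    "climb the ladder": [("mouse_move", "ladder"), ("key", "w")],
    "open the map":     [("key", "m")],
}

def act(instruction):
    """Return keyboard/mouse actions for a known basic skill."""
    actions = SKILLS.get(instruction.lower().strip())
    if actions is None:
        raise ValueError(f"unknown skill: {instruction!r}")
    return actions

print(act("Open the map"))  # [('key', 'm')]
```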

SIMA is steadily progressing towards mastering games and adapting to new ones, and the prospect of it eventually learning to talk, like AI NPCs, remains open.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two sizes: Gemma 2B and Gemma 7B, with each size available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

The company has also released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility, working across laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced with 3 different models - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>, stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter), \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as guidance.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini \u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \u201cThis new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. Besides that, the company is seeking partners to power a chatbot-like feature similar to OpenAI's ChatGPT to offer users a more conversational experience.<\/p>\n\n\n\n

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Meta\u2019s AI chips was revealed last year under the name Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes they will pave the way for superior AI infrastructure and experiences. <\/p>\n\n\n\n
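A quick back-of-the-envelope calculation (illustrative only, using the figures quoted above) puts those generational gains in perspective:

```python
# Reported specs for MTIA v1 vs. the "next generation" MTIA,
# taken from the figures quoted in this article.
mtia_v1 = {"sram_mb": 64, "clock_ghz": 0.8}
mtia_next = {"sram_mb": 124, "clock_ghz": 1.35}

# Generational improvement ratios.
sram_ratio = mtia_next["sram_mb"] / mtia_v1["sram_mb"]        # ~1.94x on-chip memory
clock_ratio = mtia_next["clock_ghz"] / mtia_v1["clock_ghz"]   # ~1.69x clock speed

print(f"SRAM: {sram_ratio:.2f}x, clock: {clock_ratio:.2f}x")
```

So the clock roughly matches the claimed "double the compute" only in combination with other changes; raw clock alone is about a 1.7x step.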

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from more than a year in the past and passes it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
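The hindcast/forecast split described above can be sketched in miniature. This toy sketch is not Google's model; it only mirrors the data flow, with simple exponential smoothing standing in for the LSTM memory:

```python
# Illustrative sketch of the hindcast -> forecast pipeline: a hindcast
# stage folds past observations into a state, and a forecast stage
# combines that state with a 7-day weather forecast, emitting one
# streamflow prediction per day.

def hindcast_state(past_observations):
    """Stand-in for the hindcast LSTM: summarize history into one state."""
    state = 0.0
    for obs in past_observations:        # e.g. a year of daily readings
        state = 0.9 * state + 0.1 * obs  # exponential smoothing as toy "memory"
    return state

def forecast(state, weather_next_7_days):
    """Stand-in for the forecast LSTM: one prediction per forecast day."""
    predictions = []
    for rain in weather_next_7_days:
        state = 0.9 * state + 0.1 * rain  # carry state forward day by day
        predictions.append(state)
    return predictions

past = [10.0] * 365   # toy history: constant streamflow
weather = [5.0] * 7   # toy forecast: 7 days of lower rainfall
preds = forecast(hindcast_state(past), weather)
print(len(preds))  # 7 -- one streamflow estimate per forecast day
```

The key structural point, as in the paper's description, is that the forecast stage never re-reads the full history; it only receives the hindcast stage's compressed state.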

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about 250M euros) for breaking its commitment to pay media outlets for the use of their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers, covering almost 450 publications, under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly for the copyrights. <\/p>\n\n\n\n

The Google France blog stated, \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to follow four of the seven binding commitments under decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"Neighboring Rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to put them into practice. Google agreed to pay French media for using their articles and news in search results. In 2022, Google made a new commitment to present news publishers with a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to provide publishers with a proper mechanism for objecting to Google's use of their content. <\/p>\n\n\n\n

In response, Google has proposed remedial measures<\/a> to address the identified failings and resolve the dispute.<\/p>\n\n\n\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games alongside you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3, to train the agent and test its capabilities. Google DeepMind notes that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is steadily progressing toward mastering the games it knows and adapting to new ones, and the prospect of it eventually learning to talk, like AI NPCs, remains open.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, each available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n
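The resulting four configurations can be enumerated explicitly. The `gemma-2b-it`-style names below follow the naming convention used on public model hubs and are shown for illustration:

```python
# Enumerate the four released Gemma configurations described above:
# two sizes (2B, 7B), each in pre-trained and instruction-tuned form.
sizes = ["2b", "7b"]
variants = {"": "pre-trained", "-it": "instruction-tuned"}

checkpoints = {f"gemma-{size}{suffix}": kind
               for size in sizes
               for suffix, kind in variants.items()}

for name, kind in sorted(checkpoints.items()):
    print(name, "->", kind)
# gemma-2b -> pre-trained
# gemma-2b-it -> instruction-tuned
# gemma-7b -> pre-trained
# gemma-7b-it -> instruction-tuned
```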

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The models also offer extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and the cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in 3 different sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard users will gain access to Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>, stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter), \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside the research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a worthy competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini \u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


iOS 18 Update Latest Features<\/h2>\n\n\n\n

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. Besides that, the company is seeking partners to power a chatbot-like feature similar to OpenAI's ChatGPT to offer users a more conversational experience.<\/p>\n\n\n\n

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Metas\u2019 AI chips was revealed last year and was called Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16 <\/a>of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes this will pave the way for superior infrastructure and AI experience. <\/p>\n\n\n\n

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

Google To Use AI In Forecasting Floods Worldwide

American tech giant Google has stepped forward with its initiative to use AI to forecast floods on a global scale. The company published a research paper in the scientific journal Nature highlighting AI's potential to save lives and limit damage in flood-affected areas. The AI models were developed by the team at Google Research.

According to the paper, AI-based hydrologic technologies can drastically improve flood forecasting even in areas with limited flood-related data. "We found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live", the paper claimed.

See Related: Bank of England's Journey Towards Better Economic Foresight

AI-Based Hydrologic Technology

The hydrologic model was trained on publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks: a hindcast unit and a forecast unit. The hindcast unit processes geophysical data from over a year in the past and passes its state to the forecast unit. The forecast LSTM then combines this state with the weather forecast for the next seven days to make highly accurate streamflow predictions.
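The hindcast-to-forecast handoff described above can be sketched in a few lines. This is a toy illustration only, not Google's implementation: the LSTM cell below is a minimal pure-Python version, and the feature counts, hidden size, random inputs, and linear "streamflow head" are all invented for the example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: just enough to show the state handoff."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        n = n_in + n_hidden
        # Four gates (input, forget, output, candidate), one weight row per hidden unit.
        self.W = [[rng.gauss(0, 0.1) for _ in range(n)] for _ in range(4 * n_hidden)]
        self.nh = n_hidden

    def step(self, x, h, c):
        v = x + h  # concatenated input and previous hidden state
        z = [sum(w * a for w, a in zip(row, v)) for row in self.W]
        nh = self.nh
        i = [sigmoid(u) for u in z[0:nh]]
        f = [sigmoid(u) for u in z[nh:2 * nh]]
        o = [sigmoid(u) for u in z[2 * nh:3 * nh]]
        g = [math.tanh(u) for u in z[3 * nh:4 * nh]]
        c = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h = [oj * math.tanh(cj) for oj, cj in zip(o, c)]
        return h, c

N_FEAT, N_HID = 8, 16
data = random.Random(1)

# Hindcast LSTM: summarize a year of daily geophysical inputs into a state (h, c).
hindcast = LSTMCell(N_FEAT, N_HID, seed=2)
h, c = [0.0] * N_HID, [0.0] * N_HID
for _ in range(365):
    day = [data.gauss(0, 1) for _ in range(N_FEAT)]
    h, c = hindcast.step(day, h, c)

# Forecast LSTM: start from the hindcast state, consume 7 days of weather
# forecasts, and emit one streamflow estimate per day via a linear head.
forecast = LSTMCell(N_FEAT, N_HID, seed=3)
head = [data.gauss(0, 0.1) for _ in range(N_HID)]
preds = []
for _ in range(7):
    weather = [data.gauss(0, 1) for _ in range(N_FEAT)]
    h, c = forecast.step(weather, h, c)
    preds.append(sum(w * hj for w, hj in zip(head, h)))

print(len(preds))  # 7 daily streamflow predictions
```

The key design point the sketch mirrors is that the forecast network is not started cold: it inherits the hindcast network's hidden and cell state, so the seven-day prediction is conditioned on a full year of history.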

"Our goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods", Google stated.

As of 2024, Google's hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.

French Regulators Fined Google $270M For Using News Publishers' Data

French authorities have fined Google $270M (about €250M) for breaking its commitment to pay media outlets for the use of their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD), paying them tens of millions of euros yearly to cover the copyrights.

The Google France blog stated, "We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers."

The Competition Authority fined Google because it failed to follow four of the seven obligatory commitments under decision 22-D-13 of June 21, 2022.

See Related: Coinbase Approved As Virtual Asset Provider in France

Neighboring Rights And Commitments

In 2019, the EU introduced "neighboring rights", which allow print media to demand compensation for the use of their content; France was among the first to put them into practice. Google agreed to pay French media for using their articles and news in searches. In 2022, Google made a further commitment to offer news publishers a transparent payment offer within three months of receiving a copyright claim.

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini, and it failed to give publishers a proper way to object to Google's use of their content.

In response to the identified failings, Google has proposed measures intended to resolve the long-running dispute.

Google's Latest AI Can Play Video Games With You While Following Your Commands

On March 13, Google DeepMind announced its latest AI agent, SIMA (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.

Google DeepMind claims this is the first research of its kind: "This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might".

Google collaborated with 8 game developers, who plugged SIMA into games like No Man's Sky, Teardown, Valheim, and Goat Simulator 3 to train the agent and then test its capability. Google DeepMind noted that SIMA differs from models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own without particular human assistance.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

SIMA Gaming Skills

"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. 'turn left'), object interaction ('climb the ladder'), and menu use ('open the map'). We've trained SIMA to perform simple tasks that can be completed within about 10 seconds", DeepMind mentioned in its blog.

Google has evaluated SIMA's ability to perform almost 1,500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory, and it produces keyboard and mouse outputs.

SIMA is steadily progressing toward mastering game playing and adapting to new games, and the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.

Google Gemma: Google's New Family of State-of-the-Art Open Models

American tech giant Google has recently unveiled Gemma, a "family of lightweight, state-of-the-art open models". The models were developed by Google DeepMind with the help of multiple teams at Google.

"Today, we're excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly", the company stated in a press release.

Gemma is built on the same technology as Gemini, Google's "largest and most capable AI model". The models come in two weight sizes, Gemma 2B and Gemma 7B, with each size offered in pre-trained and instruction-tuned variants.

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with "ready-to-use Colab and Kaggle notebooks". The models also provide extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and the cloud.

See Related: Polygon Teams Up With Google Cloud To Advance Web 3

Google's Collaboration With NVIDIA

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, a result of Google's collaboration with NVIDIA.

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this by stating, "We're also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications". The toolkit includes safety classifiers, a debugging tool, and general guidelines for building responsible AI applications.

Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be incorporated into Gemini, Google's most powerful series of AI models to date.

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in 3 different sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard is being integrated with Gemini Ultra 1.0.

This latest iteration, built on Gemini Ultra, is called Gemini Advanced, and Google claims it is the company's "largest and most capable state-of-the-art AI model".

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives", stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex code, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand to multiple languages.

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Additionally, Lumiere can make videos from existing photos, using text as guidance.

Google also demonstrates the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video-generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated "Gemini", it is "the most capable and general model" built by the company.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project back in May 2023 during Google I/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.

Speaking positively on the impact Gemini will make on the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company".

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts "for early experimentation and feedback".


Apple Engages OpenAI For AI Integration In iOS: Report

See Related: Apple Launches High-Yield Savings Account In Partnership With Goldman Sachs

iOS 18 Update Latest Features

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. The company is also seeking partners to power a chatbot-like feature, similar to OpenAI's ChatGPT, to offer users a more conversational experience.

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.

The first generation of Metas\u2019 AI chips was revealed last year and was called Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16 <\/a>of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes this will pave the way for superior infrastructure and AI experience. <\/p>\n\n\n\n

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and sends it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in three sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now the rebranded Bard will offer access to Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex coding tasks, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video\u201d.<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language model (MLLM) exceeds the performance of similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


The tech giant has reopened discussions<\/a> with OpenAI about utilizing its technology for new features set to debut in iOS 18. Negotiations are underway to determine the terms of a possible agreement and how OpenAI's features would be integrated into the operating system.<\/p>\n\n\n\n

In addition to discussions with OpenAI, Apple is engaging with Google to explore the possibility of licensing its Gemini chatbot technology. However, no final decision has been made regarding which partner or technology will be chosen for integration into iOS 18.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Apple Launches High-Yield Savings Account In Partnership With Goldman Sachs<\/a><\/p>\n\n\n\n

iOS 18 Update Latest Features<\/h2>\n\n\n\n

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. Besides that, the company is seeking partners to power a chatbot-like feature similar to OpenAI's ChatGPT to offer users a more conversational experience.<\/p>\n\n\n\n

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Meta\u2019s AI chips was revealed last year under the name Meta Training and Inference Accelerator v1 (MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes this will pave the way for superior infrastructure and AI experiences. <\/p>\n\n\n\n
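As a back-of-the-envelope illustration of the figures quoted above (computed from this article's numbers, not from Meta's own benchmarks), the generational gains work out to roughly a 1.9x memory increase and a 1.7x clock-speed increase:

```python
# Back-of-the-envelope ratios from the figures quoted in the article
# (article figures, not official Meta benchmark results).
sram_v1_mb, sram_v2_mb = 64, 124          # on-chip internal memory
clock_v1_ghz, clock_v2_ghz = 0.80, 1.35   # clock speed

memory_gain = sram_v2_mb / sram_v1_mb     # ~1.94x more internal memory
clock_gain = clock_v2_ghz / clock_v1_ghz  # ~1.69x higher clock speed

print(f"memory: {memory_gain:.2f}x, clock: {clock_gain:.2f}x")
```

Note that these are peak-spec ratios only; real workload speedups depend on memory bandwidth, compiler maturity, and model shape.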

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and sends it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
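The hindcast-to-forecast handoff described above can be pictured schematically. This is an illustrative sketch only: the real system uses two trained LSTM networks, while here simple averages stand in for them, purely to show the data flow from past observations, through a summary state, to a 7-day prediction:

```python
# Schematic sketch of the two-stage hindcast/forecast design described
# above. Simple averages stand in for the two LSTM networks; nothing
# here reflects Google's actual model weights or architecture.

def hindcast_state(past_streamflow):
    """Compress historical observations into a summary 'state'
    (stand-in for the hindcast LSTM's hidden state)."""
    return sum(past_streamflow) / len(past_streamflow)

def forecast_streamflow(state, weather_next_7_days):
    """Combine the hindcast state with forecast weather to produce one
    streamflow estimate per forecast day (stand-in for the forecast LSTM)."""
    return [0.5 * state + 0.5 * rain for rain in weather_next_7_days]

# Toy inputs: a few days of streamflow and a 7-day rainfall forecast.
past = [10.0, 12.0, 11.0, 9.0]
weather = [0.0, 5.0, 20.0, 3.0, 0.0, 1.0, 8.0]

state = hindcast_state(past)
prediction = forecast_streamflow(state, weather)
print(len(prediction))  # 7 - one estimate per forecast day
```

The key design point the sketch preserves is the separation of concerns: the hindcast stage only ever sees the past, and the forecast stage only combines that summary with forecast weather.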

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about 250M euros) for breaking its commitment to pay media outlets for the use of their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to follow four of the seven obligatory commitments under decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"Neighboring Rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to put them into practice. Google agreed to pay French media for using their articles and news in search results. In 2022, Google made a new commitment to offer news publishers a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to provide publishers with a proper way to object to Google's use of their content. <\/p>\n\n\n\n

In response to the identified failings, Google proposed effective measures<\/a> to resolve this long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent, \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural-language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\".<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train the AI agent and then test its capability. Google DeepMind notes that SIMA is not like AI models such as ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B, with each size offering pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


The tech giant has reopened discussions<\/a> with OpenAI about utilizing its technology for new features set to debut in iOS 18. Negotiations are underway to determine the terms of a possible agreement and how OpenAI's features would be integrated into the operating system.<\/p>\n\n\n\n

In addition to discussions with OpenAI, Apple is engaging with Google to explore the possibility of licensing its Gemini chatbot technology. However, no final decision has been made regarding which partner or technology will be chosen for integration into iOS 18.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Apple Launches High-Yield Savings Account In Partnership With Goldman Sachs<\/a><\/p>\n\n\n\n

iOS 18 Update Latest Features<\/h2>\n\n\n\n

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. Besides that, the company is seeking partners to power a chatbot-like feature similar to OpenAI's ChatGPT to offer users a more conversational experience.<\/p>\n\n\n\n

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Metas\u2019 AI chips was revealed last year and was called Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes this will pave the way for superior infrastructure and AI experiences. <\/p>\n\n\n\n
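Those generational figures imply rough improvement ratios that can be sanity-checked with simple arithmetic. This is a back-of-envelope check on the numbers quoted above, not Meta's own benchmarking methodology:

```python
# Per-chip figures as quoted in the article (v1 vs. "next generation" MTIA).
v1 = {"sram_mb": 64, "clock_ghz": 0.80}
v2 = {"sram_mb": 124, "clock_ghz": 1.35}

sram_ratio = v2["sram_mb"] / v1["sram_mb"]      # on-chip memory growth
clock_ratio = v2["clock_ghz"] / v1["clock_ghz"]  # clock-speed growth

# Roughly 1.94x the internal memory and 1.69x the clock speed.
print(f"SRAM: {sram_ratio:.2f}x, clock: {clock_ratio:.2f}x")
```

Note that these ratios alone do not equal "double the compute and memory bandwidth"; that claim also depends on architectural changes the ratios above do not capture.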

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and sends it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
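The hindcast-to-forecast handoff described above can be sketched in miniature. The snippet below is a toy illustration with untrained random weights and made-up dimensions - Google's production model is vastly larger and trained on real basin data - but it shows the data flow: the hindcast LSTM digests the past year, its final state seeds the forecast LSTM, and the forecast LSTM folds in the 7-day weather forecast to emit one streamflow prediction per day.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step; W packs the input/forget/cell/output gate weights."""
    z = W @ np.concatenate([x, h])
    n = h.size
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c = f * c + i * g          # update cell state
    return o * np.tanh(c), c   # new hidden state, new cell state

def run_lstm(xs, W, h, c):
    for x in xs:
        h, c = lstm_step(x, h, c, W)
    return h, c

HIDDEN, N_FEAT = 8, 4  # toy sizes, chosen only for illustration
W_hind = rng.normal(0, 0.1, (4 * HIDDEN, N_FEAT + HIDDEN))
W_fore = rng.normal(0, 0.1, (4 * HIDDEN, N_FEAT + HIDDEN))
w_out = rng.normal(0, 0.1, HIDDEN)  # linear readout to streamflow

history = rng.normal(size=(365, N_FEAT))  # stand-in for past geophysical data
weather = rng.normal(size=(7, N_FEAT))    # stand-in for the 7-day forecast

# Hindcast unit processes the past year, then hands its state to the
# forecast unit, which consumes the weather forecast day by day.
h, c = run_lstm(history, W_hind, np.zeros(HIDDEN), np.zeros(HIDDEN))
streamflow = []
for day in weather:
    h, c = lstm_step(day, h, c, W_fore)
    streamflow.append(float(w_out @ h))  # one prediction per forecast day

print(len(streamflow))  # 7
```

The key design point mirrored here is the state handoff: long-range context is compressed once by the hindcast network instead of being reprocessed for every forecast horizon.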

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about \u20ac250M) for breaking its commitments on paying media outlets for the use of their content in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements, covering 280 French press publishers and almost 450 publications, under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under Decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"Neighboring Rights\", which allow print media to demand compensation for the use of their content; France was among the first to put them into practice. Google agreed to pay French media for using their articles or news in searches. In 2022, Google made a new commitment to offer news publishers a transparent payment offer within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini. It also failed to provide publishers with a proper way to object to Google's use of their content. <\/p>\n\n\n\n

In response to the identified failings, Google proposed corrective measures<\/a> to settle the long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than a typical AI. It can easily follow natural-language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

Google DeepMind claims this is the first research of its kind: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind noted that SIMA is unlike AI models such as ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, while SIMA is trained to operate on its own.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n
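To illustrate the shape of the instruction-to-action interface, here is a deliberately simplified sketch: a hypothetical lookup table mapping the kinds of short skills quoted above onto keyboard and mouse outputs. SIMA itself uses a learned policy over video and language rather than a lookup table, and every name below is illustrative only.

```python
# Hypothetical skill table: instruction -> low-level keyboard/mouse actions.
# SIMA's real mapping is learned, not hand-written; this only shows the
# interface shape (language in, keyboard/mouse actions out).
SKILLS = {
    "turn left": [("mouse_move", -200, 0)],      # navigation
    "climb the ladder": [("key_hold", "w")],     # object interaction
    "open the map": [("key_press", "m")],        # menu use
}

def act(instruction):
    """Return the action sequence for one instruction, or None if unknown."""
    return SKILLS.get(instruction.strip().lower())

print(act("Open the map"))  # [('key_press', 'm')]
```

The point of the sketch is that each of SIMA's roughly 10-second skills bottoms out in the same keyboard-and-mouse action space a human player would use.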

SIMA is steadily progressing toward mastering gameplay and adapting to new games, and the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, with each size offering pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility, working across laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced with 3 different models - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard is being integrated with Gemini Ultra 1.0.<\/p>\n\n\n\n

This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

\u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Apple is in talks with OpenAI and Google about potential partnerships to bring new AI features to the iPhone's upcoming software update, Bloomberg<\/em> reported. These discussions signify Apple's interest in using AI to enhance the user experience.<\/p>\n\n\n\n

The tech giant has reopened discussions<\/a> with OpenAI about utilizing its technology for new features set to debut in iOS 18. Negotiations are underway to determine the terms of a possible agreement and how OpenAI's features would be integrated into the operating system.<\/p>\n\n\n\n

In addition to discussions with OpenAI, Apple is engaging with Google to explore the possibility of licensing its Gemini chatbot technology. However, no final decision has been made regarding which partner or technology will be chosen for integration into iOS 18.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Apple Launches High-Yield Savings Account In Partnership With Goldman Sachs<\/a><\/p>\n\n\n\n

iOS 18 Update Latest Features<\/h2>\n\n\n\n

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. Besides that, the company is seeking partners to power a chatbot-like feature similar to OpenAI's ChatGPT to offer users a more conversational experience.<\/p>\n\n\n\n

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Metas\u2019 AI chips was revealed last year and was called Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

\u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16 <\/a>of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes this will pave the way for superior infrastructure and AI experience. <\/p>\n\n\n\n

Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and sends it to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n

\u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M(About 250M Euro) for breaking its commitment to paying media outlets to use their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a> paying them tens of millions of euros yearly to cover the copyrights. <\/p>\n\n\n\n

Google France Blog mentioned \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\u00a0\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google because it didn't follow four of the seven obligatory commitments under the decision 22-D -13 of June 21, 2022. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019 the EU introduced \"Neighboring Rights\" which made print media capable of demanding compensation for using their content and this was in trial phases in France. Google agreed to pay French Media for using their articles or news in searches. In 2022, a new commitment was made by Google, which says that Google should offer news publishers a transparent offer of payment within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google didn't regard the commitments and used publishers' data to train its AI chatbot Bard, currently known as Gemini. So Google failed to provide a proper solution for publishers, allowing them to object to using their content by Google. <\/p>\n\n\n\n

In response, Google proposed effective measures<\/a> in response to identified failings to solve this dispute which has gone too far.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

SIMA Gaming Skills<\/h2>\n\n\n\n

\"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

American tech giant Google has recently unveiled Gemma, a "family of lightweight, state-of-the-art open models". The models were developed by Google DeepMind with the help of multiple teams at Google.

"Today, we're excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly," the company stated in a press release.

Gemma is built on the same technology as Gemini, Google's "largest and most capable AI model". The models come in two sizes, Gemma 2B and Gemma 7B, each released in pre-trained and instruction-tuned variants.

Additionally, the company has released several tools to help developers build new AI applications. Gemma comes packaged with "ready-to-use Colab and Kaggle notebooks". The models also provide extensive cross-device compatibility, running on laptops, desktops, IoT devices, mobile, and in the cloud.

See Related: Polygon Teams Up With Google Cloud To Advance Web 3

Google's Collaboration With NVIDIA

Another notable aspect of Gemma is its optimization for NVIDIA GPUs, part of Google's collaboration with NVIDIA.

The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this by stating, "We're also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications". The toolkit includes safety classifiers, a debugging tool, and general guidelines for building responsible AI applications.

[Article: "Google Gemma: Google's New Family of State-of-the-Art Open Models", published 2024-02-29]

Google has decided to rebrand its flagship chatbot. Previously known as Bard, the chatbot, along with Google Assistant, will be incorporated into Gemini, Google's most powerful series of AI models to date.

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in three sizes: Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year; now Bard will be integrated into Gemini Ultra 1.0.

This latest iteration of Gemini Ultra is also called Gemini Advanced, and Google claims it is the company's "largest and most capable state-of-the-art AI model".

See Related: Bard Enhances YouTube Experience Through Video Comprehension Capabilities

"Today we're launching Gemini Advanced — a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives," stated Sissie Hsiao, Vice President and General Manager of Google Assistant and Gemini Experiences (formerly known as Bard).

Gemini Advanced can help users with complex coding, detailed instructions, and logical reasoning. Google says it will continue to add new features as it accelerates its AI research.

Gemini Advanced is available on both Android and iOS. Google has rolled out Gemini in English in over 150 regions, with plans to expand to multiple languages.

[Article: "Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect", published 2024-02-16]

Google recently revealed a demo trailer for its new Lumiere AI, a tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Additionally, Lumiere can make videos from existing photos, using text as a guideline.

Google also demonstrates the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

[Article: "A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere", published 2024-01-31]

Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated "Gemini", it is "the most capable and general model" built by the company.

According to Demis Hassabis, CEO and Co-Founder of Google DeepMind, "Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research." Google first announced the project back in May 2023 during Google I/O. Since then, Gemini has garnered plenty of attention as a serious competitor to OpenAI's GPT-4.

According to Hassabis, Gemini "was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video."

See Related: Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs

Sizes In Gemini 1.0

The first generation of Gemini (Gemini 1.0) comes in three sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new multimodal large language models (MLLMs) exceed the performance of similar models on most academic benchmarks, such as MMLU and GSM8K.

Speaking positively on the impact Gemini will make on the AI industry and the potential it holds, Google CEO Sundar Pichai said, "This new era of models represents one of the biggest science and engineering efforts we've undertaken as a company".

Currently, Google is integrating Gemini Pro into many of its products, including Bard and Google Pixel. Gemini Ultra is available only to select individuals and experts "for early experimentation and feedback".

[Article: "Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini", published 2023-12-29]

• Negotiations are ongoing to determine how AI features would be integrated into the iOS 18 operating system.
• Privacy remains a top priority for Apple in its AI integration efforts.

Apple is in talks with OpenAI and Google about potential partnerships to bring new AI features to the iPhone's upcoming update, Bloomberg reported. These discussions signify Apple's interest in using AI to enhance the user experience.

The tech giant has reopened discussions with OpenAI about utilizing its technology for new features set to debut in iOS 18. Negotiations are underway to determine the terms of a possible agreement and how OpenAI's features would be integrated into the operating system.

In addition to discussions with OpenAI, Apple is engaging with Google to explore the possibility of licensing its Gemini chatbot technology. However, no final decision has been made regarding which partner or technology will be chosen for integration into iOS 18.

See Related: Apple Launches High-Yield Savings Account In Partnership With Goldman Sachs

iOS 18 Update: Latest Features

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. The company is also seeking partners to power a chatbot-like feature, similar to OpenAI's ChatGPT, to offer users a more conversational experience.

Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with established AI providers, Apple aims to deliver AI-powered functionality while maintaining robust privacy protections for its users.

As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.

[Article: "Apple Engages OpenAI For AI Integration In iOS: Report", published 2024-05-24]

The first generation of Meta's AI chips, revealed last year, was called the Meta Training and Inference Accelerator v1 (MTIA v1). In a blog post, the company reveals that the newer chips are simply titled "next generation" MTIA.

"The next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems," the company states.

See Related: Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming

Meta claims its latest chip has "double the compute and memory bandwidth" of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16 of Meta's data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes they will pave the way for superior infrastructure and AI experiences.

Meta also indicates that it will continue to improve these chips, stating, "We currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads".

[Article: "Meta Announces "Next Generation" AI Chip A Day After Intel And Google", published 2024-04-17]

American tech giant Google has stepped forward with an initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature highlighting AI's potential for saving lives and limiting damage in flood-affected areas. The AI models were developed by the team at Google Research.

According to the paper, AI-based hydrologic technologies can drastically improve flood forecasting, even in areas with limited flood-related data. "We found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live," the paper claimed.

See Related: Bank of England's Journey Towards Better Economic Foresight

AI-Based Hydrologic Technology

The hydrologic model was trained on publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks: a hindcast unit and a forecast unit. The hindcast unit analyzes geophysical data from over a year in the past and passes its summary to the forecast unit. The forecast LSTM then combines this data with the weather forecast for the next seven days to make highly accurate streamflow predictions.
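The hindcast-then-forecast pattern described above can be sketched in a few lines. This is a minimal illustration only: a toy tanh-RNN cell with random weights stands in for the paper's trained LSTM units, and all feature names and sizes are assumptions.

```python
import numpy as np

def run_cell(inputs, h, W_x, W_h):
    """Toy tanh-RNN cell standing in for one of the paper's LSTM units."""
    states = []
    for x_t in inputs:
        h = np.tanh(W_x @ x_t + W_h @ h)  # carry state forward in time
        states.append(h)
    return h, states

rng = np.random.default_rng(0)
n_feat, n_hidden = 4, 8  # e.g. precipitation, soil moisture, ... (illustrative)

# Random stand-in weights; the real model learns these (and uses
# separate weights for the hindcast and forecast units).
W_x = rng.normal(0.0, 0.3, (n_hidden, n_feat))
W_h = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
W_out = rng.normal(0.0, 0.3, (1, n_hidden))

# Hindcast unit: digest about a year of past geophysical data into a state.
past_year = rng.normal(size=(365, n_feat))
summary, _ = run_cell(past_year, np.zeros(n_hidden), W_x, W_h)

# Forecast unit: start from the hindcast state and roll the 7-day weather
# forecast forward into one streamflow prediction per day.
weather_7d = rng.normal(size=(7, n_feat))
_, daily_states = run_cell(weather_7d, summary, W_x, W_h)
streamflow = [(W_out @ h).item() for h in daily_states]

print(len(streamflow))  # 7 daily predictions
```

The design point the sketch shows is the handoff: the forecast unit does not see the raw past year of data, only the fixed-size state the hindcast unit distilled from it.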

"Our goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods," Google stated.

As of 2024, Google's hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.

[Article: "Google To Use AI In Forecasting Floods Worldwide", published 2024-03-28]

French authorities have fined Google $270M (about 250M euros) for breaking its commitment to pay media outlets for using their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD), paying them tens of millions of euros yearly to cover the copyrights.

The Google France blog mentioned: "We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers."

The Competition Authority fined Google because it did not follow four of the seven obligatory commitments under decision 22-D-13 of June 21, 2022.

See Related: Coinbase Approved As Virtual Asset Provider in France

Neighboring Rights And Commitments

In 2019, the EU introduced "neighboring rights", which made print media capable of demanding compensation for the use of their content; France was among the first to apply them. Google agreed to pay French media for using their articles or news in searches. In 2022, Google made a new commitment to offer news publishers a transparent payment offer within three months of receiving a copyright claim.

Google disregarded the commitments and used publishers' data to train its AI chatbot Bard, now known as Gemini, and it failed to provide a proper mechanism allowing publishers to object to the use of their content.

In response, Google has proposed corrective measures to address the identified failings and settle the long-running dispute.

[Article: "French Regulators Fined Google $270M For Using News Publishers' Data", published 2024-03-24]

    On March 13, Google De<\/a>e<\/a>pMind<\/a> announced the latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent) which can actively play games with you while following your commands. SIMA has been trained with a range of gaming skills to play more like a human than some typical AI. It can easily follow natural language instructions and perform tasks you assign across different games.<\/p>\n\n\n\n

    This is the first research of its kind, as Google DeepMind claims.\" This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might\"<\/em><\/p>\n\n\n\n

    Google collaborated with 8 game developers who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim,\u00a0and\u00a0Goat Simulator 3\u00a0to train this AI agent and then test its capability. Google DeepMind mentioned that SIMA is not like other AI models like ChatGPT and Gemini. Although trained on large datasets, these models still require human assistance. While SIMA is trained to operate on its own without any particular human assistance.<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

    SIMA Gaming Skills<\/h2>\n\n\n\n

    \"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

    Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

    SIMA is confidently progressing towards mastering game playing and adapting to new ones, although the prospect of it eventually learning to talk, like AI NPCs, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

    American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

    \u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

    Gemma is built on the same technology as Gemini, Google\u2019s\u201d largest and most capable AI model\u201d. The models come in two weight sizes: Gemma 2B and Gemma 7B with each size implementing pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

    Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

    Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

    Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

    The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

    Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

    Gemini is a series of multimodal large language models (LLM) that were released late last year. Gemini was announced with 3 different models - Gemini Mini, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard will be integrated into Gemini Ultra version 1.0.<\/p>\n\n\n\n

    This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

    \u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

    Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

    Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

    Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

    Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

    See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

    Capabilities Of Lumiere<\/h2>\n\n\n\n

    As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

    Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

    In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

    At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

    Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

    According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d. <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

    According to Hassabis, Gemini\u00a0\u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d.<\/em><\/strong><\/p>\n\n\n\n

    See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

    Sizes In Gemini 1.0<\/h2>\n\n\n\n

    The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Mini. Google claims their new MLLM (multimodal large language models) exceeds the performance of other similar models on most academic benchmarks such as MMLU, GSM8K, etc.<\/p>\n\n\n\n

    Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \"This new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d<\/em><\/strong>.<\/p>\n\n\n\n

    Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

  • Negotiations are ongoing to determine how AI features would be integrated into the iOS 18 operating system.<\/li>\n\n\n\n
  • Privacy remains a top priority for Apple in its AI integration efforts.<\/li>\n<\/ul>\n\n\n\n

Apple is in talks with OpenAI and Google about potential partnerships to bring new AI features to the iPhone's upcoming software update, Bloomberg<\/em> reported. The discussions signal Apple's interest in using AI to enhance the user experience.<\/p>\n\n\n\n

    The tech giant has reopened discussions<\/a> with OpenAI about utilizing its technology for new features set to debut in iOS 18. Negotiations are underway to determine the terms of a possible agreement and how OpenAI's features would be integrated into the operating system.<\/p>\n\n\n\n

    In addition to discussions with OpenAI, Apple is engaging with Google to explore the possibility of licensing its Gemini chatbot technology. However, no final decision has been made regarding which partner or technology will be chosen for integration into iOS 18.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>Apple Launches High-Yield Savings Account In Partnership With Goldman Sachs<\/a><\/p>\n\n\n\n

    iOS 18 Update Latest Features<\/h2>\n\n\n\n

Apple's upcoming iOS 18 update is expected to introduce several new features leveraging Apple's in-house large language model. The company is also seeking a partner to power a chatbot-like feature, similar to OpenAI's ChatGPT, that would offer users a more conversational experience.<\/p>\n\n\n\n

    Privacy remains a top priority for Apple as it explores AI integration. The company aims to ensure that any AI feature introduced in iOS 18 prioritizes user privacy and data security. By partnering with the two established AI providers, Apple aims to deliver AI-powered functionalities while maintaining robust privacy protections for its users.<\/p>\n\n\n\n

    As Apple prepares for its Worldwide Developers Conference, anticipation is building around the unveiling of new AI software and services. With discussions ongoing with both OpenAI and Google, the path forward for Apple's AI endeavors remains dynamic.<\/p>\n","post_title":"Apple Engages OpenAI For AI Integration In iOS: Report","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"apple-engages-openai-for-ai-integration-in-ios-report","to_ping":"","pinged":"","post_modified":"2024-05-24 19:49:42","post_modified_gmt":"2024-05-24 09:49:42","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16625","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16423,"post_author":"17","post_date":"2024-04-17 04:37:30","post_date_gmt":"2024-04-16 18:37:30","post_content":"\n

The first generation of Meta\u2019s AI chips was revealed last year and was called the Meta Training and Inference Accelerator v1 (or MTIA v1). In a blog post<\/a>, the company reveals that the newer chips are simply titled \u201cnext generation\u201d MTIA. <\/p>\n\n\n\n

    \u201cThe next generation of MTIA is part of our broader full-stack development program for custom, domain-specific silicon that addresses our unique workloads and systems\u201d<\/em>, the company states.\u00a0<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Meta Apes Launches on BNB Application Sidechain to Give Gamers the Best of Both Web2 and Web3 Gaming<\/a><\/p>\n\n\n\n

Meta claims its latest chip has \u201cdouble the compute and memory bandwidth\u201d of previous versions. It offers more internal memory (124MB compared to 64MB) and a higher clock speed (1.35GHz compared to 800MHz). The new chips are reported to be running in 16<\/a> of Meta\u2019s data center regions. Although the chips are not exclusively meant for training generative AI models, the company believes they will pave the way for better infrastructure and AI experiences. <\/p>\n\n\n\n

    Meta also indicates that they will continue to improve these chips, stating, \u201cWe currently have several programs underway aimed at expanding the scope of MTIA, including support for GenAI workloads\u201d. <\/p>\n","post_title":"Meta Announces \u201cNext Generation\u201d AI Chip A Day After Intel And Google","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-announces-next-generation-ai-chip-a-day-after-intel-and-google","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/04\/introducing-our-next-generation-infrastructure-for-ai\/","post_modified":"2024-04-17 04:37:36","post_modified_gmt":"2024-04-16 18:37:36","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16423","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16038,"post_author":"17","post_date":"2024-03-28 23:20:07","post_date_gmt":"2024-03-28 12:20:07","post_content":"\n

    American tech giant Google has stepped forward with its initiative to utilize AI in forecasting floods on a global scale. The company published a research paper in the scientific journal Nature, highlighting AI's potential in saving lives and limiting damages in flood-affected areas. The AI models have been developed by the team at Google Research.<\/p>\n\n\n\n

    According to the paper, using AI-based hydrologic technologies can drastically improve flood forecasting even in areas where there is limited flood-related data. \u201cWe found that AI helped us to provide more accurate information on riverine floods up to 7 days in advance. This allowed us to provide flood forecasting in 80 countries in areas where 460 million people live\u201d<\/em><\/strong>, the paper claimed<\/a>.<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Bank of England\u2019s Journey Towards Better Economic Foresight<\/a><\/p>\n\n\n\n

    AI-based Hydrologic Technology<\/h2>\n\n\n\n

The hydrologic model has been trained using publicly available data such as soil attributes, streamflow gauges, and weather forecasts. It uses two Long Short-Term Memory (LSTM) networks - a hindcast unit and a forecast unit. The hindcast unit analyzes more than a year of historical geophysical data and passes the resulting state to the forecast unit. The forecast LSTM then combines this state with the weather forecast for the next seven days to make highly accurate streamflow predictions. <\/p>\n\n\n\n
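The hindcast/forecast hand-off described above can be sketched in miniature. This is an illustrative toy only: the simple LSTM cell, the tiny sizes, the random untrained weights, and the linear readout are all assumptions made for illustration, and none of it reflects Google's actual model.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_weights(rng, n_in, n_hidden):
    """Random (untrained) weights for the 4 stacked LSTM gates: i, f, o, g."""
    def mat(rows, cols):
        return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]
    return {"W": mat(4 * n_hidden, n_in),      # input-to-gate weights
            "U": mat(4 * n_hidden, n_hidden),  # hidden-to-gate weights
            "b": [0.0] * (4 * n_hidden)}       # gate biases

def lstm_step(x, h, c, p):
    """One step of a standard LSTM cell over input x with state (h, c)."""
    n = len(h)
    z = [sum(wr[j] * x[j] for j in range(len(x))) +
         sum(ur[j] * h[j] for j in range(n)) + bk
         for wr, ur, bk in zip(p["W"], p["U"], p["b"])]
    i = [sigmoid(v) for v in z[0:n]]           # input gate
    f = [sigmoid(v) for v in z[n:2 * n]]       # forget gate
    o = [sigmoid(v) for v in z[2 * n:3 * n]]   # output gate
    g = [math.tanh(v) for v in z[3 * n:4 * n]] # candidate cell values
    c = [f[k] * c[k] + i[k] * g[k] for k in range(n)]
    h = [o[k] * math.tanh(c[k]) for k in range(n)]
    return h, c

rng = random.Random(0)
n_in, n_hidden = 3, 4  # toy feature and state sizes (assumptions)
past = [[rng.gauss(0, 1) for _ in range(n_in)] for _ in range(365)]  # ~1 year of inputs
future = [[rng.gauss(0, 1) for _ in range(n_in)] for _ in range(7)]  # 7-day forecast inputs

# Hindcast LSTM compresses the past year of data into a state (h, c)...
hind = make_weights(rng, n_in, n_hidden)
h, c = [0.0] * n_hidden, [0.0] * n_hidden
for x in past:
    h, c = lstm_step(x, h, c, hind)

# ...which initializes the forecast LSTM over the 7-day horizon,
# with a linear readout standing in for the streamflow prediction.
fore = make_weights(rng, n_in, n_hidden)
head = [rng.uniform(-0.1, 0.1) for _ in range(n_hidden)]
preds = []
for x in future:
    h, c = lstm_step(x, h, c, fore)
    preds.append(sum(head[k] * h[k] for k in range(n_hidden)))

print(len(preds))  # one streamflow value per forecast day
```

The key design point survives even in this toy: the forecast network does not start from a blank state, but from a summary of long-range history produced by the hindcast network.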

    \u201cOur goal is to continue using our research capabilities and technology to further increase our coverage, as well as forecast other types of flood-related events and disasters, including flash floods and urban floods\u201d<\/em><\/strong>, Google stated.<\/p>\n\n\n\n

    As of 2024, Google\u2019s hydrologic model covers 80 regions across Africa, Asia, Europe, and both South and Central America. The relevant data are available on the Flood Hub platform.<\/p>\n","post_title":"Google To Use AI In Forecasting Floods Worldwide","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-to-use-ai-in-forecasting-floods-worldwide","to_ping":"","pinged":"","post_modified":"2024-03-28 23:20:13","post_modified_gmt":"2024-03-28 12:20:13","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16038","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15993,"post_author":"20","post_date":"2024-03-24 13:27:02","post_date_gmt":"2024-03-24 02:27:02","post_content":"\n

French authorities have fined Google $270M (about 250M euros) for breaking its commitment to pay media outlets for using their data in search results and references. A report also mentioned that Google used publishers' data to train Gemini without informing the owners.<\/p>\n\n\n\n

Google was the only platform to sign licensing agreements with 280 French press publishers and almost 450 publications under the European Copyright Directive (EUCD)<\/a>, paying them tens of millions of euros yearly to cover the copyright fees. <\/p>\n\n\n\n

Google's France blog stated: \"We have compromised because it is time to turn the page and, as our numerous agreements with publishers prove, we want to focus on sustainable approaches to connect Internet users with quality content and work constructively with publishers.\"<\/em><\/p>\n\n\n\n

The Competition Authority fined Google for failing to follow four of the seven obligatory commitments under decision 22-D-13 of June 21, 2022. <\/p>\n\n\n\n

    See Related:<\/em><\/strong> Coinbase Approved As Virtual Asset Provider in France<\/a><\/p>\n\n\n\n

    Neighboring Rights And Commitments<\/h2>\n\n\n\n

In 2019, the EU introduced \"neighboring rights\", which allow print media to demand compensation for the use of their content; France was among the first countries to apply them. Google agreed to pay French media outlets for using their articles and news in search results. In 2022, Google made a new commitment to offer news publishers a transparent payment proposal within three months of receiving a copyright claim.<\/p>\n\n\n\n

Google disregarded these commitments and used publishers' data to train its AI chatbot Bard (now known as Gemini), and it failed to give publishers a proper way to object to the use of their content. <\/p>\n\n\n\n

In response, Google proposed remedial measures<\/a> to address the identified failings and resolve the long-running dispute.<\/p>\n","post_title":"French Regulators Fined Google $270M For Using News Publishers' Data","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"french-regulators-fined-google-270m-for-using-news-publishers-data","to_ping":"","pinged":"","post_modified":"2024-03-24 13:27:35","post_modified_gmt":"2024-03-24 02:27:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15993","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15899,"post_author":"20","post_date":"2024-03-16 05:54:52","post_date_gmt":"2024-03-15 18:54:52","post_content":"\n

On March 13, Google DeepMind<\/a> announced its latest AI agent \"SIMA\" (Scalable Instructable Multiworld Agent), which can actively play games with you while following your commands. SIMA has been trained on a range of gaming skills to play more like a human than a typical AI. It can follow natural language instructions and perform the tasks you assign across different games.<\/p>\n\n\n\n

This is the first research of its kind, as Google DeepMind claims: \"This research marks the first time an agent has demonstrated it can understand a broad range of gaming worlds, and follow natural-language instructions to carry out tasks within them, as a human might.\"<\/em><\/p>\n\n\n\n

Google collaborated with 8 game developers, who plugged SIMA into games like No Man\u2019s Sky, Teardown, Valheim, and Goat Simulator 3, to train the AI agent and then test its capabilities. Google DeepMind noted that SIMA differs from AI models like ChatGPT and Gemini: although trained on large datasets, those models still require human assistance, whereas SIMA is trained to operate on its own.<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race(Opens in a new browser tab)<\/a><\/p>\n\n\n\n

    SIMA Gaming Skills<\/h2>\n\n\n\n

    \"The current version of SIMA is evaluated across 600 basic skills, spanning navigation (e.g. \"turn left\"), object interaction (\"climb the ladder\"), and menu use (\"open the map\"). We\u2019ve trained SIMA to perform simple tasks that can be completed within about 10 seconds\" <\/em>DeepMind mentioned in their blog.<\/p>\n\n\n\n

    Google has evaluated SIMA's ability to perform almost 1500 in-game tasks. SIMA consists of a learning system with pre-trained vision models and a memory that supports keyboard and mouse outputs. <\/p>\n\n\n\n

SIMA is steadily progressing toward mastering the games it knows and adapting to new ones, and the prospect of it eventually learning to talk, like AI NPCs do, remains a possibility.<\/p>\n","post_title":"Google's Latest AI Can Play Video Games With You While Following Your Commands","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"googles-latest-ai-can-play-video-games-with-you-while-following-your-commands","to_ping":"","pinged":"","post_modified":"2024-03-16 05:54:59","post_modified_gmt":"2024-03-15 18:54:59","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15899","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15647,"post_author":"17","post_date":"2024-02-29 22:32:26","post_date_gmt":"2024-02-29 11:32:26","post_content":"\n

    American tech giant Google has recently unveiled Gemma, a \u201cfamily of lightweight, state-of-the-art open models<\/a>\u201d. The models were developed by Google DeepMind with the help of multiple teams at Google.<\/p>\n\n\n\n

    \u201cToday, we\u2019re excited to introduce a new generation of open models from Google to assist developers and researchers in building AI responsibly\u201d<\/em><\/strong>, the company stated<\/a> in a press release.<\/p>\n\n\n\n

Gemma is built on the same technology as Gemini, Google\u2019s \u201clargest and most capable AI model\u201d. The models come in two weight sizes, Gemma 2B and Gemma 7B, each available in pre-trained and instruction-tuned variants.<\/p>\n\n\n\n

    Additionally, the company has also released several tools to help developers innovate new AI applications. Gemma comes packaged with \u201cReady-to-use Colab and Kaggle notebooks\u201d. The model also provides extensive cross-device compatibility as it works on laptops, desktops, IoT, mobile, and cloud.<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Polygon Teams Up With Google Cloud To Advance Web 3<\/a><\/p>\n\n\n\n

    Google\u2019s Collaboration With NVIDIA<\/h2>\n\n\n\n

    Another notable aspect of Gemma is its optimization for NVIDIA GPUs as part of Google\u2019s collaboration with NVIDIA.<\/p>\n\n\n\n

    The rapid advancement of generative AI has given rise to many safety and ethical concerns. Google has addressed this issue by stating, \u201cWe\u2019re also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications\u201d<\/em><\/strong>. The toolkit includes powerful safety classifiers, a debugging tool, and general guidelines for building responsible AI applications. <\/p>\n","post_title":"Google Gemma: Google's New Family of State-of-the-Art Open Models","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-gemma-googles-new-family-of-state-of-the-art-open-models","to_ping":"","pinged":"","post_modified":"2024-02-29 22:32:31","post_modified_gmt":"2024-02-29 11:32:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15647","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15408,"post_author":"17","post_date":"2024-02-16 22:20:00","post_date_gmt":"2024-02-16 11:20:00","post_content":"\n

    Google has decided to rebrand its flagship chatbot. Previously known as Bard, this chatbot as well as Google Assistant will both be incorporated into Gemini, Google\u2019s most powerful series of AI models to date.<\/p>\n\n\n\n

Gemini is a series of multimodal large language models (LLMs) released late last year. Gemini was announced in 3 different sizes - Gemini Nano, Gemini Pro, and Gemini Ultra. Google already released Gemini Pro 1.0 last year. Now Bard is being integrated with Gemini Ultra 1.0.<\/p>\n\n\n\n

    This latest iteration of Gemini Ultra is also called Gemini Advanced and Google claims it is the company\u2019s \u201clargest and most capable state-of-the-art AI model\u201d.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>Bard Enhances YouTube Experience Through Video Comprehension Capabilities<\/a><\/p>\n\n\n\n

    \u201cToday we\u2019re launching Gemini Advanced \u2014 a new experience that gives you access to Ultra 1.0, our largest and most capable state-of-the-art AI model. In blind evaluations with our third-party raters, Gemini Advanced with Ultra 1.0 is now the most preferred chatbot compared to leading alternatives\u201d<\/em>,\u00a0stated Sissie Hsiao<\/a>, Vice President and General Manager, of Google Assistant and Gemini Experiences (formerly known as Bard).<\/p>\n\n\n\n

    Gemini Advanced can help users with complex codes, detailed instructions, and logical reasoning. Google says it will continue to implement new features as it accelerates its AI research.<\/p>\n\n\n\n

    Gemini Advanced is available both on Android and iOS platforms. Google has rolled out Gemini in English in over 150 regions with plans to expand it to multiple languages.<\/p>\n","post_title":"Google Rebrands Its Flagship Chatbot Bard Into Gemini: Here Is What To Expect","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-rebrands-its-flagship-chatbot-bard-into-gemini-here-is-what-to-expect","to_ping":"","pinged":"","post_modified":"2024-02-16 22:20:04","post_modified_gmt":"2024-02-16 11:20:04","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15408","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

    Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter), \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

    See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

    Capabilities Of Lumiere<\/h2>\n\n\n\n

    As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

    Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

    In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

    At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":14802,"post_author":"17","post_date":"2023-12-29 23:01:53","post_date_gmt":"2023-12-29 12:01:53","post_content":"\n

    Google has recently unveiled its latest and most ambitious AI endeavor yet. Designated as \u201cGemini\u201d, it is \u201cthe most capable and general model\u201d built by the company. <\/p>\n\n\n\n

According to Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind, \u201cGemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research.\u201d <\/em><\/strong>Google first announced the project back in May 2023 during Google I\/O. Since then, Gemini has garnered plenty of attention as a suitable competitor to OpenAI\u2019s GPT-4.<\/p>\n\n\n\n

According to Hassabis, Gemini \u201cwas built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image, and video.\u201d<\/em><\/strong><\/p>\n\n\n\n

    See Related:<\/em><\/strong> Lightning Network Upgrades Coming To El Salvador Bitcoin ATMs<\/a><\/p>\n\n\n\n

    Sizes In Gemini 1.0<\/h2>\n\n\n\n

The first generation of Gemini (called Gemini 1.0) comes in 3 different sizes: Gemini Ultra, Gemini Pro, and Gemini Nano. Google claims its new MLLMs (multimodal large language models) exceed the performance of other similar models on most academic benchmarks, such as MMLU and GSM8K.<\/p>\n\n\n\n

Speaking positively on the impact Gemini will make in the AI industry and the potential it holds, Google CEO Sundar Pichai said, \u201cThis new era of models represents one of the biggest science and engineering efforts we\u2019ve undertaken as a company\u201d.<\/em><\/strong><\/p>\n\n\n\n

    Currently, Google is integrating Gemini Pro in many of its products, including Bard and Google Pixel. Gemini Ultra is only available to selected individuals and experts \u201cfor early experimentation and feedback\u201d.<\/em><\/strong><\/p>\n","post_title":"Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-its-largest-and-most-capable-ai-model-yet-google-gemini","to_ping":"","pinged":"","post_modified":"2023-12-29 23:01:58","post_modified_gmt":"2023-12-29 12:01:58","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=14802","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
