Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM),” the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration.”

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.
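The checkpoint lineup described above (pre-trained and mixture-of-tasks variants at multiple resolutions, hosted on Hugging Face) can be sketched in a few lines of Python. The `paligemma_repo_id` helper and the repo-id pattern are assumptions inferred from how the checkpoints are listed on the Hugging Face Hub, not an official API; verify the exact ids and which variant/resolution combinations exist before use.

```python
def paligemma_repo_id(variant: str = "mix", resolution: int = 224) -> str:
    """Build a Hugging Face Hub repo id for a PaliGemma checkpoint (assumed pattern).

    variant: "pt" (pre-trained) or "mix" (tuned on a mixture of tasks).
    resolution: input image size in pixels; 224, 448, and 896 are the published sizes.
    """
    if variant not in {"pt", "mix"}:
        raise ValueError(f"unknown variant: {variant!r}")
    if resolution not in {224, 448, 896}:
        raise ValueError(f"unsupported resolution: {resolution}")
    return f"google/paligemma-3b-{variant}-{resolution}"

# Loading then follows the standard transformers pattern (the model license
# must be accepted on the Hub first, so this part is left as comments):
#
#   from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
#   model_id = paligemma_repo_id("mix", 224)
#   processor = AutoProcessor.from_pretrained(model_id)
#   model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)
```

The task-specific fine-tuned checkpoints Google mentions carry an additional task suffix on the Hub, so they are not covered by this simple helper.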
Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and improving search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken.”

The post goes on to elaborate on some of the corrections Google has made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.
Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google Improves AI Overviews In Light Of Recent Controversy (June 10, 2024)

Google's AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues and assured users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overview feature available to everyone in the US. The feature provides AI-generated answers to users' queries; its purpose is to enhance the user experience and deliver better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably drawn scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded in a blog post: "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to describe some of the corrections: better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma (June 2, 2024)

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added: "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries; its purpose is to enhance the user experience and deliver better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where the feature was not proving helpful.


Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini," the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time, offering solutions or generating answers to a given question, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also says it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and to other languages in the coming weeks.


Google Makes Imagen 3 Available To US Users

The expansion of Imagen 3's availability coincides with the release of Grok-2, a rival AI model developed by X. Notably, Grok-2 applies much more relaxed content filters, which has drawn many comparisons between the two.

Imagen 3 was originally announced during the Google I/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled a new feature for its flagship AI model, called Gemini Live. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini," the company stated during its keynote speech.

Gemini Live lets users converse freely with Gemini. The AI responds in real time, offering solutions or answering questions, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversation, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues and said it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter), which has predictably led to scrutiny of the quality of Google's products. Experts have also questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections made, including better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries where AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, including Gemma 2 and Gemini 1.5 Flash.

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


American tech giant Google has recently released the Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free for all users in the US. It is reported to bring “Google's state of the art image generative AI capabilities to application developers.”

In a research paper accompanying the release, Google states, “We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.” The paper details quality and safety considerations regarding the product and describes various user experiences.

So far, the response to the new AI has been mixed. Some users highlight its improved texture and better attention to detail; others have criticized the strict content policy for limiting creativity.

See Related: OpenAI Reveals “Sora”: A Text-to-Video AI Model Set to Change The Generative AI Landscape

The expansion of Imagen 3's availability coincides with the release of Grok-2, another AI model, developed by X. Notably, Grok-2 has much more relaxed filters, which has invited many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is “capable of generating images with even better detail, richer lighting, and fewer distracting artifacts” compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

[Post: “Google Makes Imagen 3 Available To US Users,” published 2024-08-23]

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded “Made By Google” event.

“Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini,” the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time, offering solutions or generating answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

[Post: “Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations,” published 2024-08-15]

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues and assuring users that the company has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the stated aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken.”

The post goes on to elaborate on some of the corrections made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

[Post: “Google Improves AI Overviews In Light Of Recent Controversy,” published 2024-06-10]

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.


[Post: “Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research,” last modified 2024-09-14]





AlphaProteo is trained on data from the Protein Data Bank. It also incorporates “more than 100 million predicted structures” from Google's other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is still in development.


See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s broader endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams at Google: the Protein Design team and the Wet Lab team. The model is still in development.<\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.<\/p>\n\n\n\n

So far, the response to the new AI has been mixed<\/a>. Some users have highlighted its improved textures and attention to detail, while others have criticized the strict content policy for limiting creativity.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, a rival AI model from Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed content filters, which has led to many comparisons between the two.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers, and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues, assuring users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter), predictably inviting scrutiny of the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race.<\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, \u201cWe're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms, including GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via a Hugging Face Space demo. The launch of PaliGemma coincides with that of other Google AI tools, such as Gemma 2 and Gemini 1.5 Flash.<\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

Google Makes Imagen 3 Available To US Users

American tech giant Google has released the Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free for all users in the US. It is reported to bring "Google's state of the art image generative AI capabilities to application developers."

In a research paper accompanying the release, Google states, "We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts." The paper details quality and safety considerations surrounding the product and describes various user experiences.

So far, the response to the new AI has been mixed. Some users highlight its improved texture and attention to detail; others have criticized the strict content policy for limiting creativity.

See Related: OpenAI Reveals "Sora": A Text-to-Video AI Model Set To Change The Generative AI Landscape

The expansion of Imagen 3's availability coincides with the release of Grok-2, a rival AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has prompted many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like similar AI models, it generates images from text prompts. To stand out from the competition, Google promised that the tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini," the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time, offering solutions or answers to a given question, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where the feature was not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms including GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.





Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, the technology has the “potential for advancing drug design, disease understanding, and more”.

“Today, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research”, the company stated in a blog post.

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states the technology can aid in understanding and properly diagnosing human diseases.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

The reveal of AlphaProteo is in keeping with Google’s endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.

AlphaProteo was trained on data from the Protein Data Bank. It also incorporates “more than 100 million predicted structures” from Google’s other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams within Google: the Protein Design team and the Wet Lab team. The model is currently still in development.

Google Makes Imagen 3 Available To US Users

American tech giant Google has released the Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free to use for all users in the US. It is reported to bring “Google's state of the art image generative AI capabilities to application developers.”

In a research paper accompanying the release, Google states, “We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.” The paper details quality and safety concerns regarding the product and describes various user experiences.

So far, the response to the new AI has been mixed. Some users highlight its improved texture and better attention to detail; others have criticized the strict content policy for limiting creativity.

See Related: OpenAI Reveals “Sora”: A Text-to-Video AI Model Set To Change The Generative AI Landscape

The expansion of Imagen 3’s availability coincides with the release of Grok-2, another AI model, developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like similar AI models, it generates images from text prompts. To stand out from the competition, Google promised that the new tool is “capable of generating images with even better detail, richer lighting, and fewer distracting artifacts” compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded “Made By Google” event.

“Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini”, the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or generate answers to a given question, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims to have fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google’s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to users’ queries; its purpose is to enhance the user experience and provide better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google’s products, and experts have questioned Google’s ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we’ve taken.”

The post goes on to elaborate on some of the corrections the company has made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from both the SigLIP (Sigmoid Loss for Language-Image Pre-training) vision model and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.

Youtube Shorts To Harness The Power Of Generative AI By Integrating Google’s VEO Video Generator

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create 4 images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans to label Shorts that feature AI-generated content.

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during its keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers, and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to everyone in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM from Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding text in images. Google further added, \u201cWe're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via a Hugging Face Space demo. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};




Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for Shorts via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two of Google\u2019s research teams: the Protein Design team and the Wet Lab team. The model is currently still in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

So far, the response to the new AI has been mixed<\/a>. Some users have highlighted its improved textures and better attention to detail. Others have criticized its strict content policy, arguing that it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, an AI model developed by Elon Musk\u2019s xAI and available on X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \u201cWe're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI responds in real time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};



Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, this new technology has the “potential for advancing drug design, disease understanding, and more”.

“Today, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research”, the company stated in a blog post.

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

The reveal of AlphaProteo is in keeping with Google’s endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.

AlphaProteo was trained on data from the Protein Data Bank. It also incorporates “more than 100 million predicted structures” from Google’s other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams at Google: the Protein Design team and the Wet Lab team. Currently, the model is still in development.

Google Makes Imagen 3 Available To US Users

American tech giant Google has released its Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free for all users in the US. The release is reported to bring “Google's state of the art image generative AI capabilities to application developers.”

In a research paper accompanying the release, Google states, “We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.” The paper details quality and safety considerations regarding the product and describes various user experiences.

So far, the response to the new AI has been mixed. Some users highlight its improved textures and better attention to detail; others have criticized the strict content policy for limiting creativity.

See Related: OpenAI Reveals “Sora”: A Text-to-Video AI Model Set to Change The Generative AI Landscape

The expansion of Imagen 3’s availability coincides with the release of Grok-2, a rival AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has invited many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that the new tool is “capable of generating images with even better detail, richer lighting, and fewer distracting artifacts” compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded “Made By Google” event.

“Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini”, the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or answer questions, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google’s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and providing better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google’s products, and experts have questioned Google’s ability to keep pace with its competitors in the generative AI race.

Google responded via a blog release, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we’ve taken.”

The post goes on to elaborate on some of the corrections, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.






Google Announces “World’s First” Deal To Purchase Nuclear Energy To Power Its AI Ambitions

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies, such as Microsoft, have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes this project will provide 24/7 carbon-free energy for its AI technologies and data centers.

Speaking on the impact this deal could have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, “The grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone”.

Google has not disclosed the location of the power plants or the financial details of the agreement.

Youtube Shorts To Harness The Power Of Generative AI By Integrating Google’s VEO Video Generator

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google’s Veo to create backgrounds for their Shorts.

“We’ll start integrating Google DeepMind’s most capable model for generating video, Veo, into YouTube Shorts later this year”, the post stated.

Google also confirmed this development, stating, “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

The videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans on labeling Shorts that feature AI-generated content.

Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, this new technology has the “potential for advancing drug design, disease understanding, and more”.

“Today, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research”, the company stated in a blog post.

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

The reveal of AlphaProteo is in keeping with Google’s endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.

AlphaProteo was trained using data from the Protein Data Bank. It also incorporates “more than 100 million predicted structures” from Google’s other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is still in development.

Google Makes Imagen 3 Available To US Users

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was available only to select Vertex AI subscribers, but the tool is now free to use for all users in the US. The new tool is reported to bring “Google’s state of the art image generative AI capabilities to application developers.”

In a research paper accompanying the release, Google states, “We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.” The paper details the quality and safety concerns regarding the product and describes various user experiences.

Currently, the response to the new AI has been mixed. Some users have highlighted its improved texture and better attention to detail, while others have criticized the strict content policy as limiting creativity.

See Related: OpenAI Reveals “Sora”: A Text-to-Video AI Model Set to Change The Generative AI Landscape

The expansion of Imagen 3’s availability coincides with the release of Grok-2, another AI model, developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is “capable of generating images with even better detail, richer lighting, and fewer distracting artifacts” compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google’s New AI Feature That Allows Real-Time Conversations

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded “Made By Google” event.

“Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini”, the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes this feature will replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google’s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring that the company has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overviews was to enhance user experience and provide better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google’s products. Experts have also questioned Google’s ability to keep pace with its competitors in the generative AI race.

Google responded via a blog release, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we’ve taken.”

The post goes on to elaborate on some of the corrections Google has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, like Gemma 2 and Gemini 1.5 Flash.





Google Announces “World’s First” Deal To Purchase Nuclear Energy To Power Its AI Ambitions

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035.

See Related: Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies, such as Microsoft, have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes this project will supply 24/7 carbon-free energy for its AI technologies and data centers.

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, “The grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone”.

Google has not disclosed the location of the power plants or the financial details of the agreement.

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create 4 images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company also claims to have fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overviews is to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};




\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter, Google.org\u2019s Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

Recent advancements in generative AI technology have caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will provide 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will generate a six-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues and assuring users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Powers is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n
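As a rough illustration of the fusion idea described above — a vision encoder's patch features are projected into the language model's embedding space and prepended to the text-token embeddings before decoding — here is a deliberately tiny, dependency-free sketch. Every name, dimension, and number in it is an illustrative assumption, not PaliGemma's actual code.

```python
# Toy sketch (not Google's implementation) of how a vision-language model
# can condition a language model on an image: project each vision-encoder
# patch vector into the LM embedding space, then prepend to the text tokens.

def project_image_features(patch_features, proj):
    # Linear projection (matrix-vector product, no bias) of each patch
    # vector from vision-encoder space into the LM's embedding space.
    dim_out = len(proj)
    return [
        [sum(proj[i][j] * patch[j] for j in range(len(patch)))
         for i in range(dim_out)]
        for patch in patch_features
    ]

def build_prefix(patch_features, text_embeddings, proj):
    # PaLI-style conditioning: image embeddings first, then text embeddings.
    return project_image_features(patch_features, proj) + text_embeddings

# 2 image patches with 3-dim features, projected into a 2-dim LM space.
patches = [[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]
proj = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]   # 2x3 projection matrix
text = [[0.5, 0.5]]                          # one text-token embedding
sequence = build_prefix(patches, text, proj)
print(sequence)  # [[1.0, 2.0], [0.0, 1.0], [0.5, 0.5]]
```

The decoder then attends over this combined sequence, which is what lets a single model answer questions about both the image and the prompt.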

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding text in images. Google further added, \u201cWe're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};



Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electricity. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will supply 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n
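The Protein Data Bank mentioned above distributes structures as fixed-column text records. As a rough illustration of what that training data looks like — not anything from AlphaProteo itself, and with a sample line fabricated for the example — a classic PDB `ATOM` record can be parsed by column position:

```python
# Illustrative only: parse one fixed-column PDB ATOM record.
# Slices follow the PDB format's column layout (0-indexed Python slices);
# the sample line below is made up for this example.

def parse_atom_record(line):
    return {
        "atom": line[12:16].strip(),      # atom name, columns 13-16
        "residue": line[17:20].strip(),   # residue name, columns 18-20
        "x": float(line[30:38]),          # orthogonal coordinates in angstroms,
        "y": float(line[38:46]),          # columns 31-54
        "z": float(line[46:54]),
    }

line = "ATOM      1  N   MET A   1      38.428  13.104   6.364"
record = parse_atom_record(line)
print(record)
# {'atom': 'N', 'residue': 'MET', 'x': 38.428, 'y': 13.104, 'z': 6.364}
```

Models trained on this corpus consume the atomic coordinates such records encode, typically after conversion to numerical tensors.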

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n
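For readers unfamiliar with the term, a latent diffusion model generates by starting from noise in a compact latent space and repeatedly denoising it, then decoding the result into pixels. The sketch below is a deliberately toy, non-neural stand-in for that sampling loop — every function and number is an illustrative assumption, not Imagen 3's actual code:

```python
# Toy sketch of the latent diffusion sampling idea: iteratively denoise
# a latent toward a (text-conditioned) target. Real models replace
# `denoise_step` with a learned neural network.

def denoise_step(latent, target, alpha=0.5):
    # Stand-in denoiser: move a fraction of the way toward the target.
    return [z + alpha * (t - z) for z, t in zip(latent, target)]

def sample(target, steps=10):
    latent = [0.0] * len(target)   # "pure noise" placeholder
    for _ in range(steps):
        latent = denoise_step(latent, target)
    return latent

target = [1.0, -2.0, 0.5]          # pretend encoding of a text prompt
latent = sample(target)
# After enough steps the latent converges toward the conditioning target;
# a real model would then decode it into an image.
print([round(z, 3) for z in latent])
```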

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by Elon Musk\u2019s xAI and available on X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n
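The interruption behavior described above is sometimes called "barge-in": the response is streamed piece by piece, and generation stops cleanly the moment the user cuts in. A minimal sketch of that pattern, with every name and the simulated interruption being illustrative assumptions rather than Google's implementation:

```python
# Toy sketch (not Google's code) of barge-in: stream a response token by
# token, polling for a user interruption before emitting each token.

def stream_response(tokens, interrupted):
    # `interrupted` is a callable polled before each token is emitted.
    delivered = []
    for token in tokens:
        if interrupted():
            break          # user barged in: abandon the rest mid-response
        delivered.append(token)
    return delivered

# Simulate a user interrupting after two tokens have been delivered.
state = {"count": 0}
def user_interrupts():
    state["count"] += 1
    return state["count"] > 2

out = stream_response(["The", "weather", "today", "is", "sunny"], user_interrupts)
print(out)  # ['The', 'weather']
```

A production system would poll a microphone's voice-activity signal instead of a counter, but the control flow is the same: generation is a loop that can exit between tokens.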

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections Google has made. These include better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on showing AI Overviews for queries where they were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n


This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d,<\/em> said Jen Carter, Google.org\u2019s Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Powers is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini," the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues and assured users that the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections Google has made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM, and integrates open components from both SigLIP (a sigmoid-loss language-image pre-training model) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.


A Look At Google's $5.8 Million Commitment To AI In Africa

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed on Monday that the initiative would extend across sub-Saharan Africa to "empower individuals and organizations to leverage AI for economic growth and social impact."

This commitment further solidifies Google's larger ambition of accelerating Africa's digital transformation. In 2023, Google announced a $1 billion, five-year investment in Africa to support a range of initiatives, from improved connectivity to investment in startups.

See Related: Venom To Launch A Blockchain Hub With Kenyan Government

Google believes Africa has massive potential for developing AI. The company's report on the "Digital Opportunity of Africa" estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company aims to equip people with the skills and resources they need to build and use AI responsibly and effectively.

"We've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe," said Jen Carter, Google.org's Head of Tech and Volunteering.

Google Announces "World's First" Deal To Purchase Nuclear Energy To Power Its AI Ambitions

Google has announced a partnership with the energy company Kairos Power under which it will buy nuclear energy to power artificial intelligence development. "Today, we're building on these efforts by signing the world's first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power," the company confirmed in a blog post.

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors for Google. The reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electricity. The first of the reactors is planned to be online by 2030, with the rest set to be active by 2035.

See Related: Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company

Recent advances in generative AI technology have caused a surge in electricity demand, and other tech companies, such as Microsoft, have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes the project will provide 24/7 carbon-free energy for its AI technologies and data centers.

Speaking on the impact the deal could have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, "The grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone."

Google has not disclosed the location of the power plants or the financial details of the agreement.

YouTube Shorts To Harness The Power Of Generative AI By Integrating Google's Veo Video Generator

Social media company YouTube has announced plans to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google's Veo to create backgrounds for their Shorts.

"We'll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year," the post stated.

Google also confirmed the development, stating, "Over the next few months, we're bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen."

In 2023, YouTube introduced Dream Screen, an AI tool that lets users create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate "even more incredible video backgrounds" and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt; users can then choose one, and the AI will create a six-second clip in the same art style. This feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google's creations. YouTube also plans to label Shorts that feature AI-generated content.

Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, the new technology has the "potential for advancing drug design, disease understanding, and more."

"Today, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research," the company stated in a blog post.

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states the technology can aid in understanding and properly diagnosing human diseases.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

The reveal of AlphaProteo is in keeping with Google's endeavor to create AI tools that further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures, and it has also released AlphaMissense, which catalogs millions of genetic mutations.

AlphaProteo is trained on data from the Protein Data Bank and incorporates "more than 100 million predicted structures" from Google's other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams at Google: the Protein Design team and the Wet Lab team. Currently, the model is still in development.

Google Makes Imagen 3 Available To US Users

American tech giant Google has released the Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free for all users in the US. It is reported to bring "Google's state of the art image generative AI capabilities to application developers."

In a research paper accompanying the release, Google states, "We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts." The paper details quality and safety concerns regarding the product and describes various user experiences.

Currently, the response to the new AI has been mixed. Some users have highlighted its improved texture and better attention to detail, while others have criticized the strict content policy for limiting creativity.

See Related: OpenAI Reveals "Sora": A Text-to-Video AI Model Set to Change The Generative AI Landscape

The expansion of Imagen 3's availability coincides with the release of Grok-2, another AI model, developed by X. Notably, Grok-2 has much more relaxed filters, which has prompted many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like similar AI models, it generates images from text prompts. To stand out from the competition, Google promised that the new tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.


Introducing Gemini 2.0: Google's Most Capable Model That Can Power AI Agents

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web, with availability in the Gemini app to follow next year. Google is also testing Gemini 2.0 in AI Overviews, with a widespread rollout planned for early next year.


Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Additionally, Google is integrating Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion, 5-year project in Africa to support a range of initiatives, from improved connectivity to investment in startups.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d<\/em>, said Jen Carter, Head of Tech and Volunteering at Google.org.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will provide 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video-sharing platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create 4 images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to everyone in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> highlighting Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro on several key benchmarks, and the new model supports both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 in several of its products. These include Project Mariner, an experimental Chrome extension whose AI agent can browse the internet and complete online tasks (such as shopping) for the user, and Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversations. It should be noted that most of these agents are still in development and are accessible only to select users.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a five-year, $1 billion investment in Africa to support a range of initiatives, from improved connectivity to funding for startups.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company aims to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d<\/em>, said Jen Carter, Google.org\u2019s Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, it will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

Recent advancements in generative AI technology have caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes this project will supply 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details quality and safety considerations regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by xAI. Notably, Grok-2 has much more relaxed content filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, it generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI responds in real time, offering solutions or answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections Google has made. These include better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on showing AI Overviews for queries where they were not proving helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding text in images. Google further added, \u201cWe're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging codes. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Powers is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, the new technology has the "potential for advancing drug design, disease understanding, and more".

"Today, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research", the company stated in a blog post.

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states the technology can aid in understanding and properly diagnosing human diseases.

See Related: Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race

The reveal of AlphaProteo is in keeping with Google's endeavor to create AI tools for health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. It has also released AlphaMissense, which catalogs millions of genetic mutations.

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates "more than 100 million predicted structures" from Google's other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams at Google: the Protein Design team and the Wet Lab team. Currently, the model is still in development.

Google Makes Imagen 3 Available To US Users

American tech giant Google has released the Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free for all users in the US. It is reported to bring "Google's state of the art image generative AI capabilities to application developers."

In a research paper accompanying the release, Google states, "We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts." The paper details quality and safety concerns regarding the product and describes various user experiences.

So far, the response to the new AI has been mixed. Some users highlight its improved texture and better attention to detail, while others have criticized the strict content policy for limiting creativity.

See Related: OpenAI Reveals "Sora": A Text-to-Video AI Model Set to Change The Generative AI Landscape

The expansion of Imagen 3's availability coincides with the release of Grok-2, a competing AI model from Elon Musk's xAI. Notably, Grok-2 ships with much more relaxed filters, which has drawn many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like similar AI models, it generates images from text prompts. To stand out from the competition, Google promised that the new tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini", the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or answer a given question, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims to have fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers, and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assuring that the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and providing better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products. Experts have also questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog release, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)", the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms, including GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space demo. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging codes. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

A Look At Google's $5.8 Million Commitment To AI In Africa

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed on Monday that the initiative would extend across sub-Saharan Africa to "empower individuals and organizations to leverage AI for economic growth and social impact."

This commitment further solidifies Google's larger ambition of accelerating Africa's digital transformation. In 2023, Google announced a $1 billion, five-year investment in Africa to support a range of initiatives, from improved connectivity to startup funding, aimed at boosting the continent's digital transformation.

See Related: Venom To Launch A Blockchain Hub With Kenyan Government

Google believes Africa has massive potential for developing AI. The company's report on the "Digital Opportunity of Africa" estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company aims to equip people with the skills and resources they need to build and use AI responsibly and effectively.

"We've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe", said Jen Carter, Google.org's Head of Tech and Volunteering.

Google Announces "World's First" Deal To Purchase Nuclear Energy To Power Its AI Ambitions

Google has announced a partnership with energy company Kairos Power under which the tech giant will buy nuclear energy to power artificial intelligence development. "Today, we're building on these efforts by signing the world's first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power", the company confirmed in a blog post.

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, it will build multiple small modular reactors for Google. These reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electricity. The first of the reactors is planned to be online by 2030, with the rest set to be active by 2035.

See Related: Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company

The recent advancement of generative AI has caused a surge in electricity demand. Other tech companies, such as Microsoft, have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes the project will supply 24/7 carbon-free energy for its AI technologies and data centers.

Speaking on the impact the deal could have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, "The grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone".

Google has not disclosed the location of the power plants or the financial details of the agreement.

Youtube Shorts To Harness The Power Of Generative AI By Integrating Google's VEO Video Generator

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google's Veo to create backgrounds for their Shorts.

"We'll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year", the post stated.

Google also confirmed this development, stating, "Over the next few months, we're bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen".

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate "even more incredible video backgrounds" and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt; users then choose one of the images, and the AI generates a 6-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google's creations. YouTube also plans on labeling Shorts that feature AI-generated content.


With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversations. It should be noted that most of these agents are still in development and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for AI development. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d,<\/em> said Jen Carter, Google.org's Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies, such as Microsoft, have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes this project will provide 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for Shorts via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. This feature provides AI-generated answers to user queries. The purpose of AI Overviews was to enhance the user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n


American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews, with plans for a wider rollout early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d,<\/em> said Jen Carter, Google.org's Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for Shorts via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create 4 images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};



Introducing Gemini 2.0: Google's Most Capable Model That Can Power AI Agents

American tech company Google has released Gemini 2.0, the latest version of the company's flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model yet, and it is set to power AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.

"Today we're excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant," said Sundar Pichai, CEO of Google.

With Gemini 2.0, Google is entering what it calls the "era of AI agents". According to Demis Hassabis, CEO of Google DeepMind, "Gemini 2.0 Flash's native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences."

Gemini 2.0 Highlights

Google released a blog post highlighting Gemini 2.0's enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro on several key benchmarks, and the new model supports both multimodal inputs and multimodal outputs.

Additionally, Google is building Gemini 2.0 into several of its products. These include Project Mariner, an experimental Chrome extension whose AI agent can browse the internet and complete online tasks (such as shopping) for the user, and Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to assist gamers by suggesting strategies in real-time conversations. It should be noted that most of these agents are still in development and are accessible to select users only.

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users, and a chat-optimized version is accessible on desktop and mobile web, with availability in the Gemini app planned for next year. Google is also testing Gemini 2.0 in AI Overviews, with a widespread rollout planned for early next year.

A Look At Google's $5.8 Million Commitment To AI In Africa

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed on Monday that the initiative would extend across sub-Saharan Africa to "empower individuals and organizations to leverage AI for economic growth and social impact."

This commitment further solidifies Google's larger ambition of accelerating Africa's digital transformation. In 2023, Google announced a $1 billion, five-year investment in Africa to support a range of initiatives, from improved connectivity to investment in startups.

Google believes Africa has massive potential for AI development. The company's report on the "Digital Opportunity of Africa" estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company aims to equip people with the skills and resources they need to build and use AI responsibly and effectively.

"We've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe," said Jen Carter, Google.org's Head of Tech and Volunteering.

Google Announces "World's First" Deal To Purchase Nuclear Energy To Power Its AI Ambitions

Google has announced a partnership with the energy company Kairos Power under which the tech giant will buy nuclear energy to power its artificial intelligence development. "Today, we're building on these efforts by signing the world's first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power," the company confirmed in a blog post.

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, it will build Google multiple small modular reactors (SMRs). These reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electricity. The first of the reactors is planned to be online by 2030, with the rest set to be active by 2035.

Recent advances in generative AI have caused a surge in electricity demand, and other tech companies such as Microsoft have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes the project will supply 24/7 carbon-free energy for its AI technologies and data centers.

Speaking on the deal's potential impact on AI, Michael Terrell, senior director of Energy and Climate at Google, said, "The grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone."

Google has not disclosed the locations of the power plants or the financial details of the agreement.

YouTube Shorts To Harness The Power Of Generative AI By Integrating Google's Veo Video Generator

Social media company YouTube has announced plans to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google's Veo model to create backgrounds for their Shorts.

"We'll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year," the post stated.

Google also confirmed the development, stating, "Over the next few months, we're bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen."

In 2023, YouTube introduced Dream Screen, an AI tool that lets users create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate "even more incredible video backgrounds" and visualize improbable concepts.

Additionally, YouTube plans to add a feature that generates six-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt; the user can then choose one, and the AI will create a six-second clip in the same art style. This feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google's creations, and YouTube also plans to label Shorts that feature AI-generated content.

Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research

Google recently presented a new AI system called AlphaProteo, designed for health and biological research. According to Google, the new technology has the "potential for advancing drug design, disease understanding, and more".

"Today, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research," the company stated in a blog post.

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in fields including drug development, cancer treatment, and cell and tissue imaging. Google also states the technology can aid in understanding and properly diagnosing human diseases.

The reveal of AlphaProteo is in keeping with Google's endeavor to create AI tools for health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures, and it has also released AlphaMissense, which catalogs millions of genetic mutations.

AlphaProteo is trained on data from the Protein Data Bank and incorporates "more than 100 million predicted structures" from Google's other AI systems, including AlphaFold.

AlphaProteo was developed by two research teams at Google: the Protein Design team and the Wet Lab team. The model is currently still in development.

Google Makes Imagen 3 Available To US Users

American tech giant Google has released the Imagen 3 image generator to the public. Previously available only to select Vertex AI subscribers, the tool is now free to use for all users in the US. The release is reported to bring "Google's state of the art image generative AI capabilities to application developers."

In a research paper accompanying the release, Google states, "We introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts." The paper details quality and safety concerns regarding the product and describes various user experiences.

So far, the response to the new AI has been mixed. Some users have highlighted its improved textures and better attention to detail, while others have criticized the strict content policy for limiting creativity.

The expansion of Imagen 3's availability coincides with the release of Grok-2, an AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has prompted many comparisons.

Imagen 3 was originally announced during the Google I/O event in May. Like other similar AI models, it generates images from text prompts. To stand out from the competition, Google promised that the new tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini," the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or generate answers to a given question, and users can interrupt it mid-response to change the topic or explore a particular point further.

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assuring that the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the stated purpose of enhancing the user experience and providing better search results.

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog release, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections made. These include better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where the feature was not proving helpful.


Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK

Google has previously updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.

Google's updated policies will take effect on January 15, 2025, giving crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions to their advertising strategies.

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0 Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabi, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging codes. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create 4 images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is integrating Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging code. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Powers is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, a rival AI model from Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed content filters, which has led to many comparisons between the two.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims to have fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race.<\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a smaller-scale VLM from Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is integrating Gemini 2.0 into several of its products. These include Project Mariner, an experimental Chrome extension: an AI agent that can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversation. It should be noted that most of these agents are still in development and accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment builds on Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. Google has previously pledged $1 billion over five years to support a range of initiatives across the continent, from improved connectivity to investment in startups.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company aims to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d,<\/em> said Jen Carter, Google.org\u2019s Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors use a molten-salt cooling system and graphite-pebble fuel to carry heat to a steam turbine, generating electricity. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

Recent advancements in generative AI technology have caused a surge in electricity demand. Other tech companies, such as Microsoft, have also turned to nuclear power as a clean, round-the-clock energy source. Google hopes this project will provide 24\/7 carbon-free energy to power its AI technologies and data centers.<\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts.<\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>.<\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts.<\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will generate a six-second clip in the same art style. However, this feature will not be available until 2025.<\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields, including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n



Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to assist gamers by suggesting strategies through real-time conversation. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion, five-year investment in the region to support a range of initiatives, from improved connectivity to startup funding.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe,\u201d<\/em> said Jen Carter, Google.org\u2019s Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will provide 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will generate four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind to target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0 Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabi, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging codes. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

Recent advancements in generative AI technology have caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will supply 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video-sharing platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts\u201d.<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Powers is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind; an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overviews was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Advertisers must ensure compliance with UK regulations and obtain Google\u2019s approval before launching campaigns. This policy overhaul comes amid growing efforts by global regulators to combat misleading or unauthorized crypto promotions.<\/p>\n\n\n\n

See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging code. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter, Google.org Head of Tech and Volunteering.<\/a><\/p>\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n



Google\u2019s updated policy<\/a> now mandates Financial Conduct Authority (FCA) registration for advertisers promoting crypto exchanges or software wallets in the UK. These ads will only be allowed if advertisers meet all local regulatory requirements and secure Google certification.<\/p>\n\n\n\n

Additionally, Google will allow ads for hardware wallets designed to store private crypto keys, provided they do not offer services such as buying, selling, or trading digital assets.<\/p>\n\n\n\n

Advertisers must ensure compliance with UK regulations and obtain Google\u2019s approval before launching campaigns. This policy overhaul comes amid growing efforts by global regulators to combat misleading or unauthorized crypto promotions.<\/p>\n\n\n\n

See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d,<\/em> said Jen Carter, Google.org\u2019s Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will generate four images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers, and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n\n\n\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Google\u2019s updated policy<\/a> now mandates Financial Conduct Authority (FCA) registration for advertisers promoting crypto exchanges or software wallets in the UK. These ads will only be allowed if advertisers meet all local regulatory requirements and secure Google certification.<\/p>\n\n\n\n

Additionally, Google will allow ads for hardware wallets designed to store private crypto keys, provided they do not offer services such as buying, selling, or trading digital assets.<\/p>\n\n\n\n

Advertisers must ensure compliance with UK regulations and obtain Google\u2019s approval before launching campaigns. This policy overhaul comes amid growing efforts by global regulators to combat misleading or unauthorized crypto promotions.<\/p>\n\n\n\n

See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0 Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabi, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging codes. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a five-year, $1 billion investment in Africa to support a range of initiatives, from improved connectivity to startup funding, to help boost the continent\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for AI development. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company aims to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. Under the deal, the tech company will buy nuclear energy to power its artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors use a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electricity. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will first generate four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025.<\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams within Google: the Protein Design team and the Wet Lab team. The model is still in development.<\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers, and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n\n\n\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overviews was to enhance user experience and provide better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a smaller-scale VLM from Google Research. It integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

Google\u2019s move reflects increasing scrutiny of crypto promotions worldwide as regulators aim to protect consumers in the volatile digital asset market.<\/p>\n\n\n\n

Google\u2019s updated policy<\/a> now mandates Financial Conduct Authority (FCA) registration for advertisers promoting crypto exchanges or software wallets in the UK. These ads will only be allowed if advertisers meet all local regulatory requirements and secure Google certification.<\/p>\n\n\n\n

Additionally, Google will allow ads for hardware wallets designed to store private crypto keys, provided they do not offer services such as buying, selling, or trading digital assets.<\/p>\n\n\n\n

Advertisers must ensure compliance with UK regulations and obtain Google\u2019s approval before launching campaigns. This policy overhaul comes amid growing efforts by global regulators to combat misleading or unauthorized crypto promotions.<\/p>\n\n\n\n

See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging codes. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has a massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the Sub-saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Powers. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Powers is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues, assuring users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

Starting in January 2025, crypto advertisers in the United Kingdom will face stricter rules as Google updates its advertising policies. The new requirements are designed to align with local regulations and target crypto exchanges, software wallets, and hardware wallets. <\/p>\n\n\n\n

Google\u2019s move reflects increasing scrutiny of crypto promotions worldwide as regulators aim to protect consumers in the volatile digital asset market.<\/p>\n\n\n\n

Google\u2019s updated policy<\/a> now mandates Financial Conduct Authority (FCA) registration for advertisers promoting crypto exchanges or software wallets in the UK. These ads will only be allowed if advertisers meet all local regulatory requirements and secure Google certification.<\/p>\n\n\n\n

Additionally, Google will allow ads for hardware wallets designed to store private crypto keys, provided they do not offer services such as buying, selling, or trading digital assets.<\/p>\n\n\n\n

Advertisers must ensure compliance with UK regulations and obtain Google\u2019s approval before launching campaigns. This policy overhaul comes amid growing efforts by global regulators to combat misleading or unauthorized crypto promotions.<\/p>\n\n\n\n

See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

\u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

Additionally, Google is implementing Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers with debugging code. Gemini 2.0 is also set to help gamers by generating strategies in real-time conversations. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

\u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe.\u201d,<\/em> said Jen Carter, Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build Google multiple small modular reactors (SMRs). These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030 with the rest set to be active by 2035. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

\u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense, which catalogs millions of genetic mutations.<\/p>\n\n\n\n

AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

<\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant issued a statement addressing the issues, assuring users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n
  • Advertisers promoting crypto exchanges or software wallets in the UK must register with the FCA.<\/li>\n\n\n\n
  • The policy overhaul is part of global regulatory efforts to combat misleading crypto promotions.<\/li>\n<\/ul>\n\n\n\n

    Starting in January 2025, crypto advertisers in the United Kingdom will face stricter rules as Google updates its advertising policies. The new requirements are designed to align with local regulations and target crypto exchanges, software wallets, and hardware wallets. <\/p>\n\n\n\n

    Google\u2019s move reflects increasing scrutiny of crypto promotions worldwide as regulators aim to protect consumers in the volatile digital asset market.<\/p>\n\n\n\n

    Google\u2019s updated policy<\/a> now mandates Financial Conduct Authority (FCA) registration for advertisers promoting crypto exchanges or software wallets in the UK. These ads will only be allowed if advertisers meet all local regulatory requirements and secure Google certification.<\/p>\n\n\n\n

    Additionally, Google will allow ads for hardware wallets designed to store private crypto keys, provided they do not offer services such as buying, selling, or trading digital assets.<\/p>\n\n\n\n

    Advertisers must ensure compliance with UK regulations and obtain Google\u2019s approval before launching campaigns. This policy overhaul comes amid growing efforts by global regulators to combat misleading or unauthorized crypto promotions.<\/p>\n\n\n\n
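The eligibility rules described above reduce to a small decision table. The Python sketch below models them purely for illustration — the class and field names are hypothetical and are not Google Ads API objects.

```python
from dataclasses import dataclass

# Hypothetical model of the UK rules summarized above. Field names are
# invented for illustration; they are not Google Ads API objects.
@dataclass
class CryptoAd:
    product: str            # "exchange", "software_wallet", or "hardware_wallet"
    fca_registered: bool    # advertiser is registered with the FCA
    google_certified: bool  # advertiser holds Google's certification
    offers_trading: bool    # the product also offers buy/sell/trade services

def uk_ad_allowed(ad: CryptoAd) -> bool:
    if ad.product in ("exchange", "software_wallet"):
        # Exchanges and software wallets need FCA registration and certification.
        return ad.fca_registered and ad.google_certified
    if ad.product == "hardware_wallet":
        # Hardware wallets qualify only if they don't offer trading services.
        return ad.google_certified and not ad.offers_trading
    return False

print(uk_ad_allowed(CryptoAd("exchange", True, True, False)))        # True
print(uk_ad_allowed(CryptoAd("hardware_wallet", True, True, True)))  # False
```

Treating certification as required for hardware-wallet ads as well is a reading of the "obtain Google's approval" requirement above, not a quoted rule.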

    See Related:<\/em><\/strong> A Year On Out, Credit Suisse's Fallout And Regulatory Challenges<\/a><\/p>\n\n\n\n

    Regulatory Scrutiny Globally<\/strong><\/p>\n\n\n\n

    The FCA recently warned against a Solana-based project called \u201cRetardio,\u201d highlighting concerns about unauthorized promotions that could leave consumers vulnerable to financial losses, Cointelegraph reported<\/a>.<\/p>\n\n\n\n

    Similarly, Nigeria\u2019s Securities and Exchange Commission introduced stricter marketing rules for crypto products. Influencers and service providers in Nigeria now require explicit permission from the SEC before promoting digital assets.<\/p>\n\n\n\n

    Google\u2019s decision reflects a broader trend in the financial sector. As the crypto market matures, governments and regulators are pushing for increased oversight to protect consumers. Misleading advertisements have often drawn criticism for downplaying the risks associated with investing in digital assets.<\/p>\n\n\n\n

    Previously, Google updated its crypto advertising policies, but this latest move highlights the growing complexity of compliance in global markets. Advertisers must now navigate a patchwork of local regulations to ensure their campaigns remain legitimate.<\/p>\n\n\n\n

    Google\u2019s updated policies will take effect on January 15, 2025, offering crypto businesses a clear timeline to adapt. As regulators continue to tighten their grip, companies must stay proactive to avoid disruptions in their advertising strategies.<\/p>\n","post_title":"Google Tightens Crypto Ad Rules, FCA Registration Now Mandatory In UK","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-tightens-crypto-ad-rules-fca-registration-now-mandatory-in-uk","to_ping":"","pinged":"","post_modified":"2024-12-29 01:38:20","post_modified_gmt":"2024-12-28 14:38:20","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19945","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19917,"post_author":"17","post_date":"2024-12-19 21:51:13","post_date_gmt":"2024-12-19 10:51:13","post_content":"\n

    American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

    \u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

    With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

    See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

    Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

    Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

    Additionally, Google is integrating Gemini 2.0 into several of its products. These include Project Mariner, an experimental Chrome extension whose AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that helps programmers debug code. Gemini 2.0 is also set to help gamers by suggesting strategies in real-time conversation. It should be noted that most of these agents are still in development and accessible to select users only.<\/p>\n\n\n\n
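The "agentic" pattern these products share — an agent working through a task by calling tools step by step — can be sketched in a few lines. Everything below is hypothetical and invented for illustration; it is not how Project Mariner or Jules is actually built.

```python
# Illustrative sketch of an agent loop: execute each planned step with the
# named tool and collect a log. All names here are hypothetical.
def run_agent(plan, tools):
    """Run every step in the plan through its tool; return an action log."""
    log = []
    for step in plan:
        result = tools[step["tool"]](step["arg"])  # e.g. browse, add_to_cart
        log.append((step["tool"], result))
    return log

tools = {
    "browse": lambda url: f"opened {url}",
    "add_to_cart": lambda item: f"added {item}",
}
plan = [
    {"tool": "browse", "arg": "shop.example.com"},
    {"tool": "add_to_cart", "arg": "coffee beans"},
]
print(run_agent(plan, tools))
# [('browse', 'opened shop.example.com'), ('add_to_cart', 'added coffee beans')]
```

A real agent would have the model generate and revise the plan itself from observations; this sketch fixes the plan up front to keep the control flow visible.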

    Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews with plans of widespread rollouts early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

    American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

    This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

    Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

    \u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe,\u201d<\/em> said Jen Carter, Google.org Head of Tech and Volunteering.<\/a><\/p>\n\n\n\n

    Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

    Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

    See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

    The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will allow 24\/7 carbon-free energy to further its AI technologies and data centers. <\/p>\n\n\n\n

    Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

    Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

    Video-sharing platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

    \u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

    Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

    In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

    See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

    Additionally, YouTube plans to add a feature that generates 6-second video clips with the help of Veo. The AI will first create 4 images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

    The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

    Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

    \u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

    AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

    See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

    The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

    AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

    AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

    <\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

    American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

    In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

    Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

    See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

    The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

    Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

    Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


    American tech company Google has released Gemini 2.0, the latest version of the company\u2019s flagship AI model. Gemini 2.0 is reported to be Google's strongest AI model. It is set to power various AI agents that can autonomously perform tasks such as online shopping, browsing, and gaming.<\/p>\n\n\n\n

    \u201cToday we\u2019re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet. With new advances in multimodality \u2014 like native image and audio output \u2014 and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant\u201d,<\/em><\/strong> said Sundar Pichai, CEO of Google. <\/p>\n\n\n\n

    With Gemini 2.0, Google is entering what it refers to as the \u201cera of AI agents\u201d. According to Demis Hassabis, CEO of Google DeepMind, \u201cGemini 2.0 Flash\u2019s native user interface action-capabilities, along with other improvements, all work in concert to enable a new class of agentic experiences\u201d<\/em><\/strong>. <\/p>\n\n\n\n

    See Related: <\/em><\/strong>Google Announces Gemini Flash As It Attempts To Top The Generative AI Race<\/a><\/p>\n\n\n\n

    Gemini 2.0 Highlights<\/strong><\/h2>\n\n\n\n

    Google released a blog post<\/a> that highlights Gemini 2.0\u2019s enhanced capabilities. Gemini 2.0 outperforms older models such as Gemini 1.5 Flash and Gemini 1.5 Pro in several key benchmarks. This new model can support both multimodal inputs and multimodal outputs.<\/p>\n\n\n\n

    Additionally, Google is integrating Gemini 2.0 into several of its products. This includes Project Mariner, an experimental Chrome extension. This AI agent can browse the internet and complete online tasks (such as shopping) for the user. There is also Jules, an AI agent that can help programmers debug code. Gemini 2.0 is also set to help gamers by generating strategies through real-time conversation. It should be noted that most of these agents are still in the development stage and are accessible to select users only.<\/p>\n\n\n\n

    Currently, an experimental version of Gemini 2.0 Flash is available to all Gemini users. Users can also access a chat-optimized version of Gemini 2.0 Flash on desktop and mobile web. This will be made available to Gemini App users next year. Google is also testing Gemini 2.0 in AI Overviews, with plans for a wider rollout early next year.<\/p>\n","post_title":"Introducing Gemini 2.0: Google\u2019s Most Capable Model That Can Power AI Agents","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-2-0-googles-most-capable-model-that-can-power-ai-agents","to_ping":"","pinged":"","post_modified":"2024-12-19 21:51:21","post_modified_gmt":"2024-12-19 10:51:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19917","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19327,"post_author":"17","post_date":"2024-11-02 05:34:27","post_date_gmt":"2024-11-01 18:34:27","post_content":"\n

    American tech company Google recently announced a $5.8 million investment to facilitate AI growth in Africa. The company confirmed<\/a> on Monday that the initiative would extend across sub-Saharan Africa to \u201cempower individuals and organizations to leverage AI for economic growth and social impact.\u201d<\/em><\/strong> <\/p>\n\n\n\n

    This commitment further solidifies Google\u2019s larger ambition of accelerating Africa\u2019s digital transformation. In 2023, Google announced a $1 billion project over 5 years in Africa. The goal of this investment was to support a range of initiatives, from improved connectivity to investment in startups, to help boost Africa\u2019s digital transformation.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>Venom To Launch A Blockchain Hub With Kenyan Government<\/a><\/p>\n\n\n\n

    Google believes that Africa has massive potential for developing AI. The company\u2019s report on the \u201cDigital Opportunity of Africa\u201d estimated that AI could contribute up to $30 billion to the sub-Saharan economy by 2030. With this in mind, the company is aiming to equip people with the skills and resources they need to build and use AI responsibly and effectively.<\/p>\n\n\n\n

    \u201cWe've seen how AI can help social impact organizations accelerate and scale their work. The funding announced today will help organizations create AI tools that will benefit not only communities across Africa but across the globe\u201d,<\/em> said Jen Carter, Google.org Head of Tech and Volunteering.<\/a><\/p>\n","post_title":"A Look At Google\u2019s $5.8 Million Commitment To AI In Africa","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-look-at-googles-5-8-million-commitment-to-ai-in-africa","to_ping":"","pinged":"","post_modified":"2024-11-02 05:34:35","post_modified_gmt":"2024-11-01 18:34:35","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19327","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19266,"post_author":"17","post_date":"2024-10-26 22:16:11","post_date_gmt":"2024-10-26 11:16:11","post_content":"\n

    Google has recently announced a partnership with energy company Kairos Power. The deal will see the tech company buy nuclear energy to power artificial intelligence development. \u201cToday, we\u2019re building on these efforts by signing the world\u2019s first corporate agreement to purchase nuclear energy from multiple small modular reactors (SMRs) to be developed by Kairos Power\u201d<\/em><\/strong>, the company confirmed in a blog post<\/a>.<\/p>\n\n\n\n

    Kairos Power is a nuclear energy company based in California, USA. As part of the agreement, the company will build multiple small modular reactors (SMRs) for Google. These reactors utilize a molten-salt cooling system and graphite-pebble fuel to transport heat to a steam turbine, generating electrical energy. The first of these reactors is planned to be online by 2030, with the rest set to be active by 2035. <\/p>\n\n\n\n

    See Related: <\/em><\/strong>Using AI To Create A Sustainable Future: Microsoft Teams Up With Leading Energy Company<\/a><\/p>\n\n\n\n

    The recent advancement in generative AI technology has caused a surge in electricity demand. Other tech companies such as Microsoft have also turned to nuclear as a solution for a clean, round-the-clock power source. Google hopes this project will provide 24\/7 carbon-free energy to power its AI technologies and data centers. <\/p>\n\n\n\n

    Speaking on the impact this deal can have on AI, Michael Terrell, senior director of Energy and Climate at Google, said, \u201cThe grid needs new electricity sources to support AI technologies that are powering major scientific advances. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone\u201d<\/em><\/strong>. <\/p>\n\n\n\n

    Google has not disclosed the location of the power plants or the financial details of the agreement. <\/p>\n","post_title":"Google Announces \u201cWorld\u2019s First\u201d Deal To Purchase Nuclear Energy To Power Its AI Ambitions","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-worlds-first-deal-to-purchase-nuclear-energy-to-power-its-ai-ambitions","to_ping":"","pinged":"","post_modified":"2024-10-26 22:16:19","post_modified_gmt":"2024-10-26 11:16:19","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19266","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

    Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

    \u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

    Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

    In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for Shorts via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

    See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

    Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

    The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18622,"post_author":"17","post_date":"2024-09-14 20:21:09","post_date_gmt":"2024-09-14 10:21:09","post_content":"\n

    Google recently presented a new AI system called AlphaProteo designed for health and biological research. According to Google, this new technology has the \u201cpotential for advancing drug design, disease understanding, and more\u201d.<\/em><\/p>\n\n\n\n

    \u201cToday, we introduce AlphaProteo, our first AI system for designing novel, high-strength protein binders to serve as building blocks for biological and health research\u201d<\/em><\/strong>, the company stated in a blog post<\/a>. <\/p>\n\n\n\n

    AlphaProteo is claimed to be the first of its kind: an AI system that can generate novel proteins that bind with target molecules. Such binding proteins can help researchers in various fields including drug development, cancer treatment, and cell and tissue imaging. Google also states this technology can aid in understanding and properly diagnosing human diseases. <\/p>\n\n\n\n

    See Related: <\/em><\/strong>Google Announces DeepMind; Accelerating Its Attempt At Leading The AI Race<\/a><\/p>\n\n\n\n

    The reveal of AlphaProteo is in keeping with Google\u2019s endeavor to create AI tools to further health-related research. Earlier this year, the company launched AlphaFold 3, an AI model that can predict protein structures. They have also released AlphaMissense which catalogs millions of genetic mutations.<\/p>\n\n\n\n

    AlphaProteo is trained using data from the Protein Data Bank. It also incorporates \u201cmore than 100 million predicted structures\u201d<\/em> from Google\u2019s other AI systems, including AlphaFold.<\/p>\n\n\n\n

    AlphaProteo was developed by two research teams under Google: the Protein Design team and the Wet Lab team. Currently, the model is in development. <\/p>\n\n\n\n

    <\/p>\n","post_title":"Google Unveils AlphaProteo: An AI System Designed For Biology And Health Research","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-unveils-alphaproteo-an-ai-system-designed-for-biology-and-health-research","to_ping":"","pinged":"","post_modified":"2024-09-14 20:23:21","post_modified_gmt":"2024-09-14 10:23:21","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18622","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

    American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

    In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts\u201d.<\/em> The paper details the quality and safety concerns regarding the product and describes various user experiences.<\/p>\n\n\n\n

    Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

    The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

    Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.<\/p>\n\n\n\n

    Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

    Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

    \u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

    Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

    See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

    Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

    Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

    Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

    Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

    During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

    See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

    Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

    Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken\u201d.<\/em><\/p>\n\n\n\n

    The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

    American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

    \u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

    See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

    According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

    Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
