Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, Google's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The new model's context window has also grown to up to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.
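The 1-million-token figure can be sanity-checked against the examples given. A minimal sketch, assuming the common rule of thumb of roughly 0.75 English words per token (a heuristic, not Gemini's actual tokenizer):

```python
# Back-of-the-envelope check of the context-window examples above.
# 0.75 words/token is a rough English-text heuristic, not Gemini's tokenizer.
WORDS_PER_TOKEN = 0.75
CONTEXT_WINDOW_TOKENS = 1_000_000

def fits_in_window(word_count: int) -> bool:
    """Return True if a text of `word_count` words fits the 1M-token window (heuristic)."""
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW_TOKENS

# The article's "over 700,000 words" example lands at ~933,000 estimated tokens,
# just inside the window under this heuristic.
print(fits_in_window(700_000))
```

Under this heuristic, 700,000 words come out to about 933,000 tokens, which is consistent with Google's claim that such a document fits in a single request.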

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
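As a rough illustration, the pay-as-you-go rates above can be turned into a per-request cost estimate. This sketch hardcodes the lower-bound figures quoted in the article; actual billing tiers and rates are set by Google and may differ:

```python
# Hypothetical cost estimator using the lower-bound pay-as-you-go rates
# quoted above ($0.35 / 1M input tokens, $1.05 / 1M output tokens).
INPUT_RATE_PER_M = 0.35    # USD per 1 million input tokens
OUTPUT_RATE_PER_M = 1.05   # USD per 1 million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    cost = (input_tokens / 1_000_000) * INPUT_RATE_PER_M
    cost += (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M
    return round(cost, 6)

# Feeding in a ~1M-token document and getting an 8,000-token summary back
# would cost roughly 36 cents at the lower-bound rates.
print(estimate_cost(1_000_000, 8_000))
```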

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.


Google Improves AI Overviews In Light Of Recent Controversy

The post goes on to elaborate on some of the corrections Google has made. These include better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained by the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases of more than 30,000 lines, or over 700,000 words.

Gemini Flash is available in public preview in more than 200 regions across the globe, under two pricing plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
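To make the per-million-token pricing concrete, here is a minimal sketch of how a pay-as-you-go bill would be computed. The rates are the article's quoted preview prices at the low end of each range, and the function name is purely illustrative, not part of any Google SDK:

```python
# Rough cost estimate for a pay-as-you-go Gemini 1.5 Flash request.
# Rates (USD per 1 million tokens) are the article's quoted low-end preview
# prices and are assumptions here, not an official, current price list.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Return the estimated charge in USD for one request."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: summarizing a 700,000-token document into a 2,000-token answer.
print(round(estimate_cost(700_000, 2_000), 4))  # -> 0.2471
```

So even a request that fills most of the 1-million-token context window costs well under a dollar at these rates; the rate limits (360 RPM, 10,000 RPD), not price, are the binding constraint for high-volume use.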

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying: "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections: better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries that were not proving helpful.

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model is inspired by PaLI-3, a compact VLM from Google Research, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale." Although the new model is comparatively lightweight, it was still trained using the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also increased to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible in public preview in more than 200 regions across the globe. Currently, the model is available under two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
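As a rough illustration of those published pay-as-you-go rates, the sketch below estimates the cost of a single request using the lower pricing tier ($0.35 per million input tokens, $1.05 per million output tokens). The helper name and the example token counts are hypothetical; the higher tier ($0.70 / $2.10) would apply instead for larger prompts.

```python
def gemini_flash_cost(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.35,
                      output_rate: float = 1.05) -> float:
    """Estimate pay-as-you-go cost in USD at the lower published
    per-million-token rates. Rates are taken from the article;
    this helper itself is illustrative, not an official API."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# e.g. a 100,000-token prompt producing a 2,000-token reply
print(round(gemini_flash_cost(100_000, 2_000), 4))  # → 0.0371
```

At these rates, even a prompt that fills a sizeable fraction of the 1-million-token context window stays well under a dollar per request at the lower tier.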


Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections the company has made. These include better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.












Anthropic's New Claude 3.5 Sonnet: The Latest AI Chatbot Claiming To Be The Best

Anthropic has published data showing Claude 3.5 Sonnet beating its competitors in several industry benchmark tests. According to the company, the new model is a "marked improvement in grasping nuance, humor, and complex instructions." Several outlets have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. "Artifacts" is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude's creations in real time.

Users can try Claude 3.5 Sonnet for free on Claude's website, and Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan subscribers can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.


American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
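Because the pay-as-you-go pricing scales linearly with token counts, a back-of-the-envelope cost check is simple arithmetic. The sketch below is illustrative only; the function name and default rates (the lower end of the quoted range) are assumptions, not an official Google tool.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Return the USD cost for a request volume, given per-million-token rates.

    Defaults use the lower end of the quoted pay-as-you-go range
    ($0.35 per 1M input tokens, $1.05 per 1M output tokens).
    """
    return (input_tokens / 1_000_000) * input_rate \
        + (output_tokens / 1_000_000) * output_rate


# Example: 2M input tokens and 500k output tokens
# -> 2 * 0.35 + 0.5 * 1.05 = 1.225 USD
print(round(estimate_cost(2_000_000, 500_000), 4))  # 1.225
```

Swapping in the upper-end rates ($0.70 and $2.10) doubles the estimate, which brackets the real bill for any mix of requests.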






Mastercard To Use Generative AI For Card Fraud Detection

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a smaller-scale VLM, and integrates open components from SigLIP (a sigmoid-loss approach to language-image pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with it via a Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained using the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible in public preview in more than 200 regions across the globe, under two pricing plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
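The pay-as-you-go pricing is straightforward per-token arithmetic. A minimal cost-estimation sketch using the quoted rates; the assumption that the lower and higher ends of each range correspond to short- and long-context tiers is ours, not stated in the announcement:

```python
def gemini_flash_cost(input_tokens: int, output_tokens: int,
                      long_context: bool = False) -> float:
    """Estimate pay-as-you-go cost in USD for Gemini 1.5 Flash.

    Rates come from the reported $0.35-$0.70 (input) and $1.05-$2.10
    (output) per-million-token ranges; mapping the two ends of each
    range to a context-length tier is an assumption for illustration.
    """
    MILLION = 1_000_000
    input_rate = 0.70 if long_context else 0.35    # $ per 1M input tokens
    output_rate = 2.10 if long_context else 1.05   # $ per 1M output tokens
    return (input_tokens / MILLION) * input_rate + (output_tokens / MILLION) * output_rate

# e.g. a short-context request with 10,000 input and 2,000 output tokens
print(f"${gemini_flash_cost(10_000, 2_000):.6f}")
```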


Mastercard To Use Generative AI For Card Fraud Detection

Mastercard will use AI to scan "transaction data across billions of cards and millions of merchants", alerting banks and regulators when a card is suspected to be compromised. The AI is expected to predict the complete details of compromised cards, enabling banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its goals include doubling the detection rate of compromised cards, reducing false positives in fraudulent-transaction detection, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.
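The stated goals map onto two standard detection metrics: the true-positive (detection) rate and the false-positive rate. An illustrative calculation on toy data, not a representation of Mastercard's actual system:

```python
def detection_metrics(labels, predictions):
    """Compute detection (true-positive) rate and false-positive rate
    from ground-truth fraud labels and model flags (both 0/1)."""
    tp = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    positives = sum(labels)
    negatives = len(labels) - positives
    return tp / positives, fp / negatives

# toy batch: 4 fraudulent and 6 legitimate transactions
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
tpr, fpr = detection_metrics(labels, predictions)
print(tpr, fpr)  # 0.5 detection rate, ~0.167 false-positive rate
```

"Doubling the detection rate" means raising `tpr` (here from 0.5 toward 1.0) while "reducing false positives" means driving `fpr` down; the two usually trade off against each other.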

Anthropic's New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best

Anthropic, one of the world's leading AI developers, has announced its most proficient AI model yet: Claude 3.5 Sonnet, the first release in the Claude 3.5 family.

"Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations," Anthropic stated in a blog post. The model is also said to outperform previous Claude chatbots while costing less: it has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors on several industry benchmarks. According to Anthropic, the new model is a "marked improvement in grasping nuance, humor, and complex instructions". Several outlets have remarked on the advances over previous models, including operating twice as fast as Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

Alongside the new chatbot, Anthropic has released "Artifacts", a preview feature that opens a dedicated window where users can see, edit, and build upon Claude's creations in real time.

Users can try Claude 3.5 Sonnet for free on Claude's website, and Apple users can access it via the Claude iOS app. Claude Pro and Team subscribers get the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.
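Developers can also reach the model through Anthropic's Messages API. A sketch of assembling a request body for the new model; the payload shape follows Anthropic's documented API, while the model ID string is the launch-time identifier and may be superseded:

```python
def claude_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a Messages-API-style request body for Claude 3.5 Sonnet.

    The body mirrors Anthropic's documented request shape (model,
    max_tokens, messages); sending it requires an API key and the
    official SDK or an HTTP client, omitted here.
    """
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = claude_request("Summarize the benefits of a 200K context window.")
print(body["model"])
```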

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make AI Overviews available to everyone in the US. The feature provides AI-generated answers to user queries and was intended to enhance the user experience and deliver better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many have posted these bizarre search results on X (formerly Twitter), which has predictably drawn scrutiny of the quality of Google's products. Experts have also questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded in a blog post: "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections Google has made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were proving unhelpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and developers can also interact with it via a Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, Google's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although Flash is a comparatively lightweight model, it was trained using the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available in public preview in more than 200 regions across the globe, currently under two pricing plans. The free plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
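To put the quoted pay-as-you-go figures in context, here is a minimal sketch of the per-request cost arithmetic. The article quotes only the rate ranges; the assumption that the lower rate applies to shorter prompts and the higher rate to long-context (128K+) prompts is ours, for illustration only.

```python
# Illustrative cost calculator for the quoted Gemini Flash pay-as-you-go rates
# ($0.35-$0.70 per 1M input tokens, $1.05-$2.10 per 1M output tokens).
# The 128K-style "long_context" tier split is an assumption, not Google's
# published rate card.

def flash_request_cost(input_tokens: int, output_tokens: int,
                       long_context: bool = False) -> float:
    """Return the estimated USD cost of a single request."""
    input_rate = 0.70 if long_context else 0.35    # USD per 1M input tokens
    output_rate = 2.10 if long_context else 1.05   # USD per 1M output tokens
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 10,000-token prompt with a 1,000-token reply at the lower tier:
cost = flash_request_cost(10_000, 1_000)
print(f"${cost:.6f}")  # -> $0.004550
```

At these rates, high-volume workloads stay cheap until prompts grow large, which matches the model's stated positioning for high-frequency tasks at scale.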



Anthropic's New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model, "Claude 3.5 Sonnet", is the first in the Claude 3.5 family of AI models.

"Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations," Anthropic stated in a blog post. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a "marked improvement in grasping nuance, humor, and complex instructions". Several outlets have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. "Artifacts" is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude's creations in real time.

Users can try Claude 3.5 Sonnet for free on Claude's website, and Apple users can access it for free via the Claude iOS app. Claude Pro and Team plan members can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.
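The quoted Claude 3.5 Sonnet rates translate directly into per-request costs. A hedged sketch using only the figures in the article (the workload size is illustrative, chosen to fit within the stated 200K context window):

```python
# Cost of one hypothetical workload at the Claude 3.5 Sonnet rates quoted
# above: $3 per 1M input tokens, $15 per 1M output tokens. The 150K-input /
# 4K-output workload is an illustrative example, not from the article.

SONNET_INPUT_USD_PER_M = 3.0
SONNET_OUTPUT_USD_PER_M = 15.0

def sonnet_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * SONNET_INPUT_USD_PER_M
            + output_tokens * SONNET_OUTPUT_USD_PER_M) / 1_000_000

cost = sonnet_cost(150_000, 4_000)  # e.g. summarizing a 150K-token document
print(f"${cost:.2f}")               # 150000*3 + 4000*15 = 510000 -> $0.51
```

Note the 5x premium on output tokens, which is why long-document summarization (large input, small output) is relatively inexpensive at these rates.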





Mastercard To Use Generative AI For Card Fraud Detection

American payment card company Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card details of these compromised cards on its network, enabling banks to block them far faster than previously," the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants" and alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives in fraudulent-transaction detection, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.








Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model, "Claude 3.5 Sonnet", is the first in the Claude 3.5 family of AI models.

"Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations," Anthropic stated in a blog post. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a "marked improvement in grasping nuance, humor, and complex instructions". Several outlets have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. "Artifacts" is a preview feature that opens a dedicated window in which users can see, edit, and build upon Claude's creations in real time.

Users can try out Claude 3.5 Sonnet for free on Claude's website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.

(Post: "Anthropic's New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best", published 2024-07-04.)
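Claude 3.5 Sonnet's quoted figures ($3 per million input tokens, $15 per million output tokens, 200K context window) lend themselves to the same back-of-the-envelope arithmetic. This sketch simply applies those rates and is not an official billing calculator; in particular, treating input and output as sharing the context window is an assumption made for illustration:

```python
CLAUDE_35_SONNET = {
    "context_window": 200_000,   # tokens (figure quoted in the article)
    "input_usd_per_m": 3.0,      # USD per 1M input tokens
    "output_usd_per_m": 15.0,    # USD per 1M output tokens
}

def sonnet_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate USD cost of one request at the quoted per-token rates."""
    # Assumption for illustration: input + output must fit in the window.
    if input_tokens + output_tokens > CLAUDE_35_SONNET["context_window"]:
        raise ValueError("request exceeds the 200K-token context window")
    return (input_tokens / 1e6 * CLAUDE_35_SONNET["input_usd_per_m"]
            + output_tokens / 1e6 * CLAUDE_35_SONNET["output_usd_per_m"])

# Summarizing a 150,000-token document into a 2,000-token answer:
print(round(sonnet_cost(150_000, 2_000), 3))  # 0.48
```

The 5x gap between input and output rates means long-input, short-output workloads (summarization, extraction) are much cheaper per request than generation-heavy ones.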

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and said the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the search experience and delivering better results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products. Experts have also questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded in a blog post: "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

(Post: "Google Improves AI Overviews In Light Of Recent Controversy", published 2024-06-10.)

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.

(Post: "Google Launches Brand New Vision Language Model: PaliGemma", published 2024-06-02.)

The ring's built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an "Energy Score". The score will offer guidance for healthy, balanced living. Users will also receive "personalized suggestions" to improve their daily activities.

According to Samsung, the ring can last up to seven days on a single charge. The ring comes in sizes 5 to 12, and interested parties can use the free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three colors: black, gold, and silver. The starting price for the Galaxy Ring is $399.

(Post: "News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring".)

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to users' queries, with the goal of enhancing the search experience and surfacing better results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many have posted these bizarre search results on X (formerly Twitter), which has predictably led to scrutiny of the quality of Google's products. Experts have also questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews had not proven helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases of more than 30,000 lines, or over 700,000 words.

Gemini Flash is accessible in public preview in more than 200 regions across the globe. Currently, the model is available under two pricing plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
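The pay-as-you-go arithmetic above is easy to sanity-check. The sketch below is a minimal, hypothetical cost estimator (the function name and sample token counts are illustrative assumptions, not part of any Google API); it simply applies the per-million-token rates quoted in the article:

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Cost of one request, given rates quoted in USD per million tokens."""
    return (input_tokens * input_rate_per_m
            + output_tokens * output_rate_per_m) / 1_000_000

# A hypothetical request: 50,000 tokens in, 2,000 tokens out,
# priced at the lower advertised Gemini Flash rates ($0.35 / $1.05).
cost = request_cost_usd(50_000, 2_000, 0.35, 1.05)  # 0.0196 USD
```

At the upper advertised rates ($0.70 and $2.10) the same request would cost exactly twice as much.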

News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring

The new ring will utilize Samsung's proprietary Galaxy AI via the Samsung Health app. Made for all-day use, it will offer features such as a sleep tracker, heart health monitor, menstrual cycle tracker, stress monitor, and more.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Benefits of Galaxy Ring

The ring's built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an "Energy Score" that offers guidance for healthy, balanced living. Users will also receive "personalized suggestions" to improve their daily activities.

According to Samsung, the ring can last up to seven days on a single charge. It comes in sizes 5 to 12, and interested buyers can use the free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium and comes in three colors: black, gold, and silver. The starting price is $399.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the firm believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously," the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants" and alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives when flagging fraudulent transactions, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic's New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model, "Claude 3.5 Sonnet," is the first in the Claude 3.5 family of AI models.

"Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations," Anthropic stated in a blog post. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a "marked improvement in grasping nuance, humor, and complex instructions". Several outlets have remarked on the advances over previous models, including operating at twice the speed of Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. "Artifacts" is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude's creations in real time.

Users can try out Claude 3.5 Sonnet for free on Claude's website, and Apple users can access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.
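To put the quoted pricing in perspective, the worst-case input cost of a single request that fills the entire 200K context window is one line of arithmetic. This is a back-of-the-envelope sketch using only the figures in the article; the function name is illustrative, not part of any Anthropic API:

```python
INPUT_RATE_PER_M = 3.00   # USD per million input tokens (quoted in the article)
CONTEXT_WINDOW = 200_000  # tokens

def max_context_input_cost() -> float:
    """Input cost of one request whose prompt fills the full context window."""
    return CONTEXT_WINDOW * INPUT_RATE_PER_M / 1_000_000

cost = max_context_input_cost()  # 0.6 USD for a maximally long prompt
```

Output tokens, billed at $15 per million, dominate the bill for long responses: each generated token costs five times as much as an input token.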

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
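The quoted per-million-token rates translate directly into a per-request cost estimate. The sketch below is illustrative only: the article gives price ranges, and the assumption that the higher rate applies to prompts above 128K tokens is ours, not something the article states.

```python
# Illustrative cost estimate for Gemini 1.5 Flash pay-as-you-go pricing,
# using the per-million-token rates quoted above. The 128K-prompt
# threshold for the higher rate is an assumption for this sketch.

def flash_cost_usd(input_tokens: int, output_tokens: int, long_prompt: bool = False) -> float:
    """Estimate the cost of one request in USD."""
    input_rate = 0.70 if long_prompt else 0.35    # $ per 1M input tokens
    output_rate = 2.10 if long_prompt else 1.05   # $ per 1M output tokens
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# Example: a 100,000-token prompt with a 2,000-token reply at the lower rate.
print(round(flash_cost_usd(100_000, 2_000), 4))
```

At these rates, even long prompts cost fractions of a cent, which is consistent with the model’s positioning for high-volume, high-frequency tasks.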

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
Mastercard To Use Generative AI For Card Fraud Detection
American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously,” the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected to be compromised. AI will allow Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove those cards from their networks.
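The article does not describe Mastercard’s algorithms, which are proprietary and far more sophisticated than anything shown here. As a purely hypothetical toy illustration of the general idea of scoring a transaction against a card’s history, a simple outlier check might look like this:

```python
# Toy illustration only: flag a transaction whose amount deviates
# sharply from a card's historical spending pattern. This is NOT
# Mastercard's method; it just shows the general shape of the idea.

from statistics import mean, stdev

def is_suspicious(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag an amount more than `threshold` standard deviations
    from the card's historical mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past = [12.50, 9.99, 15.00, 11.25, 13.75]
print(is_suspicious(past, 980.00))  # a sudden large charge
print(is_suspicious(past, 14.00))   # an ordinary charge
```

Real systems additionally weigh merchant, location, timing, and network-wide signals, which is where the generative AI described in the article comes in.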

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic’s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best
Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called “Claude 3.5 Sonnet” and is the first in the Claude 3.5 family of AI models.

“Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations,” Anthropic stated in a blog post. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.
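The quoted rates make per-request costs easy to estimate. The sketch below is a rough illustration based only on the figures in this article; treating input and output tokens as jointly bounded by the 200K window is a simplification assumed here for the sanity check.

```python
# Back-of-the-envelope cost check for Claude 3.5 Sonnet, based on the
# rates quoted in the article ($3 / $15 per million input / output
# tokens) and its 200K-token context window. The combined-token window
# check is a simplifying assumption for this sketch.

CONTEXT_WINDOW = 200_000          # tokens
INPUT_RATE = 3.00 / 1_000_000     # USD per input token
OUTPUT_RATE = 15.00 / 1_000_000   # USD per output token

def sonnet_cost_usd(input_tokens: int, output_tokens: int) -> float:
    if input_tokens + output_tokens > CONTEXT_WINDOW:
        raise ValueError("request exceeds the 200K-token context window")
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: summarizing a 150,000-token document into a 1,000-token answer.
print(f"${sonnet_cost_usd(150_000, 1_000):.3f}")
```

Such a request comes in under half a dollar, which helps explain the “costing less” comparison Anthropic draws against its earlier flagship models.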

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a “marked improvement in grasping nuance, humor, and complex instructions”. Several outlets have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company’s largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. “Artifacts” is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude’s creations in real time.

Users can try out Claude 3.5 Sonnet for free on Claude’s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.

Google Improves AI Overviews In Light Of Recent Controversy
Google’s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assuring users that the company has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to any inquiry made by the user, with the aim of enhancing user experience and providing better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google’s products. Experts have also questioned Google’s ability to keep pace with its competitors in the generative AI race.

Google responded in a blog post, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we’ve taken.”

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries for which the feature was not helpful.

Google Launches Brand New Vision Language Model: PaliGemma
American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM),” the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms, including GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a dedicated Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.
News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring
Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company’s first smart ring and aims to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

“The release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family,” the company stated in a press release.

The new ring will utilize Samsung’s proprietary Galaxy AI via the Samsung Health app. The ring is made for all-day use and will provide features such as a sleep tracker, heart health monitor, menstrual cycle tracker, stress monitor, and more.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Benefits of Galaxy Ring

The ring’s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an “Energy Score”, which offers guidance for healthy, balanced living. Users will also receive “personalized suggestions” to improve their daily activities.

According to Samsung, the ring can last up to 7 days on a single charge. It comes in sizes 5 to 12, and interested parties can use a free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium and comes in three different colors: black, gold, and silver. The starting price is $399.

From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches
See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

The new smartwatches follow Samsung’s approach of making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track users’ vital signs, including sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an Energy Score, which offers insight into the user’s daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full set of features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7 comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. Models with LTE support cost a further $50.

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to seven days on a single charge. The ring comes in
sizes 5 to 12. Interested buyers can use the free sizing kit to<\/em> find their optimal fit.

The Galaxy Ring has a body of solid titanium. It comes in three colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payments company Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest card networks in America, the company believes AI can protect its vast clientele from potential threats.<\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously,\u201d<\/em><\/strong> the company revealed on its official website<\/a>.<\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the user experience and delivering better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog post,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
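As a back-of-the-envelope illustration of the pay-as-you-go rates quoted above, the sketch below estimates a per-request bill using the lower tier ($0.35 per 1 million input tokens, $1.05 per 1 million output tokens); the helper function and its name are our own, not part of any Google API.

```python
# Hypothetical cost estimate for the Gemini 1.5 Flash pay-as-you-go plan,
# assuming the lower advertised rates: $0.35 / 1M input, $1.05 / 1M output.
INPUT_RATE = 0.35 / 1_000_000   # dollars per input token
OUTPUT_RATE = 1.05 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A document near the full context window (~1M tokens in) with a short reply:
print(round(estimate_cost(1_000_000, 10_000), 4))  # ≈ 0.3605 dollars
```

Even a request that fills most of the 1-million-token context window costs well under a dollar at these rates, which is the point of a "lighter-weight" model tier.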

\n

Samsung has unveiled two new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Samsung<\/a>\u2019s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company's first smart ring and aims to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

"The release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family," the company stated in a press release.

The new ring will utilize Samsung's proprietary Galaxy AI via the Samsung Health app. The ring is made for all-day use and will provide features such as a sleep tracker, heart health monitor, menstrual cycle tracker, and stress monitor.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Benefits of the Galaxy Ring

The ring's built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an "Energy Score" that offers guidance for healthy, balanced living. Users will also receive "personalized suggestions" to improve their daily activities.

According to Samsung, the ring can last up to seven days on a single charge. It comes in sizes 5 to 12, and interested buyers can use a free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium and comes in three colors: black, gold, and silver. The starting price is $399.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously," the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants" and alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove them from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives when flagging fraudulent transactions, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic's New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model, "Claude 3.5 Sonnet", is the first in the Claude 3.5 family of AI models.

"Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations," Anthropic stated in a blog post. The model is also said to outperform previous Claude chatbots while costing less. It has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a "marked improvement in grasping nuance, humor, and complex instructions". Several outlets have remarked on the advances over previous models, including operating twice as fast as Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released "Artifacts", a preview feature that opens a dedicated window where users can see, edit, and build upon Claude's creations in real time.

Users can try Claude 3.5 Sonnet for free on Claude's website, and Apple users can access it via the Claude iOS app. Claude Pro and Team subscribers can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.

Google Improves AI Overviews In Light Of Recent Controversy

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that it has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries, with the aim of enhancing the search experience and providing better results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM, and integrates open components from both SigLIP (a sigmoid-loss language-image pre-training model) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model through a Hugging Face Space demo. The launch of PaliGemma coincides with other AI releases from Google such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.


Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

Google hopes Gemini Live will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and co-founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was still trained using the larger Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The new model\u2019s context window has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
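The quoted rates translate into a simple back-of-the-envelope cost estimate. The helper below is a hypothetical sketch based only on the figures reported above (using the lower end of the ranges, $0.35 per million input tokens and $1.05 per million output tokens); it is not an official Google billing API.

```python
def flash_cost(input_tokens: int, output_tokens: int,
               in_rate: float = 0.35, out_rate: float = 1.05) -> float:
    """Estimated pay-as-you-go cost in USD at the article's lower-end
    per-million-token rates ($0.35 input / $1.05 output)."""
    return (input_tokens / 1_000_000) * in_rate + \
           (output_tokens / 1_000_000) * out_rate

# A prompt filling the full 1M-token context window (roughly one hour of
# video, per the article) with a 200K-token response:
print(round(flash_cost(1_000_000, 200_000), 2))  # 0.56
```

At these rates a maximally long request costs well under a dollar, which is consistent with Flash's positioning for high-volume, high-frequency workloads.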


Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Samsung\u2019s<\/a> website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different
colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring and aims to provide users with a range of health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n
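The published per-token rates make per-request costs easy to estimate. The helper below is an illustrative sketch using only the figures quoted above ($3 per million input tokens, $15 per million output tokens); the function name is hypothetical and not part of Anthropic's API.

```python
def sonnet_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost at the article's quoted Claude 3.5 Sonnet rates:
    $3 per million input tokens, $15 per million output tokens."""
    return input_tokens * 3 / 1_000_000 + output_tokens * 15 / 1_000_000

# Filling the full 200K-token context window and receiving a 4K-token reply:
print(round(sonnet_cost(200_000, 4_000), 2))  # 0.66
```

Note the 5x asymmetry between input and output pricing: for long-document workloads the input side dominates, while chat-style workloads with long replies pay proportionally more on the output side.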

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that displays a dedicated window allowing users to see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to everyone in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview is to enhance user experience and provide better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Samsung has unveiled two new smartwatches that harness the power of the company’s proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked event held in Paris.

“Built to push boundaries, Galaxy Watch Ultra withstands up to 55°C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor,” reads the official page on Samsung’s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple’s smartwatches, with many noting the similarities between the two product lines.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

The new smartwatches follow Samsung’s approach of building holistic health products such as the Galaxy Ring. The watches use several BioActive sensors to track users’ vital signs, including sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an Energy Score, which offers insight into the user’s daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full feature set.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three colors. It has a 590 mAh battery that lasts between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7 comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. Watches with LTE support cost a further $50.

News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company’s first smart ring, and it aims to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

“The release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family,” the company stated in a press release.

The new ring will utilize Samsung’s proprietary Galaxy AI via the Samsung Health app. The ring is made for all-day use and will provide features such as a sleep tracker, heart health monitor, menstrual cycle tracker, stress monitor, and more.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Benefits of the Galaxy Ring

The ring’s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an “Energy Score”, which offers guidance for healthy, balanced living. Users will also receive “personalized suggestions” to improve their daily activities.

According to Samsung, the ring can last up to seven days on a single charge. It comes in sizes 5 to 12, and interested buyers can use a free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium and comes in three colors: black, gold, and silver. The starting price is $399.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card details of these compromised cards on its network, enabling banks to block them far faster than previously,” the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”, then alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Stated goals include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic’s New Claude 3.5 Sonnet: The Latest AI Chatbot Claiming To Be The Best

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called “Claude 3.5 Sonnet” and is the first in the Claude 3.5 family of AI models.

“Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations,” Anthropic stated in a blog post. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a “marked improvement in grasping nuance, humor, and complex instructions”. Several outlets have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company’s largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. “Artifacts” is a preview feature that opens a dedicated window in which users can see, edit, and build upon Claude’s creations in real time.

Users can try out Claude 3.5 Sonnet for free on Claude’s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.
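At the rates quoted above, per-request cost is simple arithmetic. A minimal sketch, using only the $3-per-million-input-token and $15-per-million-output-token prices from the article (the function and its name are ours for illustration, not part of Anthropic's API):

```python
INPUT_RATE = 3.00    # USD per 1M input tokens, per the article
OUTPUT_RATE = 15.00  # USD per 1M output tokens

def sonnet_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the published rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Filling the full 200K context window and generating a 4K-token reply:
print(f"${sonnet_cost(200_000, 4_000):.2f}")  # $0.66
```

Note the asymmetry: output tokens cost five times as much as input tokens, so long generations dominate the bill even against a full context window.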

Google Improves AI Overviews In Light Of Recent Controversy

Google’s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured users that the company has “made more than a dozen technical improvements” to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overview feature available to everyone in the US. The feature provides AI-generated answers to user queries; its purpose is to enhance user experience and provide better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google’s products. Experts have also questioned Google’s ability to keep pace with its competitors in the generative AI race.

Google responded via a blog release, saying, “In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we’ve taken.”

The post goes on to elaborate on some of the corrections. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries for which AI Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM),” the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, like Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale,” stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained using the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two price plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.
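The free-tier limits above compound: at the full 15 RPM, the 1,500-request daily cap is reached in 100 minutes. A back-of-the-envelope sketch of that math and of the paid tier's cost spread, using only the figures quoted in this article (the helper function is illustrative, not part of any Google SDK):

```python
FREE_RPM, FREE_RPD = 15, 1_500      # free-tier limits from the article
PAID_INPUT = (0.35, 0.70)           # USD per 1M input tokens (low, high)
PAID_OUTPUT = (1.05, 2.10)          # USD per 1M output tokens (low, high)

# Sustained max-rate minutes before the daily cap halts free-tier use:
print(FREE_RPD / FREE_RPM)  # 100.0

def flash_cost_range(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Lowest and highest possible USD cost at the quoted pay-as-you-go rates."""
    low = (input_tokens * PAID_INPUT[0] + output_tokens * PAID_OUTPUT[0]) / 1_000_000
    high = (input_tokens * PAID_INPUT[1] + output_tokens * PAID_OUTPUT[1]) / 1_000_000
    return low, high

# A full 1M-token context with a 10K-token response:
low, high = flash_cost_range(1_000_000, 10_000)
print(f"${low:.4f} to ${high:.4f}")
```

The article gives each price as a range without stating what triggers the higher rate, so the sketch reports both endpoints rather than guessing the threshold.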

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations

“Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini,” the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over its previous models, including operating twice as fast as Claude 3 Opus, which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available under two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
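The pay-as-you-go rates quoted above lend themselves to a quick back-of-the-envelope calculation. The sketch below is illustrative only: the `estimate_cost` function is hypothetical (not part of Google's API), and it uses the low ends of the quoted ranges, since the ranges suggest tiered pricing and real bills may differ.

```python
# Back-of-the-envelope cost estimate from the per-million-token rates
# quoted above. $0.35 and $1.05 are the low ends of the quoted ranges.
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.35,    # USD per 1M input tokens
                  output_rate: float = 1.05    # USD per 1M output tokens
                  ) -> float:
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# A 10,000-token prompt with a 1,000-token response:
print(f"${estimate_cost(10_000, 1_000):.5f}")  # $0.00455
```

At the top-of-range rates ($0.70 and $2.10), the same request would cost exactly twice as much, $0.00910.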

\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled two new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page<\/a> on Samsung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track health metrics such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an \u201cEnergy Score\u201d, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full feature set.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different
colors. It has a 590 mAh battery that lasts between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations (August 15, 2024)

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini", the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches (August 10, 2024)

Samsung has unveiled 2 new smartwatches that harness the power of the company's proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked event held in Paris.

"Built to push boundaries, Galaxy Watch Ultra withstands up to 55°C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor," reads the official page on Samsung's website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

The new smartwatches follow Samsung's approach of making holistic health-related products such as the Galaxy Ring. The watches use several BioActive sensors to track users' vital signs such as sleep, heart rate, blood pressure, body composition, and more. Galaxy AI then analyzes the data to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full feature set.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in 3 different colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7 comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. Versions with LTE support cost a further $50.

News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring (August 4, 2024)

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company's first smart ring and aims to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

"The release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family," the company stated in a press release.

The new ring will utilize Samsung's proprietary Galaxy AI via the Samsung Health app. The ring is made for all-day use. It will provide features such as a sleep tracker, heart health monitor, menstrual cycle tracker, stress monitor, and more.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Benefits Of The Galaxy Ring

The ring's built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an "Energy Score". The score will offer guidance for healthy, balanced living. Users will also receive "personalized suggestions" to improve their daily activities.

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in sizes 5 to 12, and interested parties can use the free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and silver. The starting price for the Galaxy Ring is $399.

Mastercard To Use Generative AI For Card Fraud Detection (July 13, 2024)

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously," the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants". The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI to predict the complete details of compromised cards enables banks to promptly remove those cards from their network.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives in the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic's New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best (July 4, 2024)

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called "Claude 3.5 Sonnet" and is the first in the Claude 3.5 family of AI models.

"Claude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations," Anthropic stated in a blog post. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K-token context window and costs $3 per million input tokens and $15 per million output tokens.

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a "marked improvement in grasping nuance, humor, and complex instructions". Several outlets have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company's largest model.

See Related: Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. "Artifacts" is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude's creations in real time.

Users can try out Claude 3.5 Sonnet for free on Claude's website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can use the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.
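Anthropic's published per-token prices make it easy to estimate what a given workload would cost through the API. A minimal sketch using the $3 / $15 per-million-token figures quoted above; the token counts in the example are hypothetical, and real billing may differ:

```python
# Rough cost estimate for a Claude 3.5 Sonnet API call, based on the
# per-million-token prices quoted in the article (assumed, not official).
INPUT_PRICE_PER_MILLION = 3.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_MILLION = 15.00  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION
    cost += (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION
    return round(cost, 6)

# Example: a 2,000-token prompt with a 500-token reply.
print(estimate_cost(2_000, 500))  # prints 0.0135
```

Note how output tokens dominate the bill at a 5x price ratio: filling the entire 200K context as input costs about $0.60, roughly the same as generating just 40K output tokens.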

Google Improves AI Overviews In Light Of Recent Controversy (June 10, 2024)

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overviews was to enhance the user experience and provide better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products. Experts have also questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post, saying, "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries where Overviews were not proving helpful.

Google Launches Brand New Vision Language Model: PaliGemma (June 2, 2024)

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race (May 27, 2024)

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), called Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to up to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible in public preview in more than 200 regions across the globe. Currently, the model is available under two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.
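Caps like 15 RPM and 1,500 RPD are easy to trip in a batch job, so clients typically throttle themselves rather than wait for the API to reject requests. A minimal client-side sliding-window sketch using the free-tier limits quoted above (this is an illustrative pattern, not part of any Google SDK):

```python
import time
from collections import deque

# Free-tier limits for Gemini 1.5 Flash, as quoted in the article.
RPM_LIMIT = 15      # requests per minute
RPD_LIMIT = 1_500   # requests per day

class SlidingWindowLimiter:
    """Tracks request timestamps and reports how long to wait before the next one."""

    def __init__(self, rpm: int = RPM_LIMIT, rpd: int = RPD_LIMIT):
        self.rpm, self.rpd = rpm, rpd
        self.stamps = deque()  # timestamps of past requests, oldest first

    def wait_time(self, now: float = None) -> float:
        """Seconds to wait before another request is allowed (0.0 if allowed now)."""
        now = time.monotonic() if now is None else now
        # Forget requests older than one day.
        while self.stamps and now - self.stamps[0] >= 86_400:
            self.stamps.popleft()
        if len(self.stamps) >= self.rpd:
            return 86_400 - (now - self.stamps[0])
        last_minute = [t for t in self.stamps if now - t < 60]
        if len(last_minute) >= self.rpm:
            return 60 - (now - last_minute[0])
        return 0.0

    def record(self, now: float = None) -> None:
        """Call after each request actually sent."""
        self.stamps.append(time.monotonic() if now is None else now)

# Simulated usage: 20 requests attempted one second apart; the 16th
# onward would be deferred until the minute window frees up.
limiter = SlidingWindowLimiter()
for t in range(20):
    if limiter.wait_time(now=float(t)) == 0.0:
        limiter.record(now=float(t))
```

The paid tier would use the same pattern with 360 RPM and 10,000 RPD substituted in.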


Google Makes Imagen 3 Available To US Users (August 2024)

Imagen 3 was originally announced during the Google I/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that the new tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google Improves AI Overviews In Light Of Recent Controversy (published 2024-06-10)

Google's AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that the company has "made more than a dozen technical improvements" to the system.

During the recently concluded Google I/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to users' search queries, with the aim of enhancing the user experience and delivering better search results.

See Related: BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation

Since then, users have reported multiple misleading or outright incorrect responses generated by the AI, and many have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google's products, and experts have questioned Google's ability to keep pace with its competitors in the generative AI race.

Google responded via a blog post: "In the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we've taken."

The post goes on to elaborate on some of the corrections Google has made, including better detection mechanisms for nonsensical queries, limits on the use of user-generated content, and restrictions on queries where AI Overviews were not proving helpful.

Google Launches Brand-New Vision-Language Model: PaliGemma (published 2024-06-02)

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by Google's earlier PaLI-3, a small-scale VLM, and integrates open components from the SigLIP (Sigmoid loss for Language-Image Pre-training) vision encoder and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added: "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration."

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race (published 2024-05-27)

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was trained by the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also increased to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible in public preview in more than 200 regions across the globe. Currently, the model is available under two pricing plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
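To make the two plans concrete, the snippet below checks whether a planned workload fits the free tier's rate limits and estimates a pay-as-you-go cost range from the quoted prices. It is an illustrative sketch only (the helper names are invented, not a Google tool), and the price bands simply bracket the $0.35-$0.70 and $1.05-$2.10 figures above:

```python
# Illustrative comparison of the two Gemini 1.5 Flash plans described above.
FREE_RPM, FREE_RPD = 15, 1_500    # free-of-charge plan rate limits
PAID_RPM, PAID_RPD = 360, 10_000  # pay-as-you-go plan rate limits

def fits_free_tier(requests_per_minute: int, requests_per_day: int) -> bool:
    """True if the workload stays within the free plan's limits."""
    return requests_per_minute <= FREE_RPM and requests_per_day <= FREE_RPD

def paid_cost_range(input_tokens: int, output_tokens: int) -> tuple:
    """(low, high) USD estimate from the quoted per-million-token prices."""
    low = input_tokens / 1e6 * 0.35 + output_tokens / 1e6 * 1.05
    high = input_tokens / 1e6 * 0.70 + output_tokens / 1e6 * 2.10
    return low, high

print(fits_free_tier(10, 1_200))   # True
low, high = paid_cost_range(2_000_000, 500_000)
print(round(low, 3), round(high, 2))  # 1.225 2.45
```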


Google Makes Imagen 3 Available To US Users (updated 2024-08-23)

The expansion of Imagen 3's availability coincides with the release of Grok-2, another AI model, developed by xAI. Notably, Grok-2 has much more relaxed content filters, which has led to many comparisons between the two.

Imagen 3 was originally announced during the Google I/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is "capable of generating images with even better detail, richer lighting, and fewer distracting artifacts" compared to previous models.

Users can try out Imagen 3 via the ImageFX platform.

Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations (published 2024-08-15)

Google has unveiled a new feature for its flagship AI model, called Gemini Live. The announcement came during the recently concluded "Made By Google" event.

"Gemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini," the company stated during its keynote speech.

Gemini Live allows users to converse freely with Gemini. The AI responds in real time, offering solutions or answers to a given question, and users can interrupt it mid-response to change the topic or explore a particular point further.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.

Google hopes the feature will replicate real-life conversations, making the user experience more natural and satisfying. The company also claims to have fully integrated Gemini into the Android user experience.

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.

From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smartwatches (published 2024-08-10)

Samsung has unveiled two new smartwatches that harness the power of the company's proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked event held in Paris.

"Built to push boundaries, Galaxy Watch Ultra withstands up to 55°C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor," reads the official page on Samsung's website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two lines.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

The new smartwatches follow Samsung's approach of making holistic health-related products, such as the Galaxy Ring. The watches use several BioActive sensors to track vital signs such as sleep, heart rate, blood pressure, and body composition. The data is then analyzed by Galaxy AI to generate an Energy Score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full feature set.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different colors. It has a 590 mAh battery that lasts between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7 comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. LTE support costs a further $50.
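The pricing above reduces to a small lookup plus a flat LTE surcharge. The snippet below is a hypothetical illustration (the `watch_price` helper is invented for this article, not Samsung code), using only the figures quoted:

```python
# Illustrative price lookup for the two watches, per the figures above.
BASE_PRICES = {
    ("Galaxy Watch Ultra", None): 649.99,
    ("Galaxy Watch 7", "40mm"): 299.99,
    ("Galaxy Watch 7", "44mm"): 329.99,
}
LTE_SURCHARGE = 50.00  # flat add-on for LTE variants

def watch_price(model, size=None, lte=False):
    """Return the USD price for a model/size, adding the LTE surcharge if set."""
    price = BASE_PRICES[(model, size)]
    return price + LTE_SURCHARGE if lte else price

print(watch_price("Galaxy Watch Ultra"))              # 649.99
print(watch_price("Galaxy Watch 7", "40mm", lte=True))  # 349.99
```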

News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring (published 2024-08-04)

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company's first smart ring and aims to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

"The release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family," the company stated in a press release.

The new ring will use Samsung's proprietary Galaxy AI via the Samsung Health app. Made for all-day wear, it provides features such as sleep tracking, heart health monitoring, menstrual cycle tracking, stress monitoring, and more.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Benefits of the Galaxy Ring

The ring's built-in sensors collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app analyzes the data and generates an "Energy Score", which offers guidance for healthy, balanced living. Users also receive "personalized suggestions" to improve their daily activities.

According to Samsung, the ring can last up to 7 days on a single charge. It comes in sizes 5 to 12, and interested buyers can use the free sizing kit to find their optimum fit.

The Galaxy Ring has a body of solid titanium and comes in three different colors: black, gold, and silver. The starting price is $399.

Mastercard To Use Generative AI For Card Fraud Detection (published 2024-07-13)

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously," the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants", alerting banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives in fraudulent-transaction detection, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track users' health metrics such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an Energy Score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different
colors. It has a 590 mAh battery that lasts between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n
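At the quoted rates, per-request cost is simple arithmetic. A minimal sketch, assuming a hypothetical long-document summarization call (the $3 and $15 per-million-token rates are from the announcement; the token counts are illustrative, not from Anthropic):

```python
# Illustrative cost estimate for Claude 3.5 Sonnet at the quoted rates:
# $3 per 1M input tokens, $15 per 1M output tokens.
INPUT_RATE = 3.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical request: a 150K-token document in, a 2K-token summary out.
cost = request_cost(150_000, 2_000)
print(f"${cost:.2f}")  # $0.48
```

Output tokens dominate here only at long generation lengths; for the summarization-style workload sketched above, nearly all of the cost comes from the input side.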

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to any query made by the user, with the aim of enhancing user experience and providing better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n
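Those capacity figures line up with common rule-of-thumb token rates. As a back-of-the-envelope check (the 1.4 tokens-per-word ratio is an assumption, not from the announcement; real tokenizers vary):

```python
# Sanity-check the "1 million tokens vs. over 700,000 words" claim.
# Assumed conversion rate: English text averages roughly 1.4 tokens
# per word, though the exact ratio depends on the tokenizer.
TOKENS_PER_WORD = 1.4
CONTEXT_WINDOW_TOKENS = 1_000_000

words_that_fit = CONTEXT_WINDOW_TOKENS / TOKENS_PER_WORD
print(round(words_that_fit))  # 714286 -- consistent with "over 700,000 words"
```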

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
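As a rough illustration of what the two plans allow, here is a sketch comparing the free tier's daily ceiling with the pay-as-you-go cost at the lower quoted rates (the RPD limits and per-million-token rates are from the announcement; the per-request token counts are hypothetical):

```python
# Quoted limits: free tier 15 RPM / 1,500 RPD; paid tier 360 RPM / 10,000 RPD.
# Quoted paid rates (lower bound): $0.35 per 1M input tokens,
# $1.05 per 1M output tokens.
FREE_RPD, PAID_RPD = 1_500, 10_000
IN_RATE, OUT_RATE = 0.35 / 1e6, 1.05 / 1e6  # dollars per token

# Hypothetical workload: each request sends 8K input tokens, gets 1K output.
in_tokens, out_tokens = 8_000, 1_000

per_request = in_tokens * IN_RATE + out_tokens * OUT_RATE
daily_cost_at_cap = PAID_RPD * per_request

print(f"Free tier: {FREE_RPD} requests/day at no cost")
print(f"Paid tier at the {PAID_RPD}-request cap: ${daily_cost_at_cap:.2f}/day")
```

Under these assumptions the paid tier works out to about $38.50 per day at its request cap; the actual figure scales linearly with request size and with whichever end of the quoted rate range applies.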

\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a smaller-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was trained by the larger Gemini 1.5 Pro model through a process known as distillation. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to up to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
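For readers estimating what the pay-as-you-go plan would cost, the quoted rates translate into a simple per-token calculation. The sketch below is illustrative only: the function name is ours, and the `long_context` flag (choosing between the low and high ends of the quoted price ranges) is an assumption, since the article quotes only ranges rather than the exact tier boundaries.

```python
# Illustrative cost estimate for Gemini 1.5 Flash pay-as-you-go pricing,
# using the rates quoted above ($0.35-$0.70 per 1M input tokens,
# $1.05-$2.10 per 1M output tokens). The long_context flag picking the
# higher rate is an assumption for this sketch, not an official API.
def flash_cost_usd(input_tokens: int, output_tokens: int, long_context: bool = False) -> float:
    """Estimate the cost in USD of one request at the quoted rates."""
    input_rate = 0.70 if long_context else 0.35   # $ per 1M input tokens
    output_rate = 2.10 if long_context else 1.05  # $ per 1M output tokens
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# A 100,000-token prompt with a 2,000-token reply at the lower rates:
print(round(flash_cost_usd(100_000, 2_000), 4))  # 0.0371
```

At these rates, even a full 1-million-token prompt at the higher input rate comes to about $0.70 before output charges, which is consistent with the model being positioned for high-volume workloads.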


In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users have highlighted its improved texture and better attention to detail. Others have criticized its strict content policy, saying it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, an AI model developed by Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed filters, which has invited comparisons between the two.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked, so users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled two new smartwatches that harness the power of the company's proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Samsung\u2019s<\/a> website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several BioActive sensors to track users' vital signs, such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring, which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can use the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI allows Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove these cards from their networks. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance the user experience. \u201cArtifacts\u201d is a preview feature that displays a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n



American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in 3 different
colors. It has a 590 mAh battery that lasts between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring and aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a solid titanium body. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected of being compromised. AI will also allow Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured users that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although comparatively lightweight, the new model was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was still trained by the larger Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
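The pricing and rate limits above lend themselves to a quick back-of-the-envelope check. Below is a minimal sketch that estimates a pay-as-you-go Gemini Flash bill from token counts; it assumes only the per-million-token rates quoted in the article, and the function name and the long-prompt tier split are illustrative, not part of any official Google SDK.

```python
# Hedged sketch: estimate a Gemini 1.5 Flash pay-as-you-go bill from the rates
# quoted above. The two-tier split is an assumption for illustration only.

def flash_cost_usd(input_tokens: int, output_tokens: int, long_prompt: bool = False) -> float:
    """Cost in USD at $0.35/$0.70 per 1M input tokens and $1.05/$2.10 per 1M output tokens."""
    in_rate = 0.70 if long_prompt else 0.35    # USD per 1M input tokens
    out_rate = 2.10 if long_prompt else 1.05   # USD per 1M output tokens
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# 2M input tokens + 500K output tokens at the lower rates:
print(round(flash_cost_usd(2_000_000, 500_000), 4))  # 1.225
```

At these rates, even the 1-million-token context window filled once costs well under a dollar on input, which is the point of a "fast and efficient to serve at scale" model.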

{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users have highlighted its improved texture and better attention to detail, while others have criticized the strict content policy for limiting creativity.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, a rival AI model from Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed content filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled two new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor,\u201d <\/em>
reads the official page on Samsung\u2019s<\/a> website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach of making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track users' health metrics such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different
colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a solid titanium body. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n
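The scan-score-alert flow described above can be illustrated with a toy sketch. To be clear, nothing below resembles Mastercard's actual proprietary system; the function, thresholds, and data shapes are all hypothetical stand-ins that only mirror the workflow of scanning transactions and alerting on suspected compromised cards.

```python
# Purely illustrative toy: mirror the scan -> score -> alert flow described above.
# The anomaly scores, cutoffs, and card IDs are hypothetical, not Mastercard's system.
from collections import defaultdict

def flag_compromised(transactions, score_cutoff=0.9, min_hits=2):
    """Return card IDs whose count of high-anomaly transactions reaches min_hits."""
    hits = defaultdict(int)
    for card_id, anomaly_score in transactions:
        if anomaly_score >= score_cutoff:
            hits[card_id] += 1
    return sorted(card for card, n in hits.items() if n >= min_hits)

txns = [("card-a", 0.95), ("card-a", 0.99), ("card-b", 0.10), ("card-c", 0.92)]
print(flag_compromised(txns))  # ['card-a']
```

A real deployment would replace the boolean-style cutoff with a trained model scoring billions of transactions, but the alerting contract, card IDs out, stays the same shape.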

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n
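Those per-token rates can be made concrete with a short calculation. This is a minimal sketch assuming only the figures quoted above ($3 per 1 million input tokens, $15 per 1 million output tokens, a 200K-token context window); the function and constant names are illustrative, not part of Anthropic's API.

```python
# Hedged sketch: per-request cost of Claude 3.5 Sonnet at the rates quoted above.
# Names and the context-window guard are illustrative assumptions, not Anthropic's API.

SONNET_INPUT_PER_MTOK = 3.00    # USD per 1M input tokens
SONNET_OUTPUT_PER_MTOK = 15.00  # USD per 1M output tokens
CONTEXT_WINDOW = 200_000        # tokens, as stated in the article

def sonnet_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request; guards (illustratively) against overflowing the window."""
    if input_tokens + output_tokens > CONTEXT_WINDOW:
        raise ValueError("request exceeds the 200K-token context window")
    return (input_tokens * SONNET_INPUT_PER_MTOK + output_tokens * SONNET_OUTPUT_PER_MTOK) / 1_000_000

# A 150K-token document summarized into 2K tokens:
print(round(sonnet_cost_usd(150_000, 2_000), 3))  # 0.48
```

The asymmetry, with output tokens priced five times higher than input, is why long-document summarization (large input, small output) is comparatively cheap on such models.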

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that displays a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

In return, OpenAI will get access to \u201cfeedback and insights on the design and performance of SearchGPT\u201d<\/em> from users. The company plans to use this data to improve its products and enhance user experience. Many sources expressed<\/a> that this data will be used to train AI models currently employed by OpenAI.<\/p>\n\n\n\n

\u201cWe\u2019re committed to working with Cond\u00e9 Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting\u201d<\/em>, said Brad Lightcap, COO at OpenAI.<\/p>\n\n\n\n

Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track health metrics such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full features.
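Samsung has not published how the energy score is derived. Purely as a hypothetical illustration, a score of this kind could be a weighted blend of normalized metrics; the metric choices, ranges, and weights in the sketch below are invented for illustration and are not Samsung's formula.

```python
# Hypothetical "energy score": a weighted blend of normalized health metrics.
# Samsung has not published its formula; the metrics, ranges, and weights
# below are invented purely for illustration.

def clamp01(x: float) -> float:
    """Clamp a value into the [0, 1] range."""
    return max(0.0, min(1.0, x))

def energy_score(sleep_hours: float, resting_hr: float,
                 active_minutes: float) -> int:
    """Blend three normalized metrics into a 0-100 score."""
    sleep = clamp01(sleep_hours / 8.0)            # 8 h counts as a full night
    heart = clamp01((80.0 - resting_hr) / 30.0)   # 50 bpm scores best, 80 bpm worst
    active = clamp01(active_minutes / 60.0)       # an hour of activity maxes out
    return round(100 * (0.4 * sleep + 0.3 * heart + 0.3 * active))

print(energy_score(sleep_hours=7.5, resting_hr=62, active_minutes=45))
```

Any real implementation would weigh far more signals (and likely a learned model rather than fixed weights); the point is only that heterogeneous sensor readings must be normalized to a common scale before they can be combined into one number.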

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different
colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n
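Mastercard has not disclosed its detection model. The deliberately simple sketch below only illustrates the general idea of flagging anomalous card activity: score each transaction by how far its amount deviates from that card's history, and alert above a threshold. The function name, z-score rule, and threshold are all hypothetical, not Mastercard's method.

```python
# Toy anomaly flagging for card transactions: a transaction is flagged when
# its amount is far (in standard deviations) from the card's historical mean.
# A hypothetical illustration only, not Mastercard's actual system.
from statistics import mean, stdev

def flag_suspicious(history: list[float], amount: float,
                    threshold: float = 3.0) -> bool:
    """Return True if `amount` is more than `threshold` standard
    deviations away from the card's historical transaction amounts."""
    if len(history) < 2:
        return False  # not enough data to estimate spread
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [12.50, 40.00, 25.75, 18.20, 33.10]
print(flag_suspicious(history, 30.00))    # typical purchase for this card
print(flag_suspicious(history, 4200.00))  # far outside the card's pattern
```

Production systems operate on billions of cards and many more features than amount alone (merchant, geography, timing), typically with learned models, but the core shape is the same: a per-card baseline, a deviation score, and a threshold that trades detection rate against false positives.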

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
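The per-token rates above imply a straightforward cost calculation. The sketch below is illustrative only: it uses the article's quoted pay-as-you-go figures, and the assumption that the lower end of each range applies to smaller requests and the higher end to larger ones is ours, not something the article specifies.

```python
# Illustrative cost estimate for Gemini 1.5 Flash pay-as-you-go pricing,
# using the rates quoted above ($0.35-$0.70 per 1M input tokens,
# $1.05-$2.10 per 1M output tokens). The tiering rule (lower rate for
# smaller prompts, higher rate for larger ones) is an assumption.

def flash_cost_usd(input_tokens: int, output_tokens: int,
                   large_prompt: bool = False) -> float:
    """Estimate the cost of one request in USD from its token counts."""
    input_rate = 0.70 if large_prompt else 0.35    # USD per 1M input tokens
    output_rate = 2.10 if large_prompt else 1.05   # USD per 1M output tokens
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# A prompt that fills the full 1M-token window (roughly the 700,000 words
# cited above) with a 2,000-token reply, at the higher rate:
print(flash_cost_usd(1_000_000, 2_000, large_prompt=True))

# A modest 100K-token prompt with a 1,000-token reply, at the lower rate:
print(flash_cost_usd(100_000, 1_000))
```

Even at the top of the quoted range, a request that consumes the entire context window comes out well under a dollar, which is the economic point of a "lighter-weight" model optimized for high-volume serving.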

\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

In return, OpenAI will get access to \u201cfeedback and insights on the design and performance of SearchGPT\u201d<\/em> from users. The company plans to use this data to improve its products and enhance user experience. Many sources expressed<\/a> that this data will be used to train AI models currently employed by OpenAI.<\/p>\n\n\n\n

\u201cWe\u2019re committed to working with Cond\u00e9 Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting.\u201d<\/em>, said Brad Lightcap, COO at OpenAI.<\/p>\n\n\n\n

Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data showing Claude 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to everyone in the US. The feature provides AI-generated answers to user queries. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a smaller-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n
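A quick back-of-envelope check of the 700,000-word figure above, assuming roughly 1.4 tokens per English word (a common rule of thumb, not an official Google number):

```python
# Estimate whether ~700,000 words fit in a 1,000,000-token context,
# assuming ~1.4 tokens per English word (rule-of-thumb assumption).
TOKENS_PER_WORD = 1.4
CONTEXT_TOKENS = 1_000_000

words = 700_000
estimated_tokens = round(words * TOKENS_PER_WORD)  # ~980,000 tokens
fits = estimated_tokens <= CONTEXT_TOKENS
```

At that ratio, 700,000 words come to roughly 980,000 tokens, comfortably inside the 1-million-token window; the true ratio varies with the tokenizer and the text.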

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
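For a sense of scale, the pay-as-you-go rates quoted above can be turned into a small cost estimator. The function and the example token counts are illustrative only; the lower-tier rates ($0.35 input, $1.05 output per 1M tokens) are used as defaults:

```python
def gemini_flash_cost(input_tokens, output_tokens,
                      input_rate=0.35, output_rate=1.05):
    """Estimate pay-as-you-go cost in USD from the per-1M-token
    rates quoted in the article (lower tier by default)."""
    return (input_tokens / 1_000_000 * input_rate
            + output_tokens / 1_000_000 * output_rate)

# e.g. a request with 500k input tokens and 100k output tokens
cost = gemini_flash_cost(500_000, 100_000)  # about $0.28 at the lower tier
```

At the higher tier ($0.70 / $2.10) the same request would cost twice as much, which is why long-context workloads are sensitive to which rate band applies.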


OpenAI will display content from Conde Nast\u2019s various media outlets directly on the AI company\u2019s products as part of the agreement. These outlets include well-renowned magazines such as Vogue, The New Yorker, Cond\u00e9 Nast Traveler, GQ, Architectural Digest, Vanity Fair, Wired, Bon App\u00e9tit, etc.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

In return, OpenAI will get access to \u201cfeedback and insights on the design and performance of SearchGPT\u201d<\/em> from users. The company plans to use this data to improve its products and enhance user experience. Many sources have suggested<\/a> that this data will be used to train AI models currently employed by OpenAI.<\/p>\n\n\n\n

\u201cWe\u2019re committed to working with Cond\u00e9 Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting.\u201d<\/em>, said Brad Lightcap, COO at OpenAI.<\/p>\n\n\n\n

Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users have highlighted its improved texture and better attention to detail, while others have criticized its strict content policy for limiting creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company also claims to have fully integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and only in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled two new smartwatches that harness the power of the company's proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>reads the official page on Samsung<\/a>\u2019s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach of making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track users' vital signs such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full features.
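Samsung has not published how the energy score is actually computed. As a purely hypothetical sketch of the idea (several normalized health metrics combined into one 0-100 number), the following invents its own targets and weighting for illustration:

```python
def energy_score(sleep_hours, resting_hr, activity_minutes):
    """Hypothetical 0-100 'energy score'. Samsung's real formula is
    unpublished; this simply averages three normalized sub-scores."""
    sleep = min(sleep_hours / 8.0, 1.0)          # assumed 8h sleep target
    heart = min(60.0 / max(resting_hr, 1), 1.0)  # lower resting HR is better
    active = min(activity_minutes / 30.0, 1.0)   # assumed 30-min activity target
    return round(100 * (sleep + heart + active) / 3)

score = energy_score(7.5, 65, 30)  # a fairly healthy day scores in the 90s
```

The point is only the shape of the computation: raw sensor readings are normalized against targets and fused into a single daily metric.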

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company\u2019s first smart ring, aiming to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring is made for all-day use and will provide features such as a sleep tracker, heart health monitor, menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d. The score will offer guidance for healthy, balanced living. Users will also receive \u201cpersonalized suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in sizes 5 to 12. Interested parties can use the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a solid titanium body. It comes in three different colors: black, gold, and silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus which is the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d.<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
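As a back-of-the-envelope illustration of the pay-as-you-go rates quoted above, the per-request cost can be sketched in a few lines of Python. Note the 128K-token boundary between the lower and higher rates is an assumption for illustration; the article only gives the price ranges.

```python
# Toy cost estimate using the Gemini 1.5 Flash "pay-as-you-go" rates quoted above.
# Assumption (not stated in the article): the lower rate applies to prompts
# up to 128K tokens and the higher rate beyond that.

def flash_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    long_prompt = input_tokens > 128_000
    input_rate = 0.70 if long_prompt else 0.35    # USD per 1M input tokens
    output_rate = 2.10 if long_prompt else 1.05   # USD per 1M output tokens
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 10,000-token prompt with a 1,000-token reply:
print(round(flash_cost(10_000, 1_000), 5))
```

At these rates, a typical short request costs well under a cent; only very long prompts approach the higher tier.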


\u201cWe\u2019re announcing a partnership with Cond\u00e9 Nast to display content from top brands within our products\u201d,<\/em> the company states in a blog post<\/a>.<\/p>\n\n\n\n

Led by Sam Altman and partly owned by Microsoft, OpenAI is a startup best known for the AI chatbot called ChatGPT. Recently, the company launched a prototype search engine called SearchGPT. This marks a direct foray by OpenAI into the search engine market, which Google still dominates.<\/p>\n\n\n\n

OpenAI will display content from Cond\u00e9 Nast\u2019s various media outlets directly on the AI company\u2019s products as part of the agreement. These outlets include well-renowned magazines such as Vogue, The New Yorker, Cond\u00e9 Nast Traveler, GQ, Architectural Digest, Vanity Fair, Wired, and Bon App\u00e9tit.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

In return, OpenAI will get access to \u201cfeedback and insights on the design and performance of SearchGPT\u201d<\/em> from users. The company plans to use this data to improve its products and enhance user experience. Many sources expressed<\/a> that this data will be used to train AI models currently employed by OpenAI.<\/p>\n\n\n\n

\u201cWe\u2019re committed to working with Cond\u00e9 Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting,\u201d<\/em> said Brad Lightcap, COO at OpenAI.<\/p>\n\n\n\n

Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by Elon Musk\u2019s xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled Gemini Live, a new feature for its flagship AI model. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini into the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled two new smartwatches that harness the power of the company's proprietary Galaxy AI. The news came during the recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor,\u201d <\/em>reads the official page on Samsung<\/a>\u2019s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several BioActive sensors to track users' vital signs such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an Energy Score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystal and comes in three different colors. It has a 590 mAh battery that can last between 60 and 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7 comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost an additional $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the company\u2019s first smart ring, and it aims to provide users with several health services. The announcement came during the latest Samsung Unpacked event, a biannual show hosted by Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring is made for all-day use. It will provide features such as a sleep tracker, heart health monitor, menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d. The score will offer guidance for healthy, balanced living. Users will also receive \u201cpersonalized suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n
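Samsung has not disclosed how the Energy Score is actually computed. Purely as a hypothetical sketch of the idea — normalized sensor readings combined into a single 0-100 number — it might look like the following Python, where the weights and anchor values are invented for illustration:

```python
# Hypothetical illustration only — Samsung's actual Energy Score formula is unpublished.

def energy_score(sleep_hours: float, resting_hr: float, spo2: float) -> int:
    """Combine three readings into a 0-100 wellness score (toy weights)."""
    sleep = min(sleep_hours / 8.0, 1.0)                     # 8h sleep -> full marks
    heart = max(0.0, min((100 - resting_hr) / 40.0, 1.0))   # 60 bpm -> full marks
    oxygen = max(0.0, min((spo2 - 90) / 8.0, 1.0))          # 98% SpO2 -> full marks
    return round(100 * (0.5 * sleep + 0.3 * heart + 0.2 * oxygen))

print(energy_score(8, 60, 98))  # -> 100
```

A real scoring model would be trained on population data rather than hand-picked weights; the sketch only shows the normalize-and-aggregate pattern.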

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit.

The Galaxy Ring has a body of solid titanium. It comes in three different colors: black, gold, and silver. The starting price for the Galaxy Ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n
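Mastercard has not published the underlying algorithm, but the scan-and-flag idea described above can be illustrated with a deliberately simple toy in Python. The logic and threshold here are hypothetical; production systems rely on far richer signals than transaction amounts:

```python
from statistics import mean, stdev
from collections import defaultdict

# Toy illustration only: flag a card when its newest charge falls far outside
# that card's own spending history. This is NOT Mastercard's algorithm.

def flag_suspect_cards(transactions, threshold=3.0):
    """transactions: iterable of (card_id, amount) in chronological order.
    Returns the set of card ids whose latest amount deviates more than
    `threshold` standard deviations from that card's earlier charges."""
    history = defaultdict(list)
    for card, amount in transactions:
        history[card].append(amount)
    suspects = set()
    for card, amounts in history.items():
        if len(amounts) < 3:
            continue  # too little history to judge
        *past, latest = amounts
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(latest - mu) / sigma > threshold:
            suspects.add(card)
    return suspects

txns = [("A", 20), ("A", 25), ("A", 22), ("A", 900),   # sudden spike
        ("B", 50), ("B", 55), ("B", 52), ("B", 54)]    # normal spending
print(flag_suspect_cards(txns))  # card "A" is flagged
```

Real-world detectors combine merchant risk, geography, velocity, and network-wide signals, which is what lets them predict full details of compromised cards rather than just score single charges.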

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n
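To put the 200K-token window and the quoted input price in perspective, here is a rough Python estimate. The ~1.3 tokens-per-word ratio is a common rule of thumb for English text, not an Anthropic figure:

```python
TOKENS_PER_WORD = 1.3  # rough heuristic for English prose, not an official figure

def fits_context(words: int, window: int = 200_000) -> bool:
    """Would a document of `words` words fit in the context window?"""
    return words * TOKENS_PER_WORD <= window

def input_cost(words: int, usd_per_million: float = 3.0) -> float:
    """Estimated cost in USD to submit the document as input."""
    return words * TOKENS_PER_WORD * usd_per_million / 1_000_000

# A 120,000-word novel: ~156K tokens, fits, and costs roughly $0.47 to read once.
print(fits_context(120_000), round(input_cost(120_000), 2))
```

By the same heuristic, output at $15 per million tokens costs five times as much per token, so long generations dominate the bill.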

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made from previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that displays a dedicated window allowing users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overviews feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues, assuring users that it has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overviews feature available to everyone in the US. The feature provides AI-generated answers to user queries. The purpose of AI Overviews was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

\u201cWe\u2019re announcing a partnership with Cond\u00e9 Nast to display content from top brands within our products\u201d,<\/em> the company states in a blog post<\/a>.<\/p>\n\n\n\n

Led by Sam Altman and partly owned by Microsoft, OpenAI is a startup best known for the AI chatbot called ChatGPT. Recently, they have launched a prototype search engine called SearchGPT. This marks a direct foray by OpenAI into the search engine market, which Google still dominates.<\/p>\n\n\n\n

OpenAI will display content from Conde Nast\u2019s various media outlets directly on the AI company\u2019s products as part of the agreement. These outlets include well-renowned magazines such as Vogue, The New Yorker, Cond\u00e9 Nast Traveler, GQ, Architectural Digest, Vanity Fair, Wired, Bon App\u00e9tit, etc.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

In return, OpenAI will get access to \u201cfeedback and insights on the design and performance of SearchGPT\u201d<\/em> from users. The company plans to use this data to improve its products and enhance user experience. Many sources expressed<\/a> that this data will be used to train AI models currently employed by OpenAI.<\/p>\n\n\n\n

\u201cWe\u2019re committed to working with Cond\u00e9 Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting.\u201d<\/em>, said Brad Lightcap, COO at OpenAI.<\/p>\n\n\n\n

Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d. <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model developed by X. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Samsung<\/a>\u2019s website.

Along with several other products, Samsung introduced the Galaxy Watch Ultra and the Galaxy Watch 7 to much anticipation. Industry experts are calling them direct rivals to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watches utilize several BioActive sensors to track users' health metrics such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health app on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last 60 to 80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring, which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in sensors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can use the free sizing kit to<\/em> find their optimum fit.

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n
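As a rough illustration of the pricing quoted above ($3 per million input tokens, $15 per million output tokens), a small sketch can estimate what a single request would cost; the token counts used here are hypothetical examples, not figures from Anthropic.

```python
# Estimate per-request cost at the quoted Claude 3.5 Sonnet rates.
INPUT_PRICE_PER_M = 3.00    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 15.00  # USD per 1M output tokens

def claude_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a 50,000-token prompt with a 2,000-token reply:
print(round(claude_cost(50_000, 2_000), 4))  # 0.18
```

Note that output tokens cost five times as much as input tokens at these rates, so long generations dominate the bill even when prompts are large.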

The company has published data showing 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over previous models, including operating twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifact\u201d is a preview feature that displays a dedicated window that allows users to see, edit, and build upon Claude\u2019s creations in real-time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI Overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant released a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that they will make the AI Overview feature available to every person in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny about the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race. <\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n\n\n\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The new model\u2019s context window has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
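Since the article quotes the pay-as-you-go rates as ranges ($0.35-$0.70 per 1M input tokens, $1.05-$2.10 per 1M output tokens), a short sketch can bound what a workload could cost and check it against the quoted rate limits; the workload figures below are illustrative, not from Google.

```python
# Bound the cost of a request at the quoted Gemini 1.5 Flash rate ranges.
INPUT_RANGE = (0.35, 0.70)   # USD per 1M input tokens (low, high)
OUTPUT_RANGE = (1.05, 2.10)  # USD per 1M output tokens (low, high)

def flash_cost_bounds(input_tokens: int, output_tokens: int) -> tuple:
    """Return (lowest, highest) possible USD cost for one request."""
    lo = input_tokens / 1e6 * INPUT_RANGE[0] + output_tokens / 1e6 * OUTPUT_RANGE[0]
    hi = input_tokens / 1e6 * INPUT_RANGE[1] + output_tokens / 1e6 * OUTPUT_RANGE[1]
    return lo, hi

def fits_free_tier(requests_per_minute: int, requests_per_day: int) -> bool:
    """Check a workload against the quoted free-tier limits (15 RPM, 1,500 RPD)."""
    return requests_per_minute <= 15 and requests_per_day <= 1_500

# A 1M-token prompt with no output lands somewhere between the two bounds:
print(flash_cost_bounds(1_000_000, 0))  # (0.35, 0.7)
```

The paid tier's higher limits (360 RPM, 10,000 RPD) would relax the same check, which is the trade-off the two plans represent.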


Global mass media company Cond\u00e9 Nast has agreed to a multi-year content deal with AI company OpenAI. The deal follows a recent trend of AI companies partnering with media outlets to the benefit of both parties.<\/p>\n\n\n\n

\u201cWe\u2019re announcing a partnership with Cond\u00e9 Nast to display content from top brands within our products\u201d,<\/em> the company states in a blog post<\/a>.<\/p>\n\n\n\n

Led by Sam Altman and backed by Microsoft, OpenAI is a startup best known for the AI chatbot ChatGPT. Recently, it launched a prototype search engine called SearchGPT. This marks a direct foray by OpenAI into the search engine market, which Google still dominates.<\/p>\n\n\n\n

OpenAI will display content from Cond\u00e9 Nast\u2019s various media outlets directly on the AI company\u2019s products as part of the agreement. These outlets include well-renowned magazines such as Vogue, The New Yorker, Cond\u00e9 Nast Traveler, GQ, Architectural Digest, Vanity Fair, Wired, and Bon App\u00e9tit.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

In return, OpenAI will get access to \u201cfeedback and insights on the design and performance of SearchGPT\u201d<\/em> from users. The company plans to use this data to improve its products and enhance user experience. Many sources suggested<\/a> that this data will be used to train the AI models currently employed by OpenAI.<\/p>\n\n\n\n

\u201cWe\u2019re committed to working with Cond\u00e9 Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting.\u201d<\/em>, said Brad Lightcap, COO at OpenAI.<\/p>\n\n\n\n

Neither party has disclosed the financial terms of the contract. Previously, OpenAI had entered into long-term content deals with the Associated Press, Axel Springer, TIME, Vox, NewsCorps, and several other publishers.<\/p>\n","post_title":"OpenAI Teams Up With Cond\u00e9 Nast In A \u201cMulti-Year Content Deal\u201d","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-teams-up-with-conde-nast-in-a-multi-year-content-deal","to_ping":"","pinged":"","post_modified":"2024-08-29 12:19:44","post_modified_gmt":"2024-08-29 02:19:44","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18403","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18317,"post_author":"17","post_date":"2024-08-23 05:23:33","post_date_gmt":"2024-08-22 19:23:33","post_content":"\n

American tech giant Google has recently released the Imagen 3 image generator to the public. Previously, it was only available to select Vertex AI subscribers, but the tool is now free to use for all users in the US. This new tool is reported to bring<\/a> \u201cGoogle's state of the art image generative AI capabilities to application developers.\u201d<\/em><\/p>\n\n\n\n

In a research paper accompanying<\/a> the release, Google states, \u201cWe introduce Imagen 3, a latent diffusion model that generates high-quality images from text prompts.\u201d <\/em>The paper details the quality and safety concerns regarding the product and describes various user experiences.\u00a0<\/p>\n\n\n\n

Currently, the response to the new AI has been mixed<\/a>. Some users are highlighting its improved texture and better attention to detail. Others have criticized the strict content policy as it limits creativity.\u00a0<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Reveals \u201cSora\u201d: A Text-to-Video AI Model Set to Change The Generative AI Landscape.<\/a><\/p>\n\n\n\n

The expansion of Imagen 3\u2019s availability coincides with the release of Grok-2, another AI model, developed by xAI. Notably, Grok-2 has much more relaxed filters, which has led to many comparisons.<\/p>\n\n\n\n

The Imagen 3 was originally announced<\/a> during the Google I\/O event in May. Like other similar AI models, Imagen 3 generates images from text prompts. To stand out from the competition, Google promised that its new tool is \u201ccapable of generating images with even better detail, richer lighting, and fewer distracting artifacts\u201d <\/em>compared to previous models.\u00a0<\/p>\n\n\n\n

Users can try out Imagen 3 via the ImageFX platform.<\/p>\n","post_title":"Google Makes Imagen 3 Available To US Users","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-makes-imagen-3-available-to-us-users","to_ping":"","pinged":"","post_modified":"2024-08-23 05:23:39","post_modified_gmt":"2024-08-22 19:23:39","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18317","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18231,"post_author":"17","post_date":"2024-08-15 19:36:56","post_date_gmt":"2024-08-15 09:36:56","post_content":"\n

Google has unveiled a new feature for its flagship AI model called Gemini Live. The announcement came during the recently concluded<\/a> \u201cMade By Google\u201d event.<\/p>\n\n\n\n

\u201cGemini Live is the most natural way to interact with Gemini. Now you can have free-flowing conversations with Gemini\u201d<\/em>, the company stated during their keynote speech<\/a>.<\/p>\n\n\n\n

Gemini Live allows users to freely converse with Gemini. The AI will respond in real-time to offer solutions or generate answers to a given question. Users can interrupt the AI mid-response to change the topic or explore a particular point further.<\/p>\n\n\n\n

See Related:<\/em><\/strong> Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Live also works in the background or when the phone is locked. So, users can continue chatting with the AI while performing other tasks. Users can choose from 10 different voices for their Gemini model.<\/p>\n\n\n\n

Google hopes this feature will be able to replicate real-life conversations, making the user experience more natural and satisfying. The company has also claimed that it has completely integrated Gemini to the Android user experience.<\/p>\n\n\n\n

Currently, Gemini Live is available only to Gemini Advanced subscribers and is only available in English. Google has stated that the feature will expand to iOS and other languages in the coming weeks.<\/p>\n","post_title":"Introducing Gemini Live: Google's New AI Feature That Allows Real-Time Conversations","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-gemini-live-googles-new-ai-feature-that-allows-real-time-conversations","to_ping":"","pinged":"","post_modified":"2024-08-15 19:38:31","post_modified_gmt":"2024-08-15 09:38:31","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18231","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18078,"post_author":"17","post_date":"2024-08-10 18:30:27","post_date_gmt":"2024-08-10 08:30:27","post_content":"\n

Samsung has unveiled 2 new smartwatches that harness the power of the company's
proprietary Galaxy AI. The news came during the
recently concluded Samsung Unpacked<\/a> event held in Paris.

\u201cBuilt to push boundaries, Galaxy Watch Ultra withstands up to 55\u00b0C heat, 9,000m altitude, 10 ATM water pressure and runs smoothly through it all with a new, powerful 3nm processor.\u201d <\/em>
reads the official page on Sa<\/a>msung\u2019s website.

Along with several other products, Samsung introduced the Galaxy Ultra Watch and the Galaxy and the Galaxy Watch 7 to much anticipation. Industry experts are calling it a direct rival to Apple's smartwatches, with many noting the similarities between the two.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a>

The new smartwatches follow Samsung's approach to making holistic health-related products such as the Galaxy Ring. The watch utilizes several Bioactive sensors to track vital signs of users such as sleep, heart rate, blood pressure, body composition, and more. The data is then analyzed by Galaxy AI to generate an energy score, which offers insight into the user's daily activities. Users will need the latest Samsung Health App on a compatible Android device (Android 11 or above) to unlock the full features.

The Galaxy Watch Ultra is made with titanium and sapphire crystals and comes in 3 different
colors. It has a 590 mAh battery that can last between 60-80 hours depending on usage.

The Galaxy Watch Ultra is currently available in one version for $649.99. The Galaxy Watch 7
comes in two sizes: 40 mm for $299.99 and 44 mm for $329.99. The watches with LTE support will cost a further $50.<\/p>\n","post_title":"From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"from-samsung-unpacked-samsung-brings-ai-to-fashion-with-2-new-smart-watches","to_ping":"","pinged":"","post_modified":"2024-08-10 18:30:34","post_modified_gmt":"2024-08-10 08:30:34","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18078","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18076,"post_author":"17","post_date":"2024-08-04 03:28:14","post_date_gmt":"2024-08-03 17:28:14","post_content":"\n

Samsung has announced the launch of a new smart ring called the Galaxy Ring. It is the
company\u2019s first smart ring which aims to provide users with several health services. The
announcement came during the latest Samsung Unpacked event, a biannual show hosted by
Samsung Electronics.

\u201cThe release of the Galaxy Ring will usher in a new era of wellness. You can now wrap
health tracking around your finger through this new addition to the Galaxy family,\u201d <\/em>the
the company stated in a press release.<\/p>\n\n\n\n

The new ring will utilize Samsung\u2019s proprietary Galaxy AI via the Samsung Health app. The ring
is made for all-day use. It will provide features such as a sleep tracker, heart health monitor,
menstrual cycle tracker, stress monitor, and more.<\/em><\/p>\n\n\n\n

See Related: <\/em><\/strong>Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Benefits of Galaxy Ring<\/h2>\n\n\n\n

The ring\u2019s built-in censors will collect data such as heart rate, blood oxygen level, and sleep
time. The AI in the Samsung Health app will analyze the data and generate an \u201cEnergy Score\u201d.
The score will offer guidance for healthy balanced living. Users will also receive \u201cpersonalized
suggestions\u201d to improve their daily activities.<\/em><\/p>\n\n\n\n

According to Samsung, the ring can last up to 7 days on a single charge. The ring comes in
sizes 5 to 12. Interested parties can utilize the free sizing kit to<\/em> find their optimum fit

The Galaxy ring has a body of solid titanium. It comes in three different colors: black, gold, and
silver. The starting price for the Galaxy ring is $399.<\/p>\n\n\n\n

<\/p>\n","post_title":"News From Samsung Unpacked: Samsung To Bring AI To Healthcare With New Galaxy Ring","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"","post_password":"","post_name":"news-from-samsung-unpacked-samsung-to-bring-ai-to-healthcare-with-new-galaxy-ring","to_ping":"","pinged":"","post_modified":"2024-08-04 03:28:14","post_modified_gmt":"2024-08-03 17:28:14","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18076","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17565,"post_author":"17","post_date":"2024-07-04 18:30:23","post_date_gmt":"2024-07-04 08:30:23","post_content":"\n

Anthropic, one of the leading AI developers in the world, has announced its latest and most proficient AI model yet. The new model is called \u201cClaude 3.5 Sonnet\u201d and is the first in the Claude 3.5 family of AI models. <\/p>\n\n\n\n

\u201cClaude 3.5 Sonnet raises the industry bar for intelligence, outperforming competitor models and Claude 3 Opus on a wide range of evaluations\u201d<\/em><\/strong>, Anthropic stated in a blog post<\/a>. The latest model is also said to outperform previous Claude chatbots while costing less. Currently, the model has a 200K context window and costs $3 per million input tokens and $15 per million output tokens.<\/p>\n\n\n\n

The company has published data that shows 3.5 Sonnet beating its competitors in several industry benchmark tests. According to Anthropic, the new model is a \u201cmarked improvement in grasping nuance, humor, and complex instructions\u201d<\/em>. Several outlets<\/a> have remarked on the advances Anthropic has made over previous models, including that the new model operates twice as fast as Claude 3 Opus, the company\u2019s largest model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Meet Claude 3: The Latest AI Model To Shake The Industry From Anthropic<\/a><\/p>\n\n\n\n

In addition to the new chatbot, Anthropic has released a new feature to enhance user experience. \u201cArtifacts\u201d is a preview feature that opens a dedicated window where users can see, edit, and build upon Claude\u2019s creations in real time.<\/p>\n\n\n\n

Users can try out Claude 3.5 Sonnet for free on Claude\u2019s website. Apple users can also access the chatbot for free via the Claude iOS app. Claude Pro and Team plan members can experience the model with higher rate limits. Anthropic has also teased the release of Claude 3.5 Haiku and Claude 3.5 Opus later this year.<\/p>\n","post_title":"Anthropic\u2019s New Claude 3.5 Sonnet The Latest AI Chatbot Claiming To Be The Best","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"anthropics-new-claude-3-5-sonnet-the-latest-ai-chatbot-claiming-to-be-the-best","to_ping":"","pinged":"","post_modified":"2024-07-04 18:30:27","post_modified_gmt":"2024-07-04 08:30:27","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17565","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17252,"post_author":"17","post_date":"2024-06-10 20:05:30","post_date_gmt":"2024-06-10 10:05:30","post_content":"\n

Google\u2019s AI overview feature has come under criticism from users over the past couple of weeks. In response, the American tech giant came out with a statement addressing the issues and assured that the company has \u201cmade more than a dozen technical improvements\u201d to the system.<\/p>\n\n\n\n

During the recently concluded Google I\/O, the company announced that it would make the AI Overview feature available to everyone in the US. This feature provides AI-generated answers to any inquiry made by the user. The purpose of AI Overview was to enhance user experience and provide better search results.<\/p>\n\n\n\n

See Related: <\/em><\/strong>BlackRock Plans 3% Job Cuts Amidst Bitcoin ETF Anticipation<\/a><\/p>\n\n\n\n

Since then, users have reported multiple<\/a> misleading or outright incorrect responses generated by the AI. Many people have posted these bizarre search results on X (formerly Twitter). This has predictably led to scrutiny of the quality of Google\u2019s products. Experts have also questioned Google\u2019s ability to keep pace with its competitors in the generative AI race.<\/p>\n\n\n\n

Google responded via a blog release,<\/a> saying, <\/em><\/strong>\u201cIn the last week, people on social media have shared some odd and erroneous overviews. We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously. Given the attention AI Overviews received, we wanted to explain what happened and the steps we\u2019ve taken.\u201d<\/em><\/p>\n\n\n\n

The post goes on to elaborate on some of the corrections it has made. These include better detection mechanisms for nonsensical queries, limiting the use of user-generated content, and restricting queries that were not helpful.<\/p>\n","post_title":"Google Improves AI Overviews In Light Of Recent Controversy","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-improves-ai-overviews-in-light-of-recent-controversy","to_ping":"","pinged":"","post_modified":"2024-06-10 20:05:33","post_modified_gmt":"2024-06-10 10:05:33","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17252","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although comparatively lightweight, the new model was still trained using the Gemini 1.5 Pro model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also been increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available under two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};
