Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, called "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future," said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With "Live Translate," users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the "Circle to Search" feature, developed with Google. Users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" to generate search results. Extra attention has gone to the Galaxy S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image-capturing and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation," said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.

Capabilities Of Lumiere

Alongside the research paper, the company also released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from text prompts such as "a dog driving a car wearing funny glasses". Lumiere can also create videos from existing photos, using text prompts as guidance.

Google also demonstrated the AI's capacity for stylized generation, in which the model uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Introducing Samsung Galaxy S24 Series With Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked event. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future,” said the official statement released by the company.

The AI powers several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate text messages and voice calls into their native language in real time. The “Interpreter” feature transcribes live conversations and displays the translation on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is “Circle to Search”, developed with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Samsung has also paid extra attention to the S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation,” said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, a new AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from text prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also turn existing photos into videos, using text as a guideline.

Google also demonstrates the AI’s ability for stylized generation, in which the model takes any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model surpasses existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.


Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words in a single request.
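To get a feel for what a 1-million-token window holds, here is a minimal back-of-the-envelope sketch. It assumes roughly 4 characters per token, a common rule of thumb for English text rather than an official figure, so treat the numbers as estimates only; the example codebase is purely illustrative.

```python
# Rough feasibility check against Gemini Flash's 1-million-token context window.
# Assumption: ~4 characters per token (a common heuristic, not an official figure).

CONTEXT_WINDOW_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4  # assumed average for English prose and code

def estimated_tokens(text: str) -> int:
    """Estimate the token count of `text` with the chars-per-token heuristic."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str) -> bool:
    """Return True if the text likely fits in a single 1M-token request."""
    return estimated_tokens(text) <= CONTEXT_WINDOW_TOKENS

# Illustrative example: a 30,000-line codebase with short lines
codebase = "x = compute(a, b)  # placeholder line\n" * 30_000
print(estimated_tokens(codebase), fits_in_context(codebase))
```

Under this heuristic, even a 30,000-line codebase uses well under a third of the window, which is consistent with the figures Google quotes.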

Gemini Flash is available for public preview in more than 200 regions across the globe. Currently, the model comes in two pricing plans. The free plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and raises the limits to 360 RPM and 10,000 RPD.
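The per-million-token rates above translate into a bill as follows. This sketch uses the low end of the quoted ranges ($0.35 input, $1.05 output); the workload figures are hypothetical, chosen only to show the arithmetic.

```python
# Estimate a monthly Gemini Flash bill from the quoted pay-as-you-go rates.
# Uses the low end of the ranges in the article; the workload is illustrative.

INPUT_RATE_PER_M = 0.35   # USD per 1 million input tokens (low end of range)
OUTPUT_RATE_PER_M = 1.05  # USD per 1 million output tokens (low end of range)

def monthly_cost(requests: int, input_tokens_each: int, output_tokens_each: int) -> float:
    """Total USD cost for a month of requests at the low-end rates."""
    input_total = requests * input_tokens_each
    output_total = requests * output_tokens_each
    return (input_total / 1_000_000) * INPUT_RATE_PER_M + \
           (output_total / 1_000_000) * OUTPUT_RATE_PER_M

# Hypothetical workload: 10,000 requests, each 2,000 tokens in and 500 tokens out
cost = monthly_cost(10_000, 2_000, 500)  # 20M input + 5M output tokens
print(f"${cost:.2f}")  # prints "$12.25"
```

Note that 10,000 requests spread over a month also sits comfortably inside the paid tier’s 360 RPM and 10,000 RPD limits.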

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrated the AI’s ability for stylized generation, in which it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked event. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, developed with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although Flash is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also increased to 1 million tokens. This means it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.
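To see how a 1-million-token window translates into the capacities quoted above, here is a minimal back-of-the-envelope sketch. The tokens-per-word and tokens-per-line rates are illustrative assumptions (common rules of thumb), not official Gemini figures.

```python
# Rough capacity estimates for a 1-million-token context window.
# The per-unit token rates below are assumptions for illustration only.
CONTEXT_WINDOW = 1_000_000  # tokens

TOKENS_PER_WORD = 1.4          # rule-of-thumb rate for English prose
TOKENS_PER_CODE_LINE = 30      # assumed average for typical source code

def max_words(window: int = CONTEXT_WINDOW) -> int:
    """Approximate number of English words that fit in the window."""
    return int(window / TOKENS_PER_WORD)

def max_code_lines(window: int = CONTEXT_WINDOW) -> int:
    """Approximate number of source-code lines that fit in the window."""
    return int(window / TOKENS_PER_CODE_LINE)

print(max_words())       # roughly 714,000 words
print(max_code_lines())  # roughly 33,000 lines
```

Under these assumed rates, the results line up with the article’s “over 700,000 words” and “more than 30,000 lines of code” figures.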

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
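The pay-as-you-go prices above can be turned into a simple cost estimator. This is a sketch using only the per-million-token rates quoted in the article; actual billing tiers and rules may differ.

```python
# Estimate Gemini 1.5 Flash pay-as-you-go cost from token counts,
# using the article's quoted rates ($0.35-$0.70 per 1M input tokens,
# $1.05-$2.10 per 1M output tokens). Illustrative only, not a billing tool.

def flash_cost(input_tokens: int, output_tokens: int,
               input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Return the estimated cost in USD at the given per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a request with 10,000 input tokens and 2,000 output tokens
# at the lower end of the price range.
print(round(flash_cost(10_000, 2_000), 5))  # 0.0056
```

At the upper rates ($0.70 and $2.10), the same call simply doubles or triples the per-token components, so `flash_cost(10_000, 2_000, 0.70, 2.10)` gives the worst-case estimate.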


Google Launches Brand New Vision Language Model: PaliGemma

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added: “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, like Gemma 2 and Gemini 1.5 Flash.

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrated the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future", said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With "Live Translate", users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition, built with the help of Google, is the "Circle to Search" feature. Users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" to generate search results. Extra attention has gone to the Galaxy S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation", said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale", stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although comparatively lightweight, the new model was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also increased to 1 million tokens. This means it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.
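To see how the "over 700,000 words" figure relates to a 1-million-token window, here is a minimal back-of-the-envelope check. The tokens-per-word ratio is an assumption (a common rule of thumb for English text, not a figure from the article); real counts come from the model's own tokenizer.

```python
def fits_in_context(word_count: int, context_tokens: int = 1_000_000,
                    tokens_per_word: float = 1.4) -> bool:
    """Rough check of whether a document fits in the context window.

    tokens_per_word ~1.4 is an assumed rule of thumb for English text;
    actual token counts depend on the tokenizer and the content.
    """
    return word_count * tokens_per_word <= context_tokens

print(fits_in_context(700_000))  # ~980,000 estimated tokens -> True
print(fits_in_context(800_000))  # ~1,120,000 estimated tokens -> False
```

At the assumed ratio, 700,000 words lands just under the 1-million-token limit, which is consistent with the article's claim.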

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available under two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
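As a rough illustration of the pay-as-you-go tier, the quoted per-million-token rates translate into a simple cost estimate. This is a sketch: the rates are the ones quoted in this article and may change, and which end of each range applies depends on your usage tier.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Estimate a Gemini Flash pay-as-you-go bill in USD.

    Rates are dollars per 1 million tokens, using the lower end of the
    article's quoted ranges ($0.35-$0.70 input, $1.05-$2.10 output);
    pass the rate that applies to your tier.
    """
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# Example: a long-document summarization job with 700,000 input tokens
# and 20,000 output tokens at the lower advertised rates.
print(f"${estimate_cost(700_000, 20_000):.4f}")  # -> $0.2660
```

Even a job that nearly fills the 1-million-token context window costs well under a dollar at these rates, which is the "efficient to serve at scale" positioning the announcement emphasizes.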

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)", the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added: "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools Google has released, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

“Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also increased to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe, currently under two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
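As a back-of-the-envelope illustration of those pay-as-you-go rates, the cost of a single request can be estimated from its token counts. The sketch below assumes the lower-tier prices quoted above; the helper name and example token counts are hypothetical:

```python
# Estimate the USD cost of one Gemini 1.5 Flash pay-as-you-go request.
# Rates are the lower-tier figures quoted in the article ($0.35 per 1M
# input tokens, $1.05 per 1M output tokens); actual billing may differ.

def flash_cost_usd(input_tokens: int, output_tokens: int,
                   input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Cost in USD for one request, given per-1M-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 10,000-token prompt producing a 2,000-token reply:
print(f"${flash_cost_usd(10_000, 2_000):.4f}")  # roughly half a cent
```

At these rates, even the free plan's daily cap of 1,500 requests would cost only a few dollars if billed, which fits the model's positioning for high-volume tasks.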

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate “realistic, diverse and coherent motion” from text prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrates the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung's Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future,” said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition, built with Google, is the “Circle to Search” feature. Users can “circle, highlight, scribble on or tap anything on Galaxy S24's screen” to generate search results. Extra attention has gone to the Galaxy S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation,” said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.




See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Introducing Samsung Galaxy S24 Series With Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked event. Among the technologies revealed on the day, Samsung introduced its proprietary AI suite, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future,” said the official statement released by the company.

The AI powers several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature transcribes live conversations and displays the translation on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition, developed with Google, is “Circle to Search”. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Particular attention has gone to the S24 series’ ProVisual Engine and AI editing tools, which the company claims will give users an optimal image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation,” said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Google Launches Brand New Vision-Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM),” the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration.”

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms including GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company’s annual developer conference.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale,” stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available for public preview in more than 200 regions across the globe. The model currently comes in two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
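As a rough illustration of the rate card above, the quoted per-million-token prices can be turned into a simple cost estimate. The function below is a hypothetical sketch using only the figures reported in this article, not an official pricing calculator; actual billing may differ.

```python
# Hypothetical cost sketch based on the per-1M-token figures quoted above.
# Default rates are the low end of the quoted ranges
# ($0.35-$0.70 per 1M input tokens, $1.05-$2.10 per 1M output tokens).

def flash_cost_usd(input_tokens: int, output_tokens: int,
                   input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Estimated pay-as-you-go cost in USD for a batch of requests."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: 2M input tokens and 500k output tokens at the lowest quoted rates.
print(flash_cost_usd(2_000_000, 500_000))  # 2*0.35 + 0.5*1.05 = 1.225
```

At the top of the quoted range the same workload would cost `flash_cost_usd(2_000_000, 500_000, 0.70, 2.10)`, i.e. $2.45, which is why estimating both bounds is useful before committing to the paid tier.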

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), “Thrilled to announce ‘Lumiere’ - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also animate existing photos, using text as a guideline.

Google also demonstrated the AI’s ability for stylized generation, in which it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims the model improves on existing video generation models by using a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.


Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card details of these compromised cards on its network, enabling banks to block them far faster than previously,” the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its stated goals include doubling the detection rate of compromised cards, reducing false positives in fraudulent-transaction detection, and identifying at-risk merchants more quickly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.
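The goals above, doubling the detection rate while cutting false positives, are the two standard axes for evaluating any fraud-flagging system. The sketch below is purely illustrative (the scores, labels, and threshold are invented for the example; Mastercard has not published its model), showing how those two metrics are computed once a model assigns each card a risk score:

```python
# Illustrative only: detection rate (recall) and false-positive rate for a
# card-compromise classifier. Scores and labels here are made-up stand-ins
# for per-card risk scores and ground truth.

def flag_metrics(scores, labels, threshold):
    """labels[i] is True if card i was actually compromised."""
    flagged = [s >= threshold for s in scores]
    tp = sum(f and l for f, l in zip(flagged, labels))        # caught
    fp = sum(f and not l for f, l in zip(flagged, labels))    # wrongly flagged
    positives = sum(labels)
    negatives = len(labels) - positives
    detection_rate = tp / positives if positives else 0.0
    false_positive_rate = fp / negatives if negatives else 0.0
    return detection_rate, false_positive_rate

scores = [0.95, 0.80, 0.60, 0.40, 0.20, 0.10]
labels = [True, True, False, True, False, False]
dr, fpr = flag_metrics(scores, labels, threshold=0.5)
print(dr, fpr)  # 2/3 of compromised cards caught, 1/3 of clean cards flagged
```

Lowering the threshold raises the detection rate but also the false-positive rate, which is why the article frames the two improvements as a joint goal rather than a single number.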

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also create videos from existing photos, using text as a guideline.

Google also demonstrated the AI’s ability for stylized generation, where it takes any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model outperforms existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is “Circle to Search”, a feature built with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the firm believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove them from the network.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its goals include doubling the detection rate of compromised cards, reducing false positives when flagging fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Loss for Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added: “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained by the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also increased to 1 million tokens, which means it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two pricing plans. The free plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
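For a rough sense of what the pay-as-you-go rates quoted above mean per request, here is a minimal sketch. The article gives two rates per direction without stating when each applies; the 128,000-token prompt cutoff below is an assumption made purely for illustration.

```python
# Estimate the cost of a Gemini 1.5 Flash pay-as-you-go request using the
# rates quoted in the article ($0.35/$0.70 per 1M input tokens,
# $1.05/$2.10 per 1M output tokens). The article does not say when each
# rate applies; the prompt-size cutoff below is an assumed illustration.

TIER_CUTOFF = 128_000  # assumed prompt-size boundary between the two rates

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    if input_tokens <= TIER_CUTOFF:
        rate_in, rate_out = 0.35, 1.05   # USD per 1M tokens (low tier)
    else:
        rate_in, rate_out = 0.70, 2.10   # USD per 1M tokens (high tier)
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000

# A 10,000-token prompt with a 1,000-token reply at the low tier:
print(round(request_cost(10_000, 1_000), 6))  # 0.00455
```

Even a fairly large request costs well under a cent at these rates; the practical ceiling for heavy users is the 360 RPM / 10,000 RPD limit rather than the per-token price.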


Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the guidance provided by the ICO would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said, “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card company Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously,” the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants” and alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.
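The approach described here, scoring each new transaction against a card’s history and alerting when it looks anomalous, can be sketched in a few lines. This is a deliberately simplified illustration (a plain z-score rule with an assumed 3-sigma threshold), not Mastercard’s actual system:

```python
from statistics import mean, stdev

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates more than `threshold`
    standard deviations from the card's past amounts. A stand-in for
    the far richer models a real card network would use."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Typical small purchases on one card, then an unusually large charge:
history = [12.50, 9.99, 15.00, 11.25, 13.75]
print(flag_suspicious(history, 950.00))  # True  (flag for review)
print(flag_suspicious(history, 14.00))   # False (in line with history)
```

A production system would score far more signals than amount alone (merchant, location, timing, device), but the alerting shape is the same: a per-card model plus a threshold.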

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives when flagging fraudulent transactions, and identifying at-risk merchants more quickly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM from Google Research, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, Google’s annual developer conference.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and co-founder of Google DeepMind. He explained that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained from the larger Gemini 1.5 Pro model via distillation.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also increased to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two pricing plans. The free plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
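At those rates, the cost of a pay-as-you-go call is straightforward arithmetic: input and output tokens are billed separately, per million. A quick sketch using the lower quoted rates ($0.35 per 1M input tokens, $1.05 per 1M output tokens; the exact rate depends on the request):

```python
def estimate_cost(input_tokens, output_tokens, in_rate=0.35, out_rate=1.05):
    """Estimate the USD cost of one pay-as-you-go call at the lower
    quoted rates (dollars per 1 million tokens)."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 100,000-token prompt that yields a 10,000-token reply:
print(round(estimate_cost(100_000, 10_000), 4))  # 0.0455
```

Even a prompt that uses a tenth of the 1-million-token context window costs only a few cents at these rates, which is the point of a lighter-weight model built to be served at scale.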

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), “Thrilled to announce "Lumiere" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrated the AI’s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked event. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI powers several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, built with Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)," the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and developers can interact with it via a Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini 1.5 Flash. The announcement came during the recently concluded Google I/O, Google's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale," stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He explained that Flash is "optimized for high-volume, high-frequency tasks at scale". Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available in public preview in more than 200 regions across the globe. Currently the model is offered under two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
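As a rough illustration of the pay-as-you-go rates quoted above, the cost of a workload can be estimated from its token counts. Note this is a back-of-the-envelope sketch: the `low_tier` switch between the two quoted rate endpoints is an assumption for illustration, not Google's actual billing logic.

```python
# Back-of-the-envelope cost estimate for Gemini 1.5 Flash pay-as-you-go,
# using the per-million-token rates quoted in the article.
# The low/high tier split is an illustrative assumption, not Google's billing rule.

def flash_cost(input_tokens: int, output_tokens: int, low_tier: bool = True) -> float:
    """Estimate the USD cost of one workload from its token counts."""
    input_rate = 0.35 if low_tier else 0.70   # $ per 1M input tokens
    output_rate = 1.05 if low_tier else 2.10  # $ per 1M output tokens
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: 2M input tokens and 100k output tokens at the lower rate:
# 2 * 0.35 + 0.1 * 1.05 = 0.805
print(f"${flash_cost(2_000_000, 100_000):.3f}")
```

The same function evaluated with `low_tier=False` gives the upper bound, so the two calls bracket what a workload could cost under the quoted range.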

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrated the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video-generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung's Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked event. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future," said the official statement released by the company.

The AI powers several features exclusive to Galaxy smartphones. With "Live Translate", users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the "Circle to Search" feature, built with the help of Google. Users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" to generate search results. Extra attention has gone to the Galaxy S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image-capturing and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation," said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

"We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months," the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner's Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-party Data

Meta states it has "engaged positively with the Information Commissioner's Office (ICO) and welcomes the constructive approach that the ICO has taken". Meta added that the ICO's guidance would help form the basis for "legitimate interests", allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said, "We do not use people's private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We'll use public information – such as public posts and comments, or public photos and captions".

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data-mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.



\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race (May 27, 2024)

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale", stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases of more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available in public preview in more than 200 regions worldwide, under two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
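The quoted per-million-token rates make per-request cost estimation a matter of simple arithmetic. As a rough sketch in Python: the article gives only the rate ranges, so the assumption here (not from the article) is that the lower rate applies to prompts up to 128K tokens and the higher rate above that; Google's pricing page is the authoritative source for the actual threshold.

```python
# Rough cost estimator for Gemini 1.5 Flash pay-as-you-go pricing,
# using the per-million-token rates quoted in the article.
# Assumption (hypothetical, not stated in the article): the lower rate
# applies to prompts of up to 128K tokens, the higher rate above that.

RATES = {
    # tier -> (input $/1M tokens, output $/1M tokens)
    "short": (0.35, 1.05),  # prompts <= 128_000 tokens (assumed threshold)
    "long": (0.70, 2.10),   # prompts > 128_000 tokens
}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    tier = "short" if input_tokens <= 128_000 else "long"
    in_rate, out_rate = RATES[tier]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 10K-token prompt with a 1K-token reply:
print(round(estimate_cost(10_000, 1_000), 5))  # → 0.00455
```

Even at the higher tier, a full 1-million-token prompt would cost under a dollar, which is the point of a model "designed to be fast and efficient to serve at scale".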

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere (January 31, 2024)

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also turn existing photos into videos, using text as a guideline.

Google also demonstrated the AI's capacity for stylized generation, in which it takes a photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model improves on existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung's Official Foray Into The Generative AI Race (January 25, 2024)

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future", read the official statement released by the company.

The AI powers several features exclusive to Galaxy smartphones. With "Live Translate", users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is "Circle to Search", a feature built with Google. Users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" to generate search results. Particular attention has gone to the S24 series' ProVisual Engine and AI editing tools, which the company claims will give users an optimal image capturing and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation", said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI (September 21, 2024)

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta's AI products to "reflect British culture, history, and idioms". The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

"We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months", the company has stated.

The operation was originally announced in 2023 but soon met significant backlash over security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner's Office (ICO) in the United Kingdom. A similar plan has also been blocked in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-party Data

Meta states it has "engaged positively with the Information Commissioner's Office (ICO) and welcomes the constructive approach that the ICO has taken". Meta added that the ICO's guidance would help form the basis for "legitimate interests", allowing the company to collect certain first-party data.

Meta also clarified what data it will collect: "We do not use people's private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We'll use public information - such as public posts and comments, or public photos and captions".

As part of the program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection process, including access to an objection form. Meta says it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection (July 13, 2024)

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card details of these compromised cards on its network, enabling banks to block them far faster than previously", the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants". The AI will then alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove them from the network.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its goals include doubling the detection rate of compromised cards, reducing false positives in fraud detection, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw - enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma (June 2, 2024)

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)", the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research, and integrates open components from SigLIP (Sigmoid Loss for Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added: "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.


Starling Bank's advice to agree on a safe phrase is a simple yet effective safeguard for now, but as AI voice-cloning technology continues to develop, more sophisticated defenses will be needed. While the innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

(From "Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk", September 2024)

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also turn existing photos into videos, using text as a guideline.

Google also demonstrated the AI’s capacity for stylized generation, in which it takes any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video-generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked event. Among the new technologies revealed on the day, Samsung introduced its proprietary AI suite, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI powers several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate text and voice calls into their native language in real time. The “Interpreter” feature transcribes live conversations and displays the translation on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, built with Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Particular attention has gone to the S24 series’ ProVisual Engine and AI editing tools, which the company claims will give users an optimal image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.



Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts from the UK to train its generative AI models. The company says this will allow its AI products to “reflect British culture, history, and idioms”, and believes it will speed the adoption of generative AI by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The plan was originally announced in 2023 but soon met significant backlash over security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the ICO’s guidance would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users: “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants” and alert banks and regulators when a card is suspected of being compromised. AI lets Mastercard predict the complete details of compromised cards, enabling banks to remove those cards from the network promptly.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its stated goals include doubling the detection rate of compromised cards, reducing false positives in fraud detection, and identifying at-risk merchants more quickly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.
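Mastercard’s actual models are proprietary, so nothing concrete can be shown of them; purely as an illustration of the general idea the article describes – scanning a stream of transactions and flagging cards that look compromised – a toy rule-based scorer might look like the sketch below. All thresholds, field names, and rules here are invented for illustration.

```python
# Toy illustration of scanning transactions and flagging suspect cards.
# Thresholds, field names, and rules are invented; they bear no relation
# to Mastercard's proprietary fraud models.
from collections import defaultdict

def flag_suspect_cards(transactions, amount_limit=5000.0, max_countries=3):
    """Flag cards with a single spend above amount_limit, or with
    transactions from more than max_countries distinct countries."""
    countries = defaultdict(set)
    flagged = set()
    for tx in transactions:
        if tx["amount"] > amount_limit:
            flagged.add(tx["card"])
        countries[tx["card"]].add(tx["country"])
        if len(countries[tx["card"]]) > max_countries:
            flagged.add(tx["card"])
    return flagged

txs = [
    {"card": "A", "amount": 9000.0, "country": "US"},
    {"card": "B", "amount": 20.0, "country": "US"},
    {"card": "B", "amount": 15.0, "country": "FR"},
]
print(flag_suspect_cards(txs))  # {'A'}
```

Real systems replace these hand-written rules with learned models scored over billions of transactions, but the input/output shape – transactions in, suspect card identifiers out to banks – is the same.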

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added: “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms including GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, Google’s annual developer conference.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He explained that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases of more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available in public preview in more than 200 regions worldwide, currently under two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
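The quoted pay-as-you-go prices make a quick back-of-envelope cost estimate easy. The sketch below uses the $0.35/$0.70 input and $1.05/$2.10 output rates given above; the assumption that the lower rate applies to prompts up to 128,000 tokens is mine, for illustration, and is not stated in this article.

```python
# Back-of-envelope cost estimate from the Gemini 1.5 Flash pay-as-you-go
# rates quoted above. The 128K-token threshold separating the cheap and
# expensive tiers is an illustrative assumption, not from the article.
PRICES = {
    # (input $/1M tokens, output $/1M tokens)
    "short": (0.35, 1.05),  # prompts up to the assumed threshold
    "long": (0.70, 2.10),   # longer prompts
}
TIER_THRESHOLD = 128_000  # assumed prompt-length cutoff, in tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    tier = "short" if input_tokens <= TIER_THRESHOLD else "long"
    in_price, out_price = PRICES[tier]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 10,000-token prompt with a 1,000-token reply:
# 10,000 * 0.35/1M + 1,000 * 1.05/1M = $0.0035 + $0.00105 = $0.00455
print(f"${estimate_cost(10_000, 1_000):.5f}")
```

Even at the higher tier, a full 1-million-token prompt comes to well under a dollar, which is the point of a lightweight model served at scale.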




Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice-replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, concern is mounting over its potential for abuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more capable and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not only in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank’s advice to agree on a safe phrase is a simple yet effective safeguard for now, but as AI continues to develop, more sophisticated protections will be needed. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for individuals and institutions alike to stay one step ahead of cybercriminals.
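The safe-phrase advice is a human protocol, but the same idea appears in software as shared-secret verification. A minimal sketch, assuming a phrase typed or transcribed into an app: the case- and whitespace-folding normalization is my own illustrative choice, and Python’s `hmac.compare_digest` is used so the comparison runs in constant time rather than leaking information through early exit.

```python
# Minimal sketch of checking an agreed safe phrase as a shared secret.
# The normalization rules are illustrative assumptions; hmac.compare_digest
# performs a timing-attack-resistant, constant-time comparison.
import hmac

def normalize(phrase: str) -> bytes:
    # Fold case and collapse runs of whitespace before comparing.
    return " ".join(phrase.lower().split()).encode("utf-8")

def phrase_matches(agreed: str, spoken: str) -> bool:
    return hmac.compare_digest(normalize(agreed), normalize(spoken))

print(phrase_matches("purple elephant teapot", "  Purple  ELEPHANT teapot "))  # True
print(phrase_matches("purple elephant teapot", "purple elephant kettle"))      # False
```

A spoken check between family members needs no code at all, of course; the sketch only shows how the same shared-secret pattern would be enforced mechanically.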




Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model\u2019s context window has also been increased to 1 million tokens. This means it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n
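As a back-of-the-envelope sketch, the equivalences quoted above imply rough per-unit token rates, which can be used to estimate whether a mixed workload fits in the window. The implied rates below are inferred from the article's own figures, not official conversion factors.

```python
# Rough "does it fit?" check for a 1M-token context window, using per-unit
# rates implied by the figures above (assumptions, not official numbers).
CONTEXT_WINDOW = 1_000_000  # tokens

TOKENS_PER_WORD = CONTEXT_WINDOW / 700_000   # ~1.43 (700,000 words ~ 1M tokens)
TOKENS_PER_AUDIO_HOUR = CONTEXT_WINDOW / 11  # ~90,909 (11 h audio ~ 1M tokens)

def fits_in_context(words: int = 0, audio_hours: float = 0.0) -> bool:
    """True if the estimated token count stays within the window."""
    estimate = words * TOKENS_PER_WORD + audio_hours * TOKENS_PER_AUDIO_HOUR
    return estimate <= CONTEXT_WINDOW

print(fits_in_context(words=500_000))                  # True
print(fits_in_context(words=500_000, audio_hours=8.0)) # False: over budget
```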

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n
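The pay-as-you-go quote above is a price range per million tokens, so a quick estimate is simple arithmetic. The sketch below assumes the lower end of each range applies; which rate you actually pay depends on Google's current pricing tiers, so check the official pricing page before relying on these numbers.

```python
# Rough cost sketch for Gemini 1.5 Flash's pay-as-you-go plan, using the
# per-million-token rates quoted in the article. The choice of the lower
# rates ($0.35 input / $1.05 output) as defaults is an assumption here.

def flash_cost_usd(input_tokens: int, output_tokens: int,
                   input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Estimate cost in USD at the given per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# e.g. a ~1M-token prompt with a short 2,000-token reply:
print(round(flash_cost_usd(1_000_000, 2_000), 4))  # 0.3521
```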

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can create videos from existing photos, using text prompts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability to perform stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d, users can translate texts and voice calls into their native language in real time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition, built with the help of Google, is the \u201cCircle to Search\u201d feature. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to the Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image-capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of a call, providing an extra layer of security. However, the bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
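The safe-phrase scheme above amounts to a pre-shared secret checked at call time. A minimal sketch of that check follows; the normalization step and function names are our own illustrative assumptions, not anything specified by Starling Bank.

```python
# Illustrative sketch of a "safe phrase" check: both parties agree on a
# random phrase in advance, and the callee verifies it before trusting
# the caller. Normalization tolerates case and spacing differences.
import secrets

def normalize(phrase: str) -> str:
    """Lowercase and collapse whitespace so trivial variations still match."""
    return " ".join(phrase.lower().split())

def verify_safe_phrase(agreed: str, spoken: str) -> bool:
    # compare_digest performs a constant-time comparison of the two values.
    return secrets.compare_digest(normalize(agreed).encode(),
                                  normalize(spoken).encode())

print(verify_safe_phrase("purple walrus umbrella", "Purple  Walrus Umbrella"))  # True
print(verify_safe_phrase("purple walrus umbrella", "blue walrus umbrella"))     # False
```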

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. A similar plan has also been put on hold in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected of being compromised. Predicting the complete details of compromised cards enables banks to promptly remove them from their networks. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a smaller-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Preventive Measures By Sterling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model is inspired by PaLI-3, a smaller-scale VLM from Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although it is a comparatively lightweight model, it was still trained using the larger Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also been increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n
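To put those figures in perspective, here is a back-of-the-envelope sketch using only numbers quoted in this article: the 1-million-token context window, its rough equivalence to 700,000 words, and the lowest pay-as-you-go rates cited ($0.35 per 1 million input tokens, $1.05 per 1 million output tokens). The tokens-per-word conversion is an illustrative assumption derived from those figures, not an official tokenizer rate.

```python
# Rough estimates from the article's own figures; not official Google numbers.
CONTEXT_TOKENS = 1_000_000
TOKENS_PER_WORD = CONTEXT_TOKENS / 700_000  # ~1.43 tokens per word (assumed)

def estimate_tokens(words: int) -> int:
    """Approximate token count for a prose workload."""
    return round(words * TOKENS_PER_WORD)

def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Cost at the article's lowest quoted per-1M-token pay-as-you-go rates."""
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

tokens = estimate_tokens(500_000)       # a 500,000-word document set
fits = tokens <= CONTEXT_TOKENS         # comfortably inside the 1M window
cost = estimate_cost_usd(tokens, 2_000) # plus a short generated summary
```

Under these assumptions, a 500,000-word prompt comes to roughly 714,000 tokens, so it fits in a single context window and a call at the lowest rates costs about a quarter of a dollar.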

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside the research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

Galaxy AI is currently available only on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}

{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met with significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom, and a similar plan has been paused in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified which data it will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. One of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. AI will also allow the company to predict the complete details of compromised cards, enabling banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n
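Those goals are framed in terms of two standard classification metrics: the detection rate (the share of truly compromised cards that get flagged) and the false positive rate (the share of healthy cards wrongly flagged). A toy illustration of how the two are computed follows; this is purely for explanation and is not Mastercard's actual system or data.

```python
# Toy example only: how detection rate (recall) and false positive rate
# are computed from labeled outcomes. Card IDs and numbers are invented.

def detection_metrics(flagged, truly_compromised, total_cards):
    """flagged/truly_compromised are sets of card IDs; total_cards is the population size."""
    true_pos = len(flagged & truly_compromised)       # compromised cards we caught
    false_pos = len(flagged - truly_compromised)      # healthy cards wrongly flagged
    negatives = total_cards - len(truly_compromised)  # healthy cards overall
    detection_rate = true_pos / len(truly_compromised)
    false_positive_rate = false_pos / negatives
    return detection_rate, false_positive_rate

flagged = {"card1", "card2", "card3", "card4"}
compromised = {"card1", "card2", "card5", "card6"}
dr, fpr = detection_metrics(flagged, compromised, total_cards=100)
# "Doubling the detection rate" means pushing dr (here 0.5) toward 1.0
# while simultaneously driving fpr (here 2/96) down.
```

The tension between the two numbers is exactly what the initiatives target: flagging more cards raises the detection rate but, without a better model, also raises false positives.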

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
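A safe-phrase check is easy to reason about in code. The sketch below is purely illustrative (it is not Starling Bank's mechanism): it normalizes trivial differences in case and spacing, then uses a constant-time comparison so an eavesdropper gains nothing from timing differences. The example phrases are invented.

```python
import hmac
import unicodedata

# Illustrative sketch only, not Starling Bank's actual mechanism.

def normalize(phrase: str) -> bytes:
    """Trim, lower-case, and Unicode-normalize so trivial typing differences still match."""
    return unicodedata.normalize("NFKC", phrase.strip().lower()).encode("utf-8")

def is_safe_caller(agreed_phrase: str, spoken_phrase: str) -> bool:
    """Constant-time comparison of the pre-agreed phrase against what the caller said."""
    return hmac.compare_digest(normalize(agreed_phrase), normalize(spoken_phrase))

print(is_safe_caller("purple otter", "  Purple Otter "))  # True
print(is_safe_caller("purple otter", "purple otters"))    # False
```

The design point mirrors the bank's advice: the phrase is a shared secret, so it must never travel over an interceptable channel such as text messages, and any copy that does should be deleted.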

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection process, including access to an objection form. Meta claims it will honor any objection it receives.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment network Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest card companies in America, Mastercard believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised, and will predict the complete details of compromised cards, enabling banks to promptly remove them from the network. <\/p>\n\n\n\n
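To make the flagging step concrete, here is a deliberately simplified sketch of per-card anomaly detection. This is an illustration only, not Mastercard's actual system: the z-score approach, the threshold, and the sample amounts are all assumptions for the sake of the example.

```python
from statistics import mean, stdev

def flag_suspicious(amounts, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates strongly from a card's history."""
    if len(amounts) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return new_amount != mu
    # z-score: how many standard deviations the new amount is from the mean
    return abs(new_amount - mu) / sigma > threshold

history = [12.50, 9.99, 15.00, 11.20, 13.80]
print(flag_suspicious(history, 14.00))   # typical amount -> False
print(flag_suspicious(history, 950.00))  # strong outlier -> True
```

A real system would score far richer features (merchant, geography, timing) with learned models, but the shape of the pipeline — score each transaction against history, then alert the issuing bank — is the same.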

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model is inspired by PaLI-3, a compact VLM from Google Research, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was distilled from the larger Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also been increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n
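As a rough sanity check on those capacity figures, a common rule of thumb of about 1.4 tokens per English word puts a 1-million-token window right around the quoted 700,000-word mark. The exact ratio is an assumption here; real tokenizers vary by language and text.

```python
# Rough capacity check: how many English words fit in a 1M-token context window,
# assuming ~1.4 tokens per word (a rule of thumb; actual tokenizer ratios vary).
TOKENS_PER_WORD = 1.4

def words_that_fit(context_tokens: int = 1_000_000) -> int:
    return int(context_tokens / TOKENS_PER_WORD)

print(words_that_fit())  # ~714,000 words, consistent with "over 700,000 words"
```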

Gemini Flash is accessible in public preview in more than 200 regions across the globe. Currently<\/a>, the model is available under two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid plan allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for Lumiere, its new AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X<\/a> (formerly Twitter): \u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can create videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrated the AI\u2019s capability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d, users can translate texts and voice calls into their native language in real time. The \u201cInterpreter\u201d feature translates live conversations into text and displays them on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature, built with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d to generate search results. Extra attention has gone to the Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


According to a recent survey conducted by Starling Bank<\/a> and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying, 46% of those surveyed did not know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would send money even if a phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
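The verification step the bank describes can be sketched in a few lines of code. This is a hypothetical illustration (the phrase and the normalization choices are assumptions, not Starling Bank's implementation); a constant-time comparison is used so the check does not leak how close a guess was.

```python
import hmac
import unicodedata

def verify_safe_phrase(expected: str, spoken: str) -> bool:
    """Compare a spoken phrase against the agreed safe phrase,
    tolerating case and spacing differences."""
    # Normalize Unicode, lowercase, and collapse runs of whitespace
    norm = lambda s: unicodedata.normalize("NFKC", " ".join(s.lower().split()))
    # Constant-time comparison of the normalized phrases
    return hmac.compare_digest(norm(expected).encode(), norm(spoken).encode())

print(verify_safe_phrase("purple otter sandwich", "  Purple  OTTER sandwich "))  # True
print(verify_safe_phrase("purple otter sandwich", "purple otter"))              # False
```

The same idea applies regardless of implementation: the phrase is a shared secret never spoken by the caller first, and never sent over the same channel the attacker might control.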

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n



Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta's AI product to "reflect British culture, history, and idioms". The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

"We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months", the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups, such as the Open Rights Group (ORG) and None of Your Business (NOYB), opposed the initiative, and it was subsequently halted by the Information Commissioner's Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has "engaged positively with the Information Commissioner's Office (ICO) and welcomes the constructive approach that the ICO has taken". Meta added that the guidance provided by the ICO would help form the basis for "legitimate interests", allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said, "We do not use people's private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We'll use public information – such as public posts and comments, or public photos and captions".

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously", the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants", then alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)", the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com; interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale", stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also increased to 1 million tokens. This means it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. The model is currently available in two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
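To make the pay-as-you-go rates concrete, here is a minimal sketch of the billing arithmetic using the lower-tier figures quoted above; the function name and workload numbers are hypothetical, for illustration only.

```python
# Sketch of pay-as-you-go cost arithmetic, assuming the lower-tier rates
# quoted above: $0.35 per 1M input tokens, $1.05 per 1M output tokens.
# The function name and example workload are hypothetical.
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.35,
                      output_rate: float = 1.05) -> float:
    """Estimate USD cost at the given per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Example: a batch that consumes 2M input tokens and produces 500K output tokens.
print(estimate_cost_usd(2_000_000, 500_000))
```

At the upper-tier rates ($0.70 and $2.10), the same call with `input_rate=0.70, output_rate=2.10` would cost exactly twice as much, since both rates double.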

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrated the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future", said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With "Live Translate", users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition, built with the help of Google, is the "Circle to Search" feature. Users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" to generate search results. Extra attention has gone to the Galaxy S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image capture and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation", said TM Roh, president and head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams that use artificial intelligence (AI) to clone people's voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.

A story originally reported by CNN cited a recent survey conducted by Starling Bank and Mortar Research, in which more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed didn't even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a "safe phrase" with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that the phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.
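The safe phrase is a verbal protocol, but the same idea carries over to software. The sketch below is a hypothetical illustration (not anything Starling Bank has published) of how an app might check a pre-agreed phrase: normalize the input, then compare in constant time so the check's timing leaks nothing about how close a guess was.

```python
import hmac

# Hypothetical illustration only - not Starling Bank's implementation.
# Checks a pre-agreed "safe phrase" case-insensitively, ignoring extra
# whitespace, using a constant-time comparison.
def phrase_matches(expected: str, spoken: str) -> bool:
    normalize = lambda s: " ".join(s.lower().split())
    return hmac.compare_digest(normalize(expected).encode(),
                               normalize(spoken).encode())

print(phrase_matches("blue elephant picnic", "Blue  elephant PICNIC"))  # True
print(phrase_matches("blue elephant picnic", "green elephant picnic"))  # False
```

`hmac.compare_digest` is used instead of `==` because it takes the same time whether the strings differ at the first character or the last, which is the standard way to compare secrets.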

The threat posed by AI goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for abuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

As well as a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also create videos from existing photos, using text as a guideline.

Google also demonstrates the AI’s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video-generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, built with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people’s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone’s voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and call friends or family members to request money or sensitive information.

As originally reported by CNN, a recent survey conducted by Starling Bank and Mortar Research found that more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worryingly, 46% of those surveyed did not even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. AI’s ability to mimic voices is advancing rapidly, and it takes only a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.
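The safe-phrase check is something Starling describes as a verbal step, but the idea is simple enough to sketch in code. The snippet below is a minimal illustration only: the case- and whitespace-insensitive normalization and the timing-safe digest comparison are our own choices, not anything the bank prescribes.

```python
import hashlib
import hmac

def normalize(phrase: str) -> str:
    # Case- and whitespace-insensitive, so "Blue Walrus " matches "blue walrus"
    return " ".join(phrase.lower().split())

def phrase_matches(agreed: str, spoken: str) -> bool:
    # Hash both sides and compare digests in constant time,
    # so the check leaks no timing information about the phrase
    a = hashlib.sha256(normalize(agreed).encode()).digest()
    b = hashlib.sha256(normalize(spoken).encode()).digest()
    return hmac.compare_digest(a, b)
```

For example, `phrase_matches("Blue Walrus", " blue   walrus ")` returns `True`, while any other wording fails the check.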

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice-replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank’s advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta’s AI products to “reflect British culture, history, and idioms”. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. A similar plan has also been blocked in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the guidance provided by the ICO would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said: “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data-mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected of being compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from the network.
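Mastercard has not published how its scoring actually works, but the scan-then-alert flow described above can be sketched as a toy rule-based scorer over a transaction feed. Every feature, weight, threshold, and merchant ID below is invented for illustration; a production system would use learned models over far richer data.

```python
from collections import defaultdict

# Hypothetical watchlist of merchants seen in prior breaches (invented)
SUSPICIOUS_MERCHANTS = {"m-9981"}
THRESHOLD = 1.0  # cumulative risk at which a card is flagged (invented)

def score(txn: dict) -> float:
    """Assign an invented risk score to a single transaction."""
    s = 0.0
    if txn["merchant_id"] in SUSPICIOUS_MERCHANTS:
        s += 0.6  # merchant previously associated with compromised cards
    if txn["amount"] < 1.00:
        s += 0.5  # tiny "card-testing" charge, a common fraud precursor
    if txn["country"] != txn["home_country"]:
        s += 0.2  # unusual geography for this cardholder
    return s

def flag_compromised(transactions: list[dict]) -> set[str]:
    """Accumulate risk per card; flag a card once it crosses the threshold."""
    totals: defaultdict[str, float] = defaultdict(float)
    flagged: set[str] = set()
    for t in transactions:
        totals[t["card_id"]] += score(t)
        if totals[t["card_id"]] >= THRESHOLD:
            flagged.add(t["card_id"])  # in a real system: alert the issuing bank
    return flagged
```

A card that makes a sub-dollar charge at a watchlisted merchant scores 0.6 + 0.5 = 1.1 and is flagged, while an ordinary domestic purchase scores 0.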

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Initiatives include doubling the detection rate of compromised cards, reducing false positives in fraud detection, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid loss for Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added: “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although Flash is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also been increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available under two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
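Those per-million-token rates make a back-of-the-envelope cost estimate straightforward. A minimal sketch, assuming (as the quoted ranges do not spell out) that the lower rates apply to one pricing tier of requests and the higher rates to another:

```python
# Pay-as-you-go rates for Gemini 1.5 Flash, in dollars per 1M tokens,
# taken from the ranges quoted above. Which requests fall in which tier
# is an assumption here, not something the quoted pricing states.
RATES = {
    "low":  {"input": 0.35, "output": 1.05},
    "high": {"input": 0.70, "output": 2.10},
}

def request_cost(input_tokens: int, output_tokens: int, tier: str = "low") -> float:
    """Estimate the dollar cost of a single request at the given tier."""
    r = RATES[tier]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000
```

For instance, a 10,000-token prompt with a 1,000-token reply costs 10,000 × 0.35/1M + 1,000 × 1.05/1M = $0.00455 at the lower rates, and $0.0091 at the higher ones.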

Youtube Shorts To Harness The Power Of Generative AI By Integrating Google’s VEO Video Generator

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of VEO. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

The videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans to label Shorts that feature AI-generated content.

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

A story originally reported by CNN quoted that according to a recent survey conducted by Starling Bank<\/a> and Mortar Research, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. In some cases, the survey found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Sterling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta's AI products to "reflect British culture, history, and idioms". The company believes this will make it easier for UK businesses and industries to adopt generative AI technology.

"We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months", the company stated.

The plan was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner's Office (ICO) in the United Kingdom. The plan has also been blocked in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has "engaged positively with the Information Commissioner's Office (ICO) and welcomes the constructive approach that the ICO has taken". Meta added that the ICO's guidance would help form the basis for "legitimate interests", allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users: "We do not use people's private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We'll use public information – such as public posts and comments, or public photos and captions".

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection, including access to an objection form. Meta claims it will not use content from any user who submits an objection.
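Meta's stated criteria — public content only, adults only, no private messages, objections honored — amount to a simple eligibility filter. As a purely illustrative sketch (the `Post` fields and the `eligible_for_training` helper are invented for this example, not Meta's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_age: int
    is_public: bool
    is_private_message: bool
    opted_out: bool       # author submitted the objection form
    text: str

def eligible_for_training(post: Post) -> bool:
    """Mirror the stated criteria: public content from adults,
    no private messages, and honor objections."""
    return (
        post.is_public
        and not post.is_private_message
        and post.author_age >= 18
        and not post.opted_out
    )

posts = [
    Post(25, True, False, False, "public holiday photo"),
    Post(17, True, False, False, "teen's public post"),    # excluded: under 18
    Post(30, False, True, False, "private message"),       # excluded: DM
    Post(40, True, False, True, "objected via the form"),  # excluded: opted out
]
kept = [p.text for p in posts if eligible_for_training(p)]
```

Only the first post survives the filter; the other three each trip one of the stated exclusions.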

Mastercard To Use Generative AI For Card Fraud Detection

American payment card company Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card details of these compromised cards on its network, enabling banks to block them far faster than previously", the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants", alerting banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to remove those cards from the network promptly.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives when flagging fraudulent transactions, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.
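The detection-rate versus false-positive tradeoff those initiatives target can be seen even in a toy rule-based flagger (this is an invented threshold heuristic for illustration, not Mastercard's model):

```python
def flag_transactions(amounts, typical_spend, threshold=3.0):
    """Flag any transaction exceeding `threshold` times the card's
    typical spend. A lower threshold catches more fraud but also
    flags more legitimate purchases (false positives)."""
    return [amount > threshold * typical_spend for amount in amounts]

# A card that usually sees ~$50 purchases suddenly charges $900.
amounts = [12.0, 45.0, 900.0, 30.0]
flags = flag_transactions(amounts, typical_spend=50.0)
```

With `threshold=3.0` only the $900 charge is flagged; dropping the threshold would raise both detection and false-positive rates, which is why tuning that balance is the hard part.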

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)", the company stated during the event. The model is inspired by PaLI-3, an earlier small-scale VLM, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with it via a Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company's annual developer conference.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale", stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He explained that Flash is "optimized for high-volume, high-frequency tasks at scale". Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe, currently under two pricing plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
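Using the lower-bound rates quoted above, a pay-as-you-go bill can be estimated with simple per-million-token arithmetic (the `flash_cost_usd` helper is illustrative, not an official SDK function):

```python
def flash_cost_usd(input_tokens: int, output_tokens: int,
                   in_rate: float = 0.35, out_rate: float = 1.05) -> float:
    """Estimate cost at the quoted lower-bound rates, expressed in
    USD per 1 million tokens for input and output respectively."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# 2M input tokens at $0.35/M = $0.70, plus 1M output tokens at $1.05/M = $1.05
cost = flash_cost_usd(2_000_000, 1_000_000)  # -> 1.75
```

At the upper-bound rates ($0.70 and $2.10), the same workload would cost exactly twice as much.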

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Lumiere can also turn existing photos into videos, using text as a guideline.

Google also demonstrated the AI's ability for stylized generation, where it takes any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Introducing Samsung Galaxy S24 Series With Galaxy AI: Samsung's Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future", said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With "Live Translate", users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition, built with the help of Google, is the "Circle to Search" feature: users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" to generate search results. Extra attention has gone to the S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation", said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones, comprising the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


YouTube Shorts To Harness The Power Of Generative AI By Integrating Google's Veo Video Generator

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that generates six-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google's creations. YouTube also plans on labeling Shorts that feature AI-generated content.

Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people's voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and call friends or family members to request money or sensitive information.

As originally reported by CNN, a recent survey conducted by Starling Bank and Mortar Research found that more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed did not even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the risk this poses. AI voice mimicry is advancing rapidly, and a few seconds of audio are enough for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a "safe phrase" with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, concerns are growing about its potential for abuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, more sophisticated safeguards will be needed. While innovation promises many benefits, the rapid pace of AI development clearly poses new challenges, making it crucial for individuals and institutions alike to stay one step ahead of cybercriminals.
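At its core, a safe-phrase check is just a string comparison. As a minimal sketch (the normalization choices here are assumptions for illustration, not Starling Bank guidance), Python's standard-library `hmac.compare_digest` performs the comparison in constant time, so an automated verifier would not leak information through response timing:

```python
import hmac

def verify_safe_phrase(spoken: str, agreed: str) -> bool:
    """Check a spoken phrase against the agreed one.
    Normalization tolerates casing and extra whitespace;
    hmac.compare_digest avoids timing side channels."""
    def norm(s: str) -> bytes:
        return " ".join(s.lower().split()).encode()
    return hmac.compare_digest(norm(spoken), norm(agreed))

ok = verify_safe_phrase("Purple  Llama", "purple llama")   # matches after normalization
bad = verify_safe_phrase("send money now", "purple llama") # does not match
```

For the human use case the bank describes, the important part is the protocol (a pre-agreed phrase never shared over text), not the comparison mechanics.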

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Introducing Samsung Galaxy S24 Series With Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is “Circle to Search”, a feature built with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” and generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image-capture and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently available only on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams that use artificial intelligence (AI) to clone people’s voices. The bank warns that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone’s voice, often taken from videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and call friends or family members, requesting money or sensitive information.

According to a survey conducted by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed did not know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the risk this poses. AI’s ability to mimic voices is advancing rapidly, and a few seconds of audio is enough for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. The bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately so it cannot be intercepted by fraudsters.

The threat posed by AI goes beyond voice cloning. Earlier this year OpenAI, the company behind the popular chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for abuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank’s advice to agree on a safe phrase is a simple yet effective measure for now, but as AI continues to develop there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for individuals and institutions alike to stay one step ahead of cybercriminals.

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta’s AI products to “reflect British culture, history, and idioms”. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company stated.

The operation was originally announced in 2023 but soon met significant backlash over security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the ICO’s guidance would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users: “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data-collection process, including access to an objection form. Meta says it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected of being compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its targets include doubling the detection rate of compromised cards, reducing false positives in fraud detection, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google added, “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com, and interested developers can also interact with the model via a Hugging Face Space demo. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the company’s annual developer conference.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He explained that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained using the larger Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet – Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available under two pricing plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
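To make the pay-as-you-go figures concrete, the quoted per-million-token prices can be turned into a cost range for a given workload. The sketch below is illustrative only, using the price ranges reported above; the `cost_range` helper is a hypothetical name, not part of any Google API, and the actual price tier applied by Google depends on factors the article does not cover.

```python
# Rough cost estimate for Gemini 1.5 Flash pay-as-you-go usage,
# based solely on the price ranges quoted in the article
# (USD per 1 million tokens, low and high end of the range).
INPUT_RATE = (0.35, 0.70)    # per 1M input tokens
OUTPUT_RATE = (1.05, 2.10)   # per 1M output tokens

def cost_range(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Return the (minimum, maximum) USD cost for a given token count."""
    lo = input_tokens / 1e6 * INPUT_RATE[0] + output_tokens / 1e6 * OUTPUT_RATE[0]
    hi = input_tokens / 1e6 * INPUT_RATE[1] + output_tokens / 1e6 * OUTPUT_RATE[1]
    return round(lo, 4), round(hi, 4)

# Example: a workload of 2M input tokens and 0.5M output tokens
print(cost_range(2_000_000, 500_000))  # (1.225, 2.45)
```

At these rates, even a workload spanning the full 1-million-token context window costs well under a dollar on the input side, which is consistent with Flash being positioned for high-volume tasks.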

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also make videos from existing photos, using text as a guideline.

Google also demonstrated the AI’s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.


Youtube Shorts To Harness The Power Of Generative AI By Integrating Google’s VEO Video Generator

Google also confirmed this development, stating: “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for Shorts via text prompts. With the integration of VEO, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of VEO. The AI will first create four images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans to label Shorts that feature AI-generated content.

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

A story originally reported by CNN quoted that according to a recent survey conducted by Starling Bank<\/a> and Mortar Research, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. In some cases, the survey found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Sterling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, Mastercard believes AI can protect its vast clientele from potential threats.

"Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously", the company revealed on its official website.

The company will use AI to scan "transaction data across billions of cards and millions of merchants". The AI will then alert banks and regulators when a card is suspected to be compromised. Predicting the complete details of compromised cards enables banks to promptly remove those cards from their network.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its goals include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

"Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole," said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.
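The two goals above, a higher detection rate with fewer false positives, are standard classification metrics. The toy function below shows how they are computed; it is purely illustrative and has no connection to Mastercard's actual system.

```python
# Illustrative only: compute the two metrics the article mentions -
# detection rate (share of compromised cards that were flagged) and
# false-positive rate (share of legitimate cards wrongly flagged).

def fraud_metrics(labels, flagged):
    """labels/flagged: parallel lists of bools (True = compromised / flagged)."""
    true_pos = sum(1 for l, f in zip(labels, flagged) if l and f)
    false_pos = sum(1 for l, f in zip(labels, flagged) if not l and f)
    positives = sum(labels)
    negatives = len(labels) - positives
    detection_rate = true_pos / positives if positives else 0.0
    false_positive_rate = false_pos / negatives if negatives else 0.0
    return detection_rate, false_positive_rate

# Four cards: two genuinely compromised; the model flags one of each kind.
labels = [True, True, False, False]
flagged = [True, False, True, False]
dr, fpr = fraud_metrics(labels, flagged)  # (0.5, 0.5)
```

"Doubling the detection rate" means pushing the first number up; "reducing false positives" means pushing the second number down, and the tension between the two is why better models matter.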

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

"Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)", the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for "class-leading fine-tune performance" on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration".

Unlike many of Google's other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google, like Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

"Today, we're introducing Gemini 1.5 Flash: a model that's lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale", stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is "optimized for high-volume, high-frequency tasks at scale". Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two price plans. The "Free of charge" plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The "pay-as-you-go" plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
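To make the pay-as-you-go figures concrete, here is a back-of-the-envelope cost estimate using the lower-tier rates quoted above. The rates are taken from the article; which tier applies to a given workload is an assumption for illustration, so check Google's current pricing page before relying on the numbers.

```python
# Estimate pay-as-you-go cost for Gemini 1.5 Flash at the quoted
# lower-tier rates: $0.35 per 1M input tokens, $1.05 per 1M output tokens.

def flash_cost(input_tokens, output_tokens,
               input_rate=0.35, output_rate=1.05):
    """Cost in USD, given token counts and per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# e.g. a workload with 2M input tokens and 1M output tokens:
cost = flash_cost(2_000_000, 1_000_000)  # 2*0.35 + 1*1.05 = 1.75
```

At these rates, even a workload that exhausts the 1-million-token context window on input costs well under a dollar per request, which is the point of a "lighter-weight" model tier.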

A Glimpse Into The Future Of Generative AI: Google's New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), "Thrilled to announce 'Lumiere' - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts."

See Related: WIN NFT HERO from TRON's Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating "realistic, diverse and coherent motion" from prompts such as "a dog driving a car wearing funny glasses". Additionally, Lumiere can make videos from existing photos, using text as a guideline.

Google also demonstrated the AI's ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a "Space-Time U-Net architecture that generates the entire temporal duration of the video at once".

At the time of writing, Google's Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere's GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung's Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company's biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, "Galaxy AI".

"Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future", said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With "Live Translate", users can translate texts and voice calls into their native language in real time. The "Interpreter" feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the "Circle to Search" feature, built with the help of Google. Users can "circle, highlight, scribble on or tap anything on Galaxy S24's screen" and generate search results. Extra attention has gone to the Galaxy S24 series' ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation", said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Youtube Shorts To Harness The Power Of Generative AI By Integrating Google's VEO Video Generator

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google's Veo to create backgrounds for their Shorts.

"We'll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year", the post stated.

Google also confirmed this development, stating, "Over the next few months, we're bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen".

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for Shorts via text prompts. With the integration of Veo, the company claims users will be able to generate "even more incredible video backgrounds" and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create 4 images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google's creations. YouTube also plans to label Shorts that feature AI-generated content.

Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams that use artificial intelligence (AI) to clone people's voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.

According to a recent survey conducted by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed didn't even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it takes only a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a "safe phrase" with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that the phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.
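The safe-phrase advice is essentially a shared-secret check. A minimal software analogue, purely illustrative since the bank's advice concerns a spoken phrase rather than code, would store only a salted hash of the phrase and compare candidates in constant time:

```python
# Sketch of a shared-secret "safe phrase" check. Illustrative only:
# the phrase is never stored in plain text, and comparison uses
# hmac.compare_digest to avoid timing leaks.
import hashlib
import hmac

def phrase_digest(phrase: str, salt: bytes) -> bytes:
    # Normalize case and whitespace so "Purple  Giraffe" matches "purple giraffe".
    normalized = " ".join(phrase.lower().split())
    return hashlib.pbkdf2_hmac("sha256", normalized.encode(), salt, 100_000)

def verify_phrase(candidate: str, stored: bytes, salt: bytes) -> bool:
    return hmac.compare_digest(phrase_digest(candidate, salt), stored)

salt = b"example-salt"  # in practice, a random per-family value
stored = phrase_digest("purple giraffe", salt)
```

The same principle underlies the bank's advice: the secret is only useful if it is never transmitted over a channel the fraudster might see, which is why Starling warns against sharing it by text.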

The threat posed by AI goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, concerns are growing about its potential for abuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, more sophisticated safeguards will be needed. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


Youtube Shorts To Harness The Power Of Generative AI By Integrating Google’s VEO Video Generator

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google’s Veo to create backgrounds for their Shorts.

“We’ll start integrating Google DeepMind’s most capable model for generating video, Veo, into YouTube Shorts later this year”, the post stated.

Google also confirmed this development, stating: “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google’s creations. YouTube also plans on labeling Shorts that feature AI-generated content.

Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people’s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone’s voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.

According to a recent survey conducted by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What’s more worrying is that 46% of those surveyed didn’t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.

Starling Bank’s advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it’s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta’s AI product to “reflect British culture, history, and idioms”. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups, such as the Open Rights Group (ORG) and None of Your Business (NOYB), opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the guidance provided by the ICO would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said, “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data-mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI allows Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove these cards from their network.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms, including GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space demo. The launch of PaliGemma coincides with other AI tools released by Google, like Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), called Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although this new model is comparatively lighter weight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.
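The capacity figures above can be turned into a rough rule of thumb for whether a document fits in one context window. The sketch below derives a tokens-per-word rate from the article's own numbers (1 million tokens ≈ 700,000 words); it is an illustration of the arithmetic, not an official tokenizer rate.

```python
# Back-of-the-envelope sizing for a 1M-token context window.
# The capacity equivalences are the ones quoted in this article;
# the derived per-word rate is an assumption, not an official figure.

CONTEXT_WINDOW = 1_000_000   # tokens
WORDS_PER_WINDOW = 700_000   # "over 700,000 words" per the article

# Implied average rate: ~1.43 tokens per word.
tokens_per_word = CONTEXT_WINDOW / WORDS_PER_WINDOW

def fits_in_window(word_count: int) -> bool:
    """Estimate whether a document of `word_count` words fits in one window."""
    return word_count * tokens_per_word <= CONTEXT_WINDOW

print(fits_in_window(500_000))  # a 500k-word corpus fits comfortably
print(fits_in_window(900_000))  # a 900k-word corpus overflows the window
```

By the same arithmetic, the quoted 30,000-line code capacity implies roughly 33 tokens per line of code, which is why code exhausts the window far sooner than prose.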

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two price plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
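As a rough illustration of the pay-as-you-go pricing quoted above, the sketch below estimates a bill from token counts. The per-million-token rates are the ones reported in this article and may not match Google's current pricing tiers.

```python
# Rough cost estimator for Gemini 1.5 Flash pay-as-you-go usage,
# using the per-1M-token rate ranges quoted in the article (assumptions).

LOW_RATES = {"input": 0.35, "output": 1.05}    # USD per 1M tokens, low end
HIGH_RATES = {"input": 0.70, "output": 2.10}   # USD per 1M tokens, high end

def estimate_cost(input_tokens: int, output_tokens: int, rates: dict) -> float:
    """Return the estimated USD cost for the given token usage."""
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Example: 10M input tokens and 2M output tokens.
low = estimate_cost(10_000_000, 2_000_000, LOW_RATES)    # 3.50 + 2.10 = 5.60
high = estimate_cost(10_000_000, 2_000_000, HIGH_RATES)  # 7.00 + 4.20 = 11.20
print(f"Estimated cost: ${low:.2f} to ${high:.2f}")
```

Note that output tokens are roughly three times more expensive than input tokens at both ends of the quoted range, so chat workloads with long responses are priced very differently from summarization workloads with long inputs.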

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for its new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating “realistic, diverse and coherent motion” from texts such as “a dog driving a car wearing funny glasses”. Additionally, Lumiere can make videos from existing photos, using texts as guidelines.

Google also demonstrated the AI’s capacity for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models as it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.



Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development

AWS CEO Matt Garman claims customers have responded positively to this new development. “The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable”, he added.

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

A story originally reported by CNN quoted that according to a recent survey conducted by Starling Bank<\/a> and Mortar Research, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. In some cases, the survey found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Sterling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. AI also allows Mastercard to predict the complete details of compromised cards, enabling banks to promptly block these cards on their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n
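For readers unfamiliar with the terms, the two goals above can be made concrete with a toy calculation. The cards, flags, and numbers below are invented purely for illustration; this is not Mastercard's system.

```python
# Toy illustration of the two metrics in Mastercard's stated goals:
# detection rate (share of truly compromised cards that get flagged) and
# false-positive rate (share of legitimate cards flagged by mistake).
# All data here is made up for the sketch.

def detection_metrics(flagged: set[str], compromised: set[str],
                      all_cards: set[str]) -> tuple[float, float]:
    """Return (detection_rate, false_positive_rate) for one scan."""
    legitimate = all_cards - compromised
    detection_rate = len(flagged & compromised) / len(compromised)
    false_positive_rate = len(flagged & legitimate) / len(legitimate)
    return detection_rate, false_positive_rate

cards = {f"card{i}" for i in range(10)}
compromised = {"card1", "card2", "card3", "card4"}
flagged = {"card1", "card2", "card3", "card9"}   # hypothetical model output

dr, fpr = detection_metrics(flagged, compromised, cards)
print(f"detection rate: {dr:.0%}, false-positive rate: {fpr:.0%}")
```

Doubling the detection rate would move the first number up; reducing false positives would push the second toward zero.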

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n\n\n\n
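As a rough back-of-the-envelope sketch of what those pay-as-you-go rates imply: which rate band applies to a given request is not stated in the announcement, so the function below simply brackets the cost between the low and high published rates. The example token counts are hypothetical.

```python
# Bracket the USD cost of one Gemini 1.5 Flash request using the
# per-million-token rates quoted in the article. Which band applies
# is an assumption left open, so we return a (low, high) range.

def flash_cost_range(input_tokens: int, output_tokens: int) -> tuple[float, float]:
    """Return (lowest, highest) possible USD cost for one request."""
    IN_LOW, IN_HIGH = 0.35, 0.70      # $ per 1M input tokens
    OUT_LOW, OUT_HIGH = 1.05, 2.10    # $ per 1M output tokens
    low = input_tokens / 1e6 * IN_LOW + output_tokens / 1e6 * OUT_LOW
    high = input_tokens / 1e6 * IN_HIGH + output_tokens / 1e6 * OUT_HIGH
    return low, high

# Example: a full 1M-token context (roughly 700,000 words) in,
# 10,000 tokens out.
low, high = flash_cost_range(input_tokens=1_000_000, output_tokens=10_000)
print(f"${low:.2f} to ${high:.2f}")
```

Even a maximal context window request stays under a dollar at the quoted rates, which is the point of a "fast and efficient to serve at scale" model.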

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video-sharing platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating: \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

As originally reported by CNN, a recent survey conducted by Starling Bank<\/a> and Mortar Research found that more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. In some cases, the survey found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
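Starling's advice is a purely verbal protocol, but the underlying idea (verifying a shared secret) is easy to illustrate. The sketch below, including the hypothetical phrases and the idea of automating the check at all, is illustrative only and is not anything the bank provides.

```python
# Illustrative sketch: if a safe-phrase check were ever automated (a
# hypothetical, not a Starling product), the comparison should be
# constant-time so that timing differences leak nothing about the phrase.
import hashlib
import hmac
import unicodedata

def phrase_matches(agreed_phrase: str, spoken_phrase: str) -> bool:
    """Compare two phrases case-insensitively, resisting timing attacks."""
    def digest(p: str) -> bytes:
        # Normalize Unicode and casing so trivial variations still match.
        normalized = unicodedata.normalize("NFKC", p).strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).digest()
    # compare_digest runs in time independent of where the bytes differ.
    return hmac.compare_digest(digest(agreed_phrase), digest(spoken_phrase))

print(phrase_matches("blue walrus umbrella", "Blue Walrus Umbrella"))  # True
print(phrase_matches("blue walrus umbrella", "red walrus umbrella"))   # False
```

Hashing before comparing also means neither phrase length nor content influences the comparison time, mirroring the bank's point that the secret itself should never travel over an interceptable channel.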

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of FaceBook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Lumiere can also make videos from existing photos, using text prompts as guidelines.

Google also demonstrated the AI’s capacity for stylized generation, in which it takes any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, developed with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


AWS Trainium And Inferentia Chips

AWS will now also be Anthropic's main training partner. The AI company will use AWS Trainium and Inferentia chips to build its foundation models, with the aim of extracting maximum performance from these chips to train its most advanced AI systems.

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to suit their needs. Additionally, the companies have set up discrete cloud environments for government customers.

AWS CEO Matt Garman says customers have responded positively to this development. “The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable”, he added.

Social media company YouTube has announced plans to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google’s Veo to create backgrounds for their Shorts.

“We’ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year”, the post stated.

Google also confirmed this development, stating: “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that lets users create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans to label Shorts that feature AI-generated content.

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams that use artificial intelligence (AI) to clone people’s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often taken from videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and phone friends or family members to request money or sensitive information.

According to a recent survey conducted by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worryingly, 46% of those surveyed didn’t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the risk this poses. AI’s ability to mimic voices is advancing rapidly, and it takes only a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.
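A safe phrase is a spoken, human check, but the underlying idea, comparing a shared secret while tolerating superficial differences, can be sketched in a few lines of Python. This is a hypothetical illustration, not a tool Starling Bank provides:

```python
import hmac
import unicodedata

def normalize(phrase: str) -> str:
    # Case-fold and collapse whitespace so trivial differences
    # ("Blue  Walrus" vs "blue walrus") don't fail the check.
    folded = unicodedata.normalize("NFKC", phrase).casefold()
    return " ".join(folded.split())

def phrases_match(expected: str, spoken: str) -> bool:
    # Constant-time comparison avoids leaking how much of the phrase matched.
    return hmac.compare_digest(normalize(expected).encode(), normalize(spoken).encode())
```

`hmac.compare_digest` is the standard library’s timing-safe equality check; the normalization step is the judgment call, since a phrase relayed over a noisy call will rarely match byte for byte.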

The threat posed by AI goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for abuse, from financial fraud to the spread of misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, more sophisticated safeguards will be needed. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This, it says, will allow Meta’s AI products to “reflect British culture, history, and idioms”. The company believes this will help UK businesses and industries adopt generative AI technology.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The operation was originally announced in 2023 but soon met significant backlash over security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the ICO’s guidance would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users: “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data-collection process, including access to an objection form. Meta claims it will not contact any user who submits an objection.
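The rules Meta describes (public content only, adult account holders only, and no training on anyone who files an objection) amount to a simple eligibility predicate. A toy sketch, with an invented `Post` record standing in for real platform data:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_public: bool                # private messages and non-public posts are excluded
    author_age: int                # UK accounts under 18 are excluded
    author_objected: bool = False  # objection-form submissions are excluded

def eligible_for_training(post: Post) -> bool:
    # Mirrors the stated policy: public, adult, and no objection on file.
    return post.is_public and post.author_age >= 18 and not post.author_objected
```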

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants” and alert banks and regulators when a card is suspected to be compromised. Using AI allows Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove those cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its goals include doubling the detection rate of compromised cards, reducing false positives in the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole”, said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.
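Mastercard has not published how its detection works, but the general idea of scanning a card’s transaction history and flagging outliers can be illustrated with a deliberately simple statistical rule. This is a toy z-score check, not the company’s method:

```python
from statistics import mean, stdev

def flag_suspicious(history: list[float], new_amount: float, threshold: float = 3.0) -> bool:
    # Toy rule: flag a transaction whose amount sits more than `threshold`
    # sample standard deviations above the card's historical mean.
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return (new_amount - mu) / sigma > threshold
```

A production system would score far richer signals (merchant, geography, velocity) with learned models; the point here is only the shape of the pipeline: per-card history in, anomaly flag out.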

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM, and integrates open components from SigLIP (sigmoid loss for language-image pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on tasks including writing captions for images, answering visual questions, and understanding text in images. Google added: “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI tools Google has released, such as Gemma 2 and Gemini 1.5 Flash.

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also been increased to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available for public preview in more than 200 regions across the globe. Currently, the model is offered under two price plans. The free-of-charge plan is limited to 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
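As a quick sanity check on those prices, here is a back-of-the-envelope estimate of a pay-as-you-go bill. It uses the low end of each quoted range; actual rates depend on prompt size and may change:

```python
def flash_cost_usd(input_tokens: int, output_tokens: int,
                   input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    # Rates are USD per 1 million tokens, taken from the low end of the
    # ranges quoted above ($0.35-$0.70 input, $1.05-$2.10 output).
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate
```

For example, 2 million input tokens and 1 million output tokens would come to $0.70 + $1.05 = $1.75 at the low-end rates.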



AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

As originally reported by CNN, a recent survey conducted by Starling Bank<\/a> and Mortar Research found that more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
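Mechanically, the bank's advice amounts to a shared-secret check between callers. As an illustration only (Starling prescribes no software; the function names and normalization below are our assumptions), a minimal sketch of such a check might look like:

```python
import hmac

def normalize(phrase: str) -> str:
    # Case- and whitespace-insensitive form of the phrase,
    # so "Purple  Otter" and "purple otter" compare equal.
    return " ".join(phrase.lower().split())

def verify_safe_phrase(spoken: str, agreed: str) -> bool:
    # hmac.compare_digest compares in constant time, a sensible
    # habit for any secret comparison even in a toy example.
    return hmac.compare_digest(normalize(spoken), normalize(agreed))
```

The code is incidental; the real safeguard is agreeing on the phrase out of band and never sharing it over an interceptable channel, exactly as the bank advises.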

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom, and a similar plan remains paused in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language-Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via a demo Hugging Face Space. The launch of PaliGemma coincides with other Google AI releases such as Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lighter-weight model, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n
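For readers sizing up the pay-as-you-go plan, the quoted rates reduce to simple per-token arithmetic. A minimal sketch (the function name is ours, and the lower-bound rates from the announcement are hard-coded as defaults; actual billing may differ):

```python
def gemini_flash_cost(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.35,
                      output_rate: float = 1.05) -> float:
    """Estimate pay-as-you-go cost in USD.

    Rates are quoted per 1 million tokens; the defaults use the
    lower bounds cited above ($0.35 input, $1.05 output).
    """
    return (input_tokens / 1_000_000) * input_rate + \
           (output_tokens / 1_000_000) * output_rate

# Example: 2M input tokens and 1M output tokens at the lower-bound
# rates comes to 2 * 0.35 + 1 * 1.05 = 1.75 USD.
```

At the upper-bound rates ($0.70 and $2.10) the same workload would simply cost twice as much, which is the quick sanity check the two-column pricing invites.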

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature, developed with Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to the Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimal image-capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Amazon Commits $4 Billion Investment In Anthropic To Power The Next Generation Of AI Development

“Today we’re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems”, reads the official blog post on Anthropic’s website.

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, Amazon Web Services adopted Anthropic’s Claude family of large language models (LLMs). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total value of the partnership now sits at $8 billion as of 2024.

See Related: Amazon Forays Into The World Of Generative AI With Amazon Bedrock

AWS Trainium And Inferentia Chips

AWS will now also be Anthropic’s main training partner. The AI company will use AWS Trainium and Inferentia chips to build its foundation models, aiming to extract maximum performance from the hardware while training its most advanced AI systems.

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers.

AWS CEO Matt Garman claims customers have responded positively to this new development. “The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable”, he added.

YouTube Shorts To Harness The Power Of Generative AI By Integrating Google’s Veo Video Generator

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google’s Veo to create backgrounds for their Shorts.

“We’ll start integrating Google DeepMind’s most capable model for generating video, Veo, into YouTube Shorts later this year”, the post stated.

Google also confirmed this development, stating, “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

The videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans to label Shorts that feature AI-generated content.

Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people’s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone’s voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.

As originally reported by CNN, a recent survey conducted by Starling Bank and Mortar Research found that more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed did not even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if a phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the risk this poses. AI’s ability to mimic voices is advancing rapidly, and it takes only a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of a call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, concerns are growing about its potential for misuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to grow. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank’s advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology develops, more sophisticated safeguards will be needed. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use public social media posts in the UK to train its generative AI models. This will allow Meta’s AI products to “reflect British culture, history, and idioms”. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The plan was originally announced in 2023 but soon met significant backlash over security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the ICO’s guidance would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users: “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”, then alert banks and regulators when a card is suspected to be compromised. AI will allow Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove those cards from their networks.
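Mastercard has not published how its detection model actually works. As a rough illustration of the general idea described above – scoring each card’s latest activity against its own history and alerting once a threshold is crossed – here is a toy sketch; all function names, data, and the z-score threshold are hypothetical, not Mastercard’s system:

```python
# Toy compromised-card flagging heuristic (illustrative only).
from statistics import mean, stdev

def anomaly_score(history, amount):
    """Z-score of a new transaction amount against the card's own history."""
    if len(history) < 2:
        return 0.0
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(amount - mu) / sigma

def flag_compromised(cards, threshold=3.0):
    """Return card IDs whose latest transaction exceeds the z-score threshold."""
    flagged = []
    for card_id, txns in cards.items():
        *history, latest = txns  # all but the last amount form the baseline
        if anomaly_score(history, latest) > threshold:
            flagged.append(card_id)
    return flagged

cards = {
    "card-001": [12.5, 9.9, 14.2, 11.0, 13.1, 12.0, 950.0],  # sudden spike
    "card-002": [40.0, 42.5, 39.9, 41.2, 40.8, 43.0, 41.5],  # normal activity
}
print(flag_compromised(cards))  # → ['card-001']
```

A production system would of course score far richer features (merchant, geography, velocity) across billions of cards; the sketch only shows the alert-on-outlier shape of the workflow.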

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives when detecting fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, Google’s annual developer conference.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He explained that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although the new model is comparatively lightweight, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also increased to up to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible in public preview in more than 200 regions across the globe. Currently, the model is available in two price plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
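As a back-of-envelope check on the pay-as-you-go figures quoted above, a minimal sketch; the function name is my own, the rates are the article’s quoted low-end per-million-token prices, and Google’s actual billing tiers may differ:

```python
# Estimate pay-as-you-go cost from per-1M-token rates (low-end rates quoted
# in the article; actual Google pricing may differ).
def flash_cost(input_tokens, output_tokens, in_rate=0.35, out_rate=1.05):
    """Cost in USD: tokens scaled to millions, times the per-1M rate."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# e.g. a workload of 2M input tokens and 500k output tokens:
cost = flash_cost(2_000_000, 500_000)
print(f"${cost:.3f}")  # 2 * 0.35 + 0.5 * 1.05 = $1.225
```

Swapping in the high-end rates ($0.70 and $2.10) simply doubles the figure, which brackets the expected spend for a given token budget.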

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter), “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model’s capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Additionally, Lumiere can make videos from existing photos, using text as a guideline.

Google also demonstrated the AI’s capability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series With Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked event. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.

“Today we’re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems”, reads the official blog post on Anthropic’s website.

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, Amazon Web Services adopted Anthropic’s Claude family of large language models (LLMs). In exchange, AWS became Anthropic’s primary cloud service provider. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total value of the partnership now sits at $8 billion as of 2024.

See Related: Amazon Forays Into The World Of Generative AI With Amazon Bedrock

AWS Trainium And Inferentia Chips

AWS will now also be Anthropic’s primary training partner. The AI company will use AWS Trainium and Inferentia chips to build its foundation models, with the aim of extracting maximum performance from these chips to train the most advanced AI systems.

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to suit their needs. Additionally, the companies have set up discrete cloud environments for government customers.

AWS CEO Matt Garman says customers have responded positively to this development. “The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable”, he added.

Youtube Shorts To Harness The Power Of Generative AI By Integrating Google’s VEO Video Generator

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google’s Veo to create backgrounds for their Shorts.

“We’ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year”, the post stated.

Google also confirmed this development, stating, “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans to label Shorts that feature AI-generated content.

Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams that use artificial intelligence (AI) to clone people’s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and phone friends or family members, requesting money or sensitive information.

According to a survey conducted by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed did not know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the risk this poses. The ability of AI to mimic voices is advancing rapidly, and it takes only a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a "safe phrase" with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. The bank advises that the phrase should not be shared via text; if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.
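The bank describes the safe phrase as a human procedure, not a product; as a rough illustration, the check it relies on can be sketched in a few lines of Python. The phrase and helper names below are invented for the example; the only real point is to compare phrases after normalizing trivial differences, and to use a constant-time comparison out of general caution.

```python
import hmac
import unicodedata


def normalize(phrase: str) -> str:
    # Case-fold and collapse whitespace so harmless typing
    # differences don't cause a mismatch.
    return " ".join(unicodedata.normalize("NFKC", phrase).casefold().split())


def safe_phrase_matches(expected: str, spoken: str) -> bool:
    # hmac.compare_digest compares in constant time, avoiding
    # timing side channels on the comparison itself.
    return hmac.compare_digest(normalize(expected).encode(), normalize(spoken).encode())


# Hypothetical agreed phrase:
safe_phrase_matches("purple otter sandwich", "Purple  OTTER sandwich")  # matches
safe_phrase_matches("purple otter sandwich", "green otter sandwich")    # does not
```

In practice the verification happens in conversation, of course; the sketch only makes explicit what "agree on a phrase and check it" means as a protocol.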

The threat posed by AI goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, concerns are growing about its potential for abuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As the technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by their digital footprints.

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, the rapid pace of AI development also poses new challenges, making it crucial for individuals and institutions alike to stay one step ahead of cybercriminals.

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta’s AI products to “reflect British culture, history, and idioms”. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the ICO’s guidance would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users: “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, it believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously”, the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected of being compromised. Predicting the complete details of compromised cards enables banks to promptly remove them from their networks.
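Mastercard has not published how its scoring works, so the following is only a toy stand-in for the statistical side of "flag a card when its activity looks abnormal": it flags a transaction whose amount deviates sharply from that card's own history. Real systems use far richer features (merchant, geography, velocity, network-level signals), and the sample amounts here are invented.

```python
from statistics import mean, stdev


def flag_suspicious(history, amount, threshold=3.0):
    """Toy fraud check: flag `amount` if it lies more than `threshold`
    standard deviations from the card's historical mean."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # flat history: anything different stands out
    return abs(amount - mu) / sigma > threshold


# A card that normally sees modest purchases (hypothetical data):
history = [22.5, 31.0, 27.4, 19.9, 35.2, 24.8]
flag_suspicious(history, 30.0)   # typical amount, not flagged
flag_suspicious(history, 900.0)  # extreme outlier, flagged
```

The design point the article gestures at is that this scoring runs network-side, across all cards and merchants, so the alert can reach the issuing bank before the cardholder notices anything.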

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes generative AI will better protect future transactions from emerging threats. Its initiatives include doubling the detection rate of compromised cards, reducing false positives in the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model is inspired by PaLI-3, a small-scale VLM, and integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, "We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on platforms such as GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via its Hugging Face Space. The launch of PaliGemma coincides with other AI releases from Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal large language model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although Flash is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model's context window has also grown to 1 million tokens, meaning it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available in public preview in more than 200 regions across the globe, currently under two price plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
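Since the pay-as-you-go rates are quoted per million tokens, estimating a bill is simple arithmetic. The sketch below hard-codes the article's lower-tier figures ($0.35 input / $1.05 output per 1M tokens); the function name and example token counts are made up for illustration, and the article's ranges imply other tiers exist, so pass the rates that apply.

```python
def gemini_flash_cost(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.35,
                      output_rate: float = 1.05) -> float:
    """Estimate cost in USD. Rates are USD per 1 million tokens,
    defaulting to the lower tier quoted in the article
    ($0.35-$0.70 input, $1.05-$2.10 output)."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000


# e.g. a request consuming 200,000 input tokens and producing 50,000 output tokens:
gemini_flash_cost(200_000, 50_000)  # → 0.1225 (about 12 cents at the lower tier)
```

At these rates even a full 1-million-token context costs well under a dollar per request, which is the "fast and efficient to serve at scale" positioning in numbers.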

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the new model's capabilities. The AI can generate “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Additionally, Lumiere can make videos from existing photos, using text as a guideline.

Google also demonstrates the AI’s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.



Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.<\/p>\n\n\n\n

\u201cToday we\u2019re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems\u201d<\/em><\/strong>, reads the official blog post on Anthropic\u2019s website<\/a>. <\/p>\n\n\n\n

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement,  Amazon Web Service adopted Anthropic\u2019s Claude family of large language models (LLM). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total cost of this partnership now sits at $8 billion as of 2024.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

A story originally reported by CNN quoted that according to a recent survey conducted by Starling Bank<\/a> and Mortar Research, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. In some cases, the survey found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
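In software terms, the bank's advice amounts to verifying a shared secret over an untrusted channel. A minimal sketch of such a check (the function name and normalisation rules are illustrative, not anything Starling has published):

```python
import hmac

def verify_safe_phrase(spoken: str, agreed: str) -> bool:
    """Check a phrase heard on a call against the one agreed in person.

    Case and extra whitespace are ignored; hmac.compare_digest performs
    a constant-time comparison of the normalised strings.
    """
    def normalise(s: str) -> str:
        return " ".join(s.lower().split())
    return hmac.compare_digest(normalise(spoken), normalise(agreed))

# The agreed phrase should be random, and shared in person rather than by text.
print(verify_safe_phrase("Purple Otter  Sunrise", "purple otter sunrise"))  # True
print(verify_safe_phrase("purple otter sunset", "purple otter sunrise"))    # False
```

The constant-time comparison is overkill for a phone call, but it is the idiomatic way to compare secrets in code and costs nothing here.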

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although it is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The new model\u2019s context window has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n
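Those equivalences can be sanity-checked with back-of-the-envelope token arithmetic. The conversion rates below are rough assumptions chosen for illustration (about 1.4 tokens per English word, about 30 tokens per line of code), not figures published by Google:

```python
CONTEXT_WINDOW = 1_000_000  # tokens, per the Gemini 1.5 Flash announcement

def estimated_tokens(words: int = 0, code_lines: int = 0,
                     tokens_per_word: float = 1.4,
                     tokens_per_code_line: float = 30.0) -> int:
    """Rough token estimate for mixed prose and code at assumed rates."""
    return round(words * tokens_per_word + code_lines * tokens_per_code_line)

def fits_in_context(words: int = 0, code_lines: int = 0) -> bool:
    """True if the estimated token count fits in the 1M-token window."""
    return estimated_tokens(words, code_lines) <= CONTEXT_WINDOW

# ~700,000 words of prose, or ~30,000 lines of code, land under the limit:
print(estimated_tokens(words=700_000))     # 980000
print(fits_in_context(code_lines=30_000))  # True
```

At these assumed rates the article's figures are mutually consistent: 700,000 words comes to roughly 980,000 tokens, just inside the window.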

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available under two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as guidance.<\/p>\n\n\n\n

Google also demonstrated the AI\u2019s capability for stylized generation, in which it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature, developed with Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d to generate search results. Extra attention has gone to the Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users an optimal image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};


While OpenAI considers this unreleased opt-out tool the solution to copyright-related issues, critics doubt it will be able to address the complicated problems that already exist. Although the self-imposed deadline for the launch of the opt-out tool has now passed, it can only be hoped that OpenAI will break its silence soon.<\/p>\n","post_title":"OpenAI failed To Deliver The Opt-Out Tool It Promised By 2025","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-failed-to-deliver-the-opt-out-tool-it-promised-by-2025","to_ping":"","pinged":"","post_modified":"2025-01-13 04:13:51","post_modified_gmt":"2025-01-12 17:13:51","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=20054","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19759,"post_author":"17","post_date":"2024-12-03 04:00:54","post_date_gmt":"2024-12-02 17:00:54","post_content":"\n

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.<\/p>\n\n\n\n

\u201cToday we\u2019re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems\u201d<\/em><\/strong>, reads the official blog post on Anthropic\u2019s website<\/a>. <\/p>\n\n\n\n

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, Amazon Web Services adopted Anthropic\u2019s Claude family of large language models (LLMs). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total value of this partnership now sits at $8 billion as of 2024.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating, \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will first create 4 images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Cornell University. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a> CEO and Co-Founder of Google DeepMind. He goes on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although this new model is a comparatively lighter weight model, it was still trained using the Gemini 1.5 pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, data extraction from long documents and tables. The context window for the new model has also increased up to 1 million. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in 2 price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input token and $1.05 to $2.10 per 1 million output token. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

However, as 2025 dawned, there was no sign of Media Manager's launch, and OpenAI has yet to break its silence on the matter. An employee, speaking on condition of anonymity, told the media outlet TechCrunch that \u201cI don't think it [Media Manager] was a priority. To be honest, I don't think I remember anyone working on it.\u201d This suggests that developing opt-out tools was never a priority for OpenAI.<\/p>\n\n\n\n

Although OpenAI has presented the unreleased opt-out tool as its answer to copyright-related concerns, critics doubt it could address all of the complicated problems that already exist. With the self-imposed launch deadline now passed, it remains to be seen when OpenAI will break its silence.<\/p>\n","post_title":"OpenAI failed To Deliver The Opt-Out Tool It Promised By 2025","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-failed-to-deliver-the-opt-out-tool-it-promised-by-2025","to_ping":"","pinged":"","post_modified":"2025-01-13 04:13:51","post_modified_gmt":"2025-01-12 17:13:51","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=20054","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19759,"post_author":"17","post_date":"2024-12-03 04:00:54","post_date_gmt":"2024-12-02 17:00:54","post_content":"\n

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.<\/p>\n\n\n\n

\u201cToday we\u2019re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems\u201d<\/em><\/strong>, reads the official blog post on Anthropic\u2019s website<\/a>. <\/p>\n\n\n\n

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, Amazon Web Services adopted Anthropic\u2019s Claude family of large language models (LLMs). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen the strategic collaboration to develop and deploy advanced AI systems. The total value of Amazon\u2019s investment now stands at $8 billion as of 2024.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Video-sharing platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s Veo to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating: \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of Veo. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a 6-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

A story originally reported by CNN noted that, according to a recent survey conducted by Starling Bank<\/a> and Mortar Research, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if a phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although the new model is comparatively lightweight, it was still trained using the larger Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window of the new model has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two pricing plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar MosseriInbar, Team Lead and Senior Staff Software Engineer at Google Research\u00a0announced on X<\/a>\u00a0(formerly Twitter),\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d.<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

As well as a research paper, the company also released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can also make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

Launch Of Media Manager<\/h2>\n\n\n\n

However, as 2025 begins, there is still no sign of Media Manager, and OpenAI has yet to break its silence on the matter. An employee, speaking on condition of anonymity, told the media outlet TechCrunch: \u201cI don't think it [Media Manager] was a priority. To be honest, I don't think I remember anyone working on it.\u201d This suggests that developing opt-out tools was never a priority for OpenAI's stakeholders.<\/p>\n\n\n\n

Given that OpenAI regards this unreleased opt-out tool as the solution to its copyright-related issues, critics doubt it would be able to address all of the complicated problems that already exist. Although the self-imposed deadline for the launch of the opt-out tool has now passed, it can only be hoped that OpenAI will break its silence soon.<\/p>\n","post_title":"OpenAI failed To Deliver The Opt-Out Tool It Promised By 2025","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-failed-to-deliver-the-opt-out-tool-it-promised-by-2025","to_ping":"","pinged":"","post_modified":"2025-01-13 04:13:51","post_modified_gmt":"2025-01-12 17:13:51","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=20054","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19759,"post_author":"17","post_date":"2024-12-03 04:00:54","post_date_gmt":"2024-12-02 17:00:54","post_content":"\n

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.<\/p>\n\n\n\n

\u201cToday we\u2019re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems\u201d<\/em><\/strong>, reads the official blog post on Anthropic\u2019s website<\/a>. <\/p>\n\n\n\n

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, Amazon Web Services adopted Anthropic\u2019s Claude family of large language models (LLMs). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total value of the partnership now sits at $8 billion as of 2024.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating: \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate six-second video clips with the help of VEO. The AI will create four images in different styles from a single text prompt. Users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

According to a story originally reported by CNN, a recent survey conducted by Starling Bank<\/a> and Mortar Research found that more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n
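At its core, the safe-phrase advice is a shared-secret check: both parties know a phrase in advance, and the caller must reproduce it. As a toy illustration only (the function name and normalization are hypothetical, not anything Starling publishes), the same idea in software would use a constant-time comparison so the check itself leaks nothing through timing:

```python
import hmac

def verify_safe_phrase(agreed: str, spoken: str) -> bool:
    """Toy safe-phrase check: normalize case and whitespace, then
    compare in constant time via hmac.compare_digest."""
    normalize = lambda s: " ".join(s.lower().split())
    return hmac.compare_digest(normalize(agreed).encode(), normalize(spoken).encode())

print(verify_safe_phrase("purple kettle", "  Purple   Kettle "))  # True
print(verify_safe_phrase("purple kettle", "purple teapot"))       # False
```

The normalization step mirrors how a human would apply the check: small differences in phrasing or capitalization should not matter, but the phrase itself must match exactly.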

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM from Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was still trained using the larger Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also been increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n
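Those per-token rates make back-of-the-envelope cost estimates straightforward. A minimal sketch, assuming the lower pay-as-you-go rates quoted above ($0.35 per million input tokens, $1.05 per million output tokens); the function name and default parameters are illustrative, not an official calculator:

```python
def flash_cost_usd(input_tokens: int, output_tokens: int,
                   input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    """Estimate pay-as-you-go cost from per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# A full 1M-token context plus a 100k-token response:
print(round(flash_cost_usd(1_000_000, 100_000), 3))  # 0.455
```

At these rates, even maxing out the context window costs well under a dollar per request, which is the economics the "high-volume, high-frequency" positioning relies on.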

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside the research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from text prompts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using text as a guideline.<\/p>\n\n\n\n

Google also demonstrates the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d users can translate texts and voice calls to their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays it on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

Most Read

Subscribe To Our Newsletter

By subscribing, you agree with our privacy and terms.

Follow The Distributed

ADVERTISEMENT
\n

See Related: <\/em><\/strong>Top Canadian Media Outlets Sue OpenAI In Copyright Case Potentially Worth Billions<\/a><\/p>\n\n\n\n

Launch Of Media Manager<\/h2>\n\n\n\n

However, no signs of the launch of Media Manager can be seen in the dawn of 2025. OpenAl hasn't yet broken its silence over the matter. However, an employee on the condition of anonymity told TechCrunch\u2013a media outlet that \u201cI don't think it [Media Manager] was a priority. To be honest, I don't think I remember anyone working on it.\u201d This shows how developing opt-out tools was never the priority of stakeholders of OpenAl.<\/p>\n\n\n\n

Keeping in view the fact that OpenAl considers this unreleased opt-out tool the solution to all copyright-related issues, critics think it wouldn't be able to address all existing complicated problems. Although the self-imposed deadline for the launch of the opt-out tool has been surpassed, it can only be hoped that OpenAI will break its silence soon.<\/p>\n","post_title":"OpenAI failed To Deliver The Opt-Out Tool It Promised By 2025","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-failed-to-deliver-the-opt-out-tool-it-promised-by-2025","to_ping":"","pinged":"","post_modified":"2025-01-13 04:13:51","post_modified_gmt":"2025-01-12 17:13:51","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=20054","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19759,"post_author":"17","post_date":"2024-12-03 04:00:54","post_date_gmt":"2024-12-02 17:00:54","post_content":"\n

Amazon Commits $4 Billion Investment In Anthropic To Power The Next Generation Of AI Development

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.

“Today we’re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems,” reads the official blog post on Anthropic’s website.

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, AWS adopted Anthropic’s Claude family of large language models (LLMs). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. Amazon’s total investment in the partnership now stands at $8 billion as of 2024.

See Related: Amazon Forays Into The World Of Generative AI With Amazon Bedrock

AWS Trainium And Inferentia Chips

AWS will now also be Anthropic’s main training partner. The AI company will use AWS Trainium and Inferentia chips to build its foundation models, with the aim of extracting the maximum output from these chips to train the most advanced AI systems.

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to suit their needs. Additionally, the companies have set up discrete cloud environments for government customers.

AWS CEO Matt Garman claims customers have responded positively to this new development. “The response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable,” he added.

YouTube Shorts To Harness The Power Of Generative AI By Integrating Google’s Veo Video Generator

Video platform YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google’s Veo to create backgrounds for their Shorts.

“We’ll start integrating Google DeepMind’s most capable model for generating video, Veo, into YouTube Shorts later this year,” the post stated.

Google also confirmed this development, stating: “Over the next few months, we’re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen”.

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short-form content via text prompts. With the integration of Veo, the company claims users will be able to generate “even more incredible video backgrounds” and visualize improbable concepts.

See Related: From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches

Additionally, YouTube plans to add a feature that generates six-second video clips with the help of Veo. The AI will first create four images in different styles from a single text prompt; users can then choose one of the images, and the AI will create a six-second clip in the same art style. However, this feature will not be available until 2025.

Videos generated with the help of AI will carry a watermark created by SynthID, another of Google’s creations. YouTube also plans on labeling Shorts that feature AI-generated content.

Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people’s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone’s voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.

According to a recent survey by Starling Bank and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. More worrying still, 46% of those surveyed didn’t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.

See Related: OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices

Preventive Measures By Starling Bank

Starling Bank is urging people to protect themselves by agreeing on a “safe phrase” with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.

Starling Bank’s advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it’s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.

Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta’s AI products to “reflect British culture, history, and idioms”. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries.

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months,” the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. A similar plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-Party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the guidance provided by the ICO would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said, “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously,” the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI allows Mastercard to predict the complete details of compromised cards, enabling banks to promptly remove these cards from their networks.

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision-Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands visual and text prompts simultaneously.

“Today, we’re excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM),” the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language-Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We’re providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face Models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model via a demo on Hugging Face Spaces. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale,” stated Demis Hassabis, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although this is a comparatively lighter-weight model, it was still trained using the Gemini 1.5 Pro model.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also been increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is available for public preview in more than 200 regions across the globe. Currently, the model is offered under two pricing plans. The free-of-charge plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The pay-as-you-go plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.
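Those per-token figures make it straightforward to sanity-check expected spend on the pay-as-you-go plan before committing to it. The sketch below is a minimal illustration using only the quoted price ranges; the helper name `gemini_flash_cost` and the simplification that the low or high end of each range applies uniformly are assumptions for illustration, not part of Google's announcement:

```python
def gemini_flash_cost(input_tokens, output_tokens, high_rate=False):
    """Estimate pay-as-you-go cost in USD for Gemini 1.5 Flash.

    Quoted rates: $0.35-$0.70 per 1M input tokens and $1.05-$2.10
    per 1M output tokens. `high_rate` toggles between the two ends
    of each range (a simplification for this sketch).
    """
    input_rate = 0.70 if high_rate else 0.35    # USD per 1M input tokens
    output_rate = 2.10 if high_rate else 1.05   # USD per 1M output tokens
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: a summarization request sending 200k tokens in, 10k tokens out.
low = gemini_flash_cost(200_000, 10_000)
high = gemini_flash_cost(200_000, 10_000, high_rate=True)
print(f"${low:.4f} to ${high:.4f} per request")  # roughly $0.08 to $0.16
```

At these rates, even a request that fills a large fraction of the 1-million-token context window costs well under a dollar, which is the point of a model "optimized for high-volume, high-frequency tasks".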

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for Lumiere, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ - the new text-to-video model we’ve been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating “realistic, diverse and coherent motion” from prompts such as “a dog driving a car wearing funny glasses”. Additionally, Lumiere can make videos from existing photos, using text as a guideline.

Google also demonstrates the AI’s capacity for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video-generation models because it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series With Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool, “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future,” said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Bans Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, built with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” to generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image-capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation,” said TM Roh, President and Head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones: the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.


OpenAI Failed To Deliver The Opt-Out Tool It Promised By 2025

Media Manager, the opt-out tool, was also expected to make things easier for its parent company, OpenAI. The company has been facing legal challenges and accusations from creators for using their content to train its AI models without consent. Creators from all walks of life, including visual artists, YouTubers, computer scientists, designers, photographers, and well-known authors such as Sarah Silverman, are among the petitioners who have sued OpenAI for the unauthorized use of their work. Media Manager was therefore expected to shield OpenAI from intellectual-property lawsuits.

See Related: Top Canadian Media Outlets Sue OpenAI In Copyright Case Potentially Worth Billions

Launch Of Media Manager

However, there was no sign of Media Manager at the dawn of 2025, and OpenAI has yet to break its silence on the matter. An employee, speaking on condition of anonymity, told the media outlet TechCrunch: “I don’t think it [Media Manager] was a priority. To be honest, I don’t think I remember anyone working on it.” This suggests that developing opt-out tools was never a priority for OpenAI’s stakeholders.

Given that OpenAI has presented this unreleased opt-out tool as the solution to copyright-related issues, critics doubt it would address all of the existing, complicated problems. Although the self-imposed deadline for launching the opt-out tool has passed, it can only be hoped that OpenAI will break its silence soon.

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.<\/p>\n\n\n\n

\u201cToday we\u2019re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems\u201d<\/em><\/strong>, reads the official blog post on Anthropic\u2019s website<\/a>. <\/p>\n\n\n\n

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement,  Amazon Web Service adopted Anthropic\u2019s Claude family of large language models (LLM). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total cost of this partnership now sits at $8 billion as of 2024.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

According to a recent survey conducted by Starling Bank<\/a> and Mortar Research, first reported by CNN, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. The survey also found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Starling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

\u201cWe will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months\u201d<\/em><\/strong>, the company has stated<\/a>. <\/p>\n\n\n\n

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Various groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed such an initiative<\/a>. It was subsequently halted by the Information Commissioner\u2019s Office (ICO) in the United Kingdom. This plan has also been banned in the EU. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions<\/a><\/p>\n\n\n\n

ICO Guidelines And First-party Data<\/h2>\n\n\n\n

Meta states it has \u201cengaged positively with the Information Commissioner\u2019s Office (ICO) and welcomes the constructive approach that the ICO has taken\u201d.<\/em> Meta added that the guidance provided by the ICO would help form the basis for \u201clegitimate interests\u201d, allowing the company to collect certain first-party data.\u00a0<\/p>\n\n\n\n

Meta also clarified what data they will collect from users. The company said, \u201cWe do not use people\u2019s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We\u2019ll use public information \u2013 such as public posts and comments, or public photos and captions\u201d<\/em><\/strong>.<\/p>\n\n\n\n

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data collection process, including access to an objection form. Meta claims it will not contact any user who submits an objection.<\/p>\n","post_title":"Meta To Implement Controversial Plan To Use Social Media Posts To Train Generative AI","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"meta-to-implement-controversial-plan-to-use-social-media-posts-to-train-generative-ai","to_ping":"","pinged":"\nhttps:\/\/about.fb.com\/news\/2024\/09\/building-ai-technology-for-the-uk-in-a-responsible-and-transparent-way\/","post_modified":"2024-09-21 04:12:00","post_modified_gmt":"2024-09-20 18:12:00","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18746","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17781,"post_author":"17","post_date":"2024-07-13 05:15:33","post_date_gmt":"2024-07-12 19:15:33","post_content":"\n

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats. <\/p>\n\n\n\n

\u201cMastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously.\u201d<\/em><\/strong>, the company revealed on its official website<\/a>. <\/p>\n\n\n\n

The company will use AI to scan \u201ctransaction data across billions of cards and millions of merchants\u201d. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI will allow them to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network. <\/p>\n\n\n\n

See Related:<\/em><\/strong> Sandbox Issues Security Alerts Involving Phishing Scam Emails<\/a><\/p>\n\n\n\n

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.<\/p>\n\n\n\n

\u201cThanks to our world-leading cyber technology we can now piece together the jigsaw \u2013 enhancing trust to banks, their customers, and the digital ecosystem as a whole,\u201d<\/em><\/strong> said Johan Gerber, Executive Vice President of Security & Cyber Innovation at MasterCard.<\/p>\n","post_title":"Mastercard To Use Generative AI For Card Fraud Detection","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"mastercard-to-use-generative-ai-for-card-fraud-detection","to_ping":"","pinged":"","post_modified":"2024-07-15 03:02:54","post_modified_gmt":"2024-07-14 17:02:54","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17781","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":17141,"post_author":"17","post_date":"2024-06-02 21:45:58","post_date_gmt":"2024-06-02 11:45:58","post_content":"\n

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I\/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously. <\/p>\n\n\n\n

\u201cToday, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)\u201d<\/em><\/strong>, the company stated during the event<\/a>. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Language Image Pre-training) and the Gemma language model.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4<\/a><\/p>\n\n\n\n

According to Google, the model is designed for \u201cclass-leading fine-tune performance\u201d on several tasks including writing captions for images, answering visual questions, and understanding texts in images. Google further added, \"We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration\u201d<\/em><\/strong>.<\/p>\n\n\n\n

Unlike many of Google\u2019s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms such as GitHub, Hugging Face models, Kaggle, Vertex AI Model Garden, and ai.nvidia.com<\/a>. Interested developers can also interact with the model via this Hugging Face Space. The launch of PaliGemma coincides with other AI tools released by Google like Gemma 2 and Gemini 1.5 Flash. <\/p>\n","post_title":"Google Launches Brand New Vision Language Model: PaliGemma","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-launches-brand-new-vision-language-model-paligemma","to_ping":"","pinged":"","post_modified":"2024-06-02 21:46:01","post_modified_gmt":"2024-06-02 11:46:01","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=17141","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":16998,"post_author":"17","post_date":"2024-05-27 09:08:35","post_date_gmt":"2024-05-26 23:08:35","post_content":"\n

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM) called Gemini Flash. The announcement came during the recently concluded Google I\/O, the annual developer conference organized by Google.<\/p>\n\n\n\n

\u201cToday, we\u2019re introducing Gemini 1.5 Flash: a model that\u2019s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale\u201d<\/em><\/strong>, stated Demis Hassabis<\/a>, CEO and Co-Founder of Google DeepMind. He went on to explain that Flash is \u201coptimized for high-volume, high-frequency tasks at scale\u201d. Although Flash is a comparatively lightweight model, it was still trained using the Gemini 1.5 Pro model. <\/p>\n\n\n\n

See Related: <\/em><\/strong>Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini<\/a><\/p>\n\n\n\n

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The context window for the new model has also increased to up to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.<\/p>\n\n\n\n

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently<\/a>, the model is available in two price plans. The \u201cFree of charge\u201d plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The \u201cpay-as-you-go\u201d plan costs users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens, and allows 360 RPM and 10,000 RPD.<\/p>\n","post_title":"Google Announces Gemini Flash As It Attempts To Top The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"google-announces-gemini-flash-as-it-attempts-to-top-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-05-27 09:08:38","post_modified_gmt":"2024-05-26 23:08:38","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=16998","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15185,"post_author":"17","post_date":"2024-01-31 02:35:31","post_date_gmt":"2024-01-30 15:35:31","post_content":"\n

Google recently revealed a demo trailer for their new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.<\/p>\n\n\n\n

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research,\u00a0announced on X<\/a>\u00a0(formerly Twitter):\u00a0\u201cThrilled to announce \"Lumiere\" - the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.\u201d<\/em><\/p>\n\n\n\n

See Related: WIN NFT HERO from TRON\u2019s Metaverse Gears Up for the GameFi Stage<\/a><\/p>\n\n\n\n

Capabilities Of Lumiere<\/h2>\n\n\n\n

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating \u201crealistic, diverse and coherent motion\u201d from texts such as \u201ca dog driving a car wearing funny glasses\u201d. Additionally, Lumiere can make videos from existing photos, using texts as guidelines.<\/p>\n\n\n\n

Google also demonstrated the AI\u2019s ability for stylized generation, where it uses any photo as a reference and creates a video in the same art style.<\/p>\n\n\n\n

In the research paper<\/a>, Google claims its model is superior to existing video generation models as it uses \u201cSpace-Time U-Net architecture that generates the entire temporal duration of the video at once\u201d. <\/p>\n\n\n\n

At the time of writing, Google\u2019s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere\u2019s GitHub page<\/a>.<\/p>\n","post_title":"A Glimpse Into The Future Of Generative AI: Google\u2019s New AI Model Lumiere","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"a-glimpse-into-the-future-of-generative-ai-googles-new-ai-model-lumiere","to_ping":"","pinged":"","post_modified":"2024-01-31 02:39:06","post_modified_gmt":"2024-01-30 15:39:06","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15185","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":15106,"post_author":"17","post_date":"2024-01-25 02:20:53","post_date_gmt":"2024-01-24 15:20:53","post_content":"\n

Samsung recently unveiled the Galaxy S24 series of smartphones at the company\u2019s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called \u201cGalaxy AI\u201d.<\/p>\n\n\n\n

\u201cEmpowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future\u201d<\/em>,\u00a0said the official statement released by the company<\/a>.<\/p>\n\n\n\n

The AI will power several features exclusive to Galaxy smartphones. With \u201cLive Translate\u201d, users can translate texts and voice calls into their native language in real-time. The \u201cInterpreter\u201d feature translates live conversations into text and displays them on a split screen.<\/p>\n\n\n\n

See Related:<\/strong><\/em> Samsung Ban Employees From Using AI Tools Like ChatGPT<\/a><\/p>\n\n\n\n

Circle To Search Feature<\/h2>\n\n\n\n

Another notable addition is the \u201cCircle to Search\u201d feature, developed with the help of Google. Users can \u201ccircle, highlight, scribble on or tap anything on Galaxy S24\u2019s screen\u201d and generate search results. Extra attention has gone to the Galaxy S24 series\u2019 ProVisual Engine and AI editing tools, which the company claims will offer users the optimal image-capturing and editing experience. <\/p>\n\n\n\n

\u201cThe Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation\u201d<\/em>, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.\u00a0<\/p>\n\n\n\n

The Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services\u00a0will be free until 2025<\/a>.\u00a0<\/p>\n","post_title":"Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung\u2019s Official Foray Into The Generative AI Race","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"introducing-samsung-galaxy-s24-series-with-galaxy-ai-samsungs-official-foray-into-the-generative-ai-race","to_ping":"","pinged":"","post_modified":"2024-01-25 02:20:57","post_modified_gmt":"2024-01-24 15:20:57","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=15106","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"}],"next":false,"total_page":false},"paged":1,"class":"jblog_block_13"};

\n

In May 2024, OpenAI announced that it would launch \u201cMedia Manager\u201d, a first-of-its-kind opt-out tool, by 2025. Media Manager was expected to address the grievances of content creators by giving them control over their content and intellectual property: content owners would be able to tell OpenAI that particular content belongs to them, and OpenAI could not use that content unless the creators allowed it.<\/p>\n\n\n\n

Media Manager was also expected to make things easier for OpenAI<\/a> itself. The company has been facing legal challenges and accusations from several creators for exploiting their content to train its AI models without consent. Creators from all walks of life, including visual artists, YouTubers, computer scientists, designers, photographers, and even distinguished authors like Sarah Silverman, are among the petitioners who have sued OpenAI for unauthorized use of their work. Media Manager was thus expected to protect OpenAI from intellectual property-related lawsuits.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Top Canadian Media Outlets Sue OpenAI In Copyright Case Potentially Worth Billions<\/a><\/p>\n\n\n\n

Launch Of Media Manager<\/h2>\n\n\n\n

However, as of the start of 2025, there is no sign of Media Manager's launch, and OpenAI has not yet broken its silence over the matter. An employee, speaking on condition of anonymity, told the media outlet TechCrunch: \u201cI don't think it [Media Manager] was a priority. To be honest, I don't think I remember anyone working on it.\u201d This suggests that developing the opt-out tool was never a priority for OpenAI's stakeholders.<\/p>\n\n\n\n

While OpenAI has presented the unreleased opt-out tool as the solution to copyright-related issues, critics doubt it would be able to address all of the existing complications. Although the self-imposed deadline for the launch has now passed, it can only be hoped that OpenAI will break its silence soon.<\/p>\n","post_title":"OpenAI failed To Deliver The Opt-Out Tool It Promised By 2025","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"openai-failed-to-deliver-the-opt-out-tool-it-promised-by-2025","to_ping":"","pinged":"","post_modified":"2025-01-13 04:13:51","post_modified_gmt":"2025-01-12 17:13:51","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=20054","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":19759,"post_author":"17","post_date":"2024-12-03 04:00:54","post_date_gmt":"2024-12-02 17:00:54","post_content":"\n

Amazon has announced a $4 billion investment in AI company Anthropic to facilitate the development of generative AI models. This is the second significant commitment between Amazon Web Services (AWS) and Anthropic since 2023. Both companies released separate statements confirming the news.<\/p>\n\n\n\n

\u201cToday we\u2019re announcing an expansion of our collaboration with Amazon Web Services (AWS), deepening our work together to develop and deploy advanced AI systems\u201d<\/em><\/strong>, reads the official blog post on Anthropic\u2019s website<\/a>. <\/p>\n\n\n\n

Amazon first partnered with Anthropic in September 2023 in a deal initially worth $4 billion. As part of the agreement, Amazon Web Services adopted Anthropic\u2019s Claude family of large language models (LLMs). In exchange, AWS became the primary cloud service provider for Anthropic. According to Anthropic, this latest expansion will deepen their strategic collaboration to develop and deploy advanced AI systems. The total value of the partnership now sits at $8 billion as of 2024.<\/p>\n\n\n\n

See Related: <\/em><\/strong>Amazon Forays Into The World Of Generative AI With Amazon Bedrock<\/a><\/p>\n\n\n\n

AWS Trainium And Inferentia Chips<\/h2>\n\n\n\n

AWS will now also be Anthropic's main training partner. The AI company will utilize AWS Trainium and Inferentia chips to build its foundation models. The aim is to extract the maximum output from these chips to train the most advanced AI systems. <\/p>\n\n\n\n

The companies will also give AWS customers early access to exclusive customization options for a limited period. Users can fine-tune Claude models on the Amazon Bedrock platform to cater to their needs. Additionally, the companies have set up discrete cloud environments for government customers. <\/p>\n\n\n\n

AWS CEO Matt Garman claims customers have responded positively<\/a> to this new development. \u201cThe response from AWS customers who are developing generative AI applications powered by Anthropic in Amazon Bedrock has been remarkable\u201d<\/em>, he added.\u00a0<\/p>\n","post_title":"Amazon Commits $4 Billion Investment In Anthropic To Power The Generation Of AI Development","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"amazon-commits-4-billion-investment-in-anthropic-to-power-the-generation-of-ai-development","to_ping":"","pinged":"","post_modified":"2024-12-03 04:01:03","post_modified_gmt":"2024-12-02 17:01:03","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=19759","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18870,"post_author":"17","post_date":"2024-09-25 19:56:24","post_date_gmt":"2024-09-25 09:56:24","post_content":"\n

Social media company YouTube has announced its plan to integrate generative AI into YouTube Shorts. In a blog post, YouTube confirmed that users will be able to use Google\u2019s VEO to create backgrounds for their Shorts. <\/p>\n\n\n\n

\u201cWe\u2019ll start integrating Google DeepMind's most capable model for generating video, Veo, into YouTube Shorts later this year<\/em><\/strong>\u201d, the post stated<\/a>. <\/p>\n\n\n\n

Google also confirmed<\/a> this development, stating. \u201cOver the next few months, we\u2019re bringing our advanced generative AI models, Veo and Imagen 3, to YouTube creators through Dream Screen\u201d<\/em><\/strong>. <\/p>\n\n\n\n

In 2023, YouTube introduced Dream Screen, an AI tool that allows users to create backgrounds for short content via text prompts. With the integration of VEO, the company claims users will be able to generate \u201ceven more incredible video backgrounds\u201d and visualize improbable concepts. <\/p>\n\n\n\n

See Related:<\/em><\/strong> From Samsung Unpacked: Samsung Brings AI To Fashion With 2 New Smart Watches<\/a><\/p>\n\n\n\n

Additionally, YouTube plans to add a feature that can generate 6-second video clips with the help of VEO. The AI will create images in 4 images in different styles from a single text prompt. Users can then choose one of the images and the AI will create a 6-second clip with the same art style. However, this feature will not be available until 2025. <\/p>\n\n\n\n

The videos generated with the help of AI will have a watermark created by SynthID, another one of Google\u2019s creations. YouTube also plans on labeling Shorts that feature AI-generated content.<\/p>\n","post_title":"Youtube Shorts To Harness The Power Of Generative AI By Integrating Google\u2019s VEO Video Generator","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"youtube-shorts-to-harness-the-power-of-generative-ai-by-integrating-googles-veo-video-generator","to_ping":"","pinged":"","post_modified":"2024-09-25 19:56:29","post_modified_gmt":"2024-09-25 09:56:29","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18870","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18852,"post_author":"18","post_date":"2024-09-25 19:10:42","post_date_gmt":"2024-09-25 09:10:42","post_content":"\n

In a growing concern for everyday online users, Starling Bank has issued a warning about a new wave of scams using artificial intelligence (AI) to clone people\u2019s voices. The bank has raised the alarm that millions could be vulnerable to this increasingly sophisticated fraud.<\/p>\n\n\n\n

These scams are unsettlingly simple. Fraudsters need only a few seconds of someone's voice, often found in videos posted online, to create a replica. With this AI-generated voice, they can impersonate the victim and make phone calls to friends or family members, requesting money or sensitive information.<\/p>\n\n\n\n

A story originally reported by CNN quoted that according to a recent survey conducted by Starling Bank<\/a> and Mortar Research, more than a quarter of respondents had been targeted by an AI voice-cloning scam within the last year. What\u2019s more worrying is that 46% of those surveyed didn\u2019t even know such scams existed, leaving them vulnerable to deception. In some cases, the survey found that 8% of people would willingly send money even if the phone call seemed suspicious, simply because the voice sounded familiar.<\/p>\n\n\n\n

People frequently post content online, including audio or video recordings of their voice, without considering the potential risk this poses. The ability of AI to mimic voices is advancing rapidly, and it only takes a few seconds of audio for a fraudster to create an effective clone. This makes it easier than ever for scammers to prey on the emotional bonds between family members, tricking people into sending money to what they believe are loved ones in need.<\/p>\n\n\n\n

See Related: <\/em><\/strong>OpenAI Has Recently Unveiled Their Latest Voice Engine, Which Is Capable Of Cloning Human Voices<\/a><\/p>\n\n\n\n

Preventive Measures By Sterling Bank<\/h2>\n\n\n\n

Starling Bank is urging people to take steps to protect themselves by agreeing on a \"safe phrase\" <\/em>with family members. This simple, random phrase can be used to verify the identity of the person on the other end of the call, providing an extra layer of security. However, the bank advises that this phrase should not be shared via text, and if it is, the message should be deleted immediately to prevent it from being intercepted by fraudsters.<\/p>\n\n\n\n

The threat posed by AI technology goes beyond voice cloning. Earlier this year, OpenAI, the company behind the popular AI chatbot ChatGPT, introduced a voice replication tool called Voice Engine but chose not to make it widely available due to concerns about misuse. As AI becomes more adept at mimicking human voices, there are growing concerns about its potential for misuse, from financial fraud to spreading misinformation.<\/p>\n\n\n\n

Looking ahead, the risks associated with AI-driven scams are likely to expand. As technology becomes more advanced and accessible, scammers will find new ways to exploit it. Consumers must remain vigilant, not just in guarding their financial information but in understanding the new vulnerabilities created by digital footprints.<\/p>\n\n\n\n

Starling Bank's advice to agree on a safe phrase is a simple yet effective solution for now, but as AI technology continues to develop, there will be a growing need for more sophisticated safeguards. While innovation promises many benefits, it\u2019s clear that the rapid pace of AI development also poses new challenges, making it crucial for both individuals and institutions to stay one step ahead of cybercriminals.<\/p>\n","post_title":"Starling Bank Warns How Voice-Cloning Technology Puts Millions At Risk","post_excerpt":"","post_status":"publish","comment_status":"closed","ping_status":"closed","post_password":"","post_name":"starling-bank-warns-how-voice-cloning-technology-puts-millions-at-risk","to_ping":"","pinged":"","post_modified":"2024-09-25 19:10:49","post_modified_gmt":"2024-09-25 09:10:49","post_content_filtered":"","post_parent":0,"guid":"https:\/\/www.thedistributed.co\/?p=18852","menu_order":0,"post_type":"post","post_mime_type":"","comment_count":"0","filter":"raw"},{"ID":18746,"post_author":"17","post_date":"2024-09-21 04:11:53","post_date_gmt":"2024-09-20 18:11:53","post_content":"\n

Meta, the company behind Facebook, intends to use social media posts in the UK to train its generative AI models. This will allow Meta\u2019s AI product to \u201creflect British culture, history, and idioms\u201d. The company believes this will facilitate the adoption of generative AI technology by UK businesses and industries. <\/p>\n\n\n\n

“We will begin training for AI at Meta using public content shared by adults on Facebook and Instagram in the UK over the coming months”, the company has stated.

The operation was originally announced in 2023 but soon met significant backlash owing to security and privacy concerns. Groups such as the Open Rights Group (ORG) and None of Your Business (NOYB) opposed the initiative, and it was subsequently halted by the Information Commissioner’s Office (ICO) in the United Kingdom. The plan has also been banned in the EU.

See Related: Meta Introduces Advanced AI Chatbots To All Its Apps, Revolutionizing User Interactions

ICO Guidelines And First-party Data

Meta states it has “engaged positively with the Information Commissioner’s Office (ICO) and welcomes the constructive approach that the ICO has taken”. Meta added that the guidance provided by the ICO would help form the basis for “legitimate interests”, allowing the company to collect certain first-party data.

Meta also clarified what data it will collect from users. The company said, “We do not use people’s private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We’ll use public information – such as public posts and comments, or public photos and captions”.
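Meta’s stated criteria amount to a simple predicate over each piece of content: it must be public, it must come from an adult account, and it must not be a private message. A minimal sketch of that policy check — the `Post` type and its field names are hypothetical, invented purely for illustration, not Meta’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_public: bool
    author_age: int
    is_private_message: bool = False

def eligible_for_training(post: Post) -> bool:
    # Mirrors the stated policy: public content only, adult authors only,
    # and never private messages.
    return post.is_public and post.author_age >= 18 and not post.is_private_message

posts = [
    Post("Public holiday photo caption", is_public=True, author_age=34),
    Post("Public post by a 16-year-old", is_public=True, author_age=16),
    Post("Message to a friend", is_public=False, author_age=34, is_private_message=True),
]
training_set = [p.text for p in posts if eligible_for_training(p)]
# training_set == ["Public holiday photo caption"]
```

Of the three sample posts, only the first passes all three conditions; the teen’s public post and the private message are both excluded, matching the policy as quoted.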

As part of this program, adult users of Facebook and Instagram in the UK will receive notifications about the data mining process, including access to an objection form. Meta claims it will not contact any user who submits an objection.

Mastercard To Use Generative AI For Card Fraud Detection

American payment card service Mastercard is implementing generative AI technology to combat credit card fraud. As one of the largest credit card companies in America, the company believes AI can protect its vast clientele from potential threats.

“Mastercard, a world leader in cyber security, is now better able to predict the full card detail of these compromised cards on its network, enabling banks to block them far faster than previously,” the company revealed on its official website.

The company will use AI to scan “transaction data across billions of cards and millions of merchants”. The AI will then alert banks and regulators when a card is suspected to be compromised. Using AI allows Mastercard to predict the complete details of compromised cards, which enables banks to promptly remove these cards from their network.
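Mastercard’s models are proprietary and far more sophisticated than anything shown here, but the alerting idea — monitor transactions across the network and flag statistical outliers for banks to act on — can be illustrated with a deliberately simple z-score sketch. The card IDs, amounts, and threshold are invented for the example:

```python
from statistics import mean, stdev

def flag_suspicious_cards(transactions, z_threshold=2.0):
    # Flag cards whose transaction amount deviates sharply from the
    # network-wide mean. A toy stand-in for the predictive models the
    # article describes, not Mastercard's actual approach.
    amounts = [amount for _, amount in transactions]
    mu, sigma = mean(amounts), stdev(amounts)
    return {card for card, amount in transactions
            if sigma > 0 and abs(amount - mu) / sigma > z_threshold}

# Eleven ordinary purchases and one extreme outlier:
txns = [(f"card_{i:02d}", 20.0) for i in range(11)] + [("card_99", 5000.0)]
flagged = flag_suspicious_cards(txns)
# flagged == {"card_99"}
```

In practice, production systems score far richer features (merchant, geography, timing, sequence of purchases) rather than a single amount, which is why the article emphasizes reducing false positives alongside raising detection rates.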

See Related: Sandbox Issues Security Alerts Involving Phishing Scam Emails

The company hopes that generative AI will better protect future transactions from emerging threats. Some of the initiatives include doubling the detection rate of compromised cards, reducing false positives during the detection of fraudulent transactions, and identifying at-risk merchants more rapidly.

“Thanks to our world-leading cyber technology we can now piece together the jigsaw – enhancing trust to banks, their customers, and the digital ecosystem as a whole,” said Johan Gerber, Executive Vice President of Security & Cyber Innovation at Mastercard.

Google Launches Brand New Vision Language Model: PaliGemma

American tech giant Google is expanding its generative AI catalog with PaliGemma, a brand-new AI model. Announced during the recently concluded Google I/O, PaliGemma is a vision-language model (VLM) that understands both visual and text prompts simultaneously.

“Today, we're excited to further expand the Gemma family with the introduction of PaliGemma, a powerful open vision-language model (VLM)”, the company stated during the event. The model was inspired by PaLI-3, a small-scale VLM developed by Google Research. It integrates open components from both SigLIP (Sigmoid Loss for Language Image Pre-training) and the Gemma language model.

See Related: OpenAI Launches ChatGPT Plus Subscription In India; Includes GPT-4

According to Google, the model is designed for “class-leading fine-tune performance” on several tasks, including writing captions for images, answering visual questions, and understanding text in images. Google further added, “We're providing both pre-trained and fine-tuned checkpoints at multiple resolutions, as well as checkpoints specifically tuned to a mixture of tasks for immediate exploration”.

Unlike many of Google’s other AI models, PaliGemma is an open model. It is available to developers and researchers on various platforms, including GitHub, Hugging Face, Kaggle, Vertex AI Model Garden, and ai.nvidia.com. Interested developers can also interact with the model through a demo hosted on Hugging Face Spaces. The launch of PaliGemma coincides with other AI tools released by Google, such as Gemma 2 and Gemini 1.5 Flash.

Google Announces Gemini Flash As It Attempts To Top The Generative AI Race

Tech giant Google has unveiled its newest multimodal Large Language Model (LLM), called Gemini Flash. The announcement came during the recently concluded Google I/O, the annual developer conference organized by Google.

“Today, we’re introducing Gemini 1.5 Flash: a model that’s lighter-weight than 1.5 Pro, and designed to be fast and efficient to serve at scale”, stated Demis Hassabis, CEO and co-founder of Google DeepMind. He went on to explain that Flash is “optimized for high-volume, high-frequency tasks at scale”. Although it is a comparatively lightweight model, it was still trained by the larger Gemini 1.5 Pro model, through a process known as distillation.

See Related: Google Launches Its Largest And Most Capable AI Model Yet - Google Gemini

Gemini Flash has been noted for its performance in summarization, chat applications, image and video captioning, and data extraction from long documents and tables. The model’s context window has also increased to 1 million tokens. This means the model can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.

Gemini Flash is accessible for public preview in more than 200 regions across the globe. Currently, the model is available in two price plans. The “Free of charge” plan has a limit of 15 requests per minute (RPM) and 1,500 requests per day (RPD). The “pay-as-you-go” plan will cost users $0.35 to $0.70 per 1 million input tokens and $1.05 to $2.10 per 1 million output tokens. The paid version allows 360 RPM and 10,000 RPD.
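As a quick sanity check on those figures, the per-million-token rates translate into a per-request cost estimate as follows. This is a rough sketch using only the launch-time rates quoted above (lower bound of each range); actual billing tiers and current prices may differ:

```python
def gemini_flash_cost(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.35, output_rate: float = 1.05) -> float:
    # Rates are USD per 1 million tokens; defaults use the lower bound
    # of the quoted $0.35-$0.70 (input) and $1.05-$2.10 (output) ranges.
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# A 700,000-token prompt (roughly the quoted word capacity of the context
# window) with a 2,000-token reply:
# 0.7 * 0.35 + 0.002 * 1.05 = 0.245 + 0.0021 = 0.2471 USD
cost = gemini_flash_cost(700_000, 2_000)
```

Even a prompt that nearly fills the 1-million-token context window costs well under a dollar at these rates, which is the economics the “high-volume, high-frequency” positioning relies on.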

A Glimpse Into The Future Of Generative AI: Google’s New AI Model Lumiere

Google recently revealed a demo trailer for its new Lumiere AI, an AI-powered tool designed to generate videos from simple text prompts. The software was developed by the team at Google Research.

Inbar Mosseri, Team Lead and Senior Staff Software Engineer at Google Research, announced on X (formerly Twitter): “Thrilled to announce ‘Lumiere’ – the new text-to-video model we've been working on! Lumiere generates coherent, high-quality videos using simple text prompts.”

See Related: WIN NFT HERO from TRON’s Metaverse Gears Up for the GameFi Stage

Capabilities Of Lumiere

Alongside a research paper, the company released a trailer video showcasing some of the capabilities of the new model. The AI is capable of generating “realistic, diverse and coherent motion” from text prompts such as “a dog driving a car wearing funny glasses”. Additionally, Lumiere can make videos from existing photos, using text as guidelines.

Google also demonstrated the AI’s capacity for stylized generation, where it uses any photo as a reference and creates a video in the same art style.

In the research paper, Google claims its model is superior to existing video generation models as it uses a “Space-Time U-Net architecture that generates the entire temporal duration of the video at once”.

At the time of writing, Google’s Lumiere is not available to the public. Interested parties can find samples of its work on Lumiere’s GitHub page.

Introducing Samsung Galaxy S24 Series with Galaxy AI: Samsung’s Official Foray Into The Generative AI Race

Samsung recently unveiled the Galaxy S24 series of smartphones at the company’s biannual Galaxy Unpacked expo. Among the new technologies revealed on the day, Samsung introduced its proprietary AI tool called “Galaxy AI”.

“Empowering everyday experiences, from barrier-free communication to awe-inspiring creativity to the power for even more possibilities, Galaxy AI transforms the iconic S series for the future”, said the official statement released by the company.

The AI will power several features exclusive to Galaxy smartphones. With “Live Translate”, users can translate texts and voice calls into their native language in real time. The “Interpreter” feature translates live conversations into text and displays it on a split screen.

See Related: Samsung Ban Employees From Using AI Tools Like ChatGPT

Circle To Search Feature

Another notable addition is the “Circle to Search” feature, built with the help of Google. Users can “circle, highlight, scribble on or tap anything on Galaxy S24’s screen” and generate search results. Extra attention has gone to the Galaxy S24 series’ ProVisual Engine and AI editing tools, which the company claims will offer users the optimum image capturing and editing experience.

“The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation”, said TM Roh, the president and head of Mobile Experience (MX) Business at Samsung Electronics.

Galaxy AI is currently only available on the S24 series of smartphones, including the Galaxy S24, Galaxy S24+, and Galaxy S24 Ultra. The company states that the AI services will be free until 2025.
