IT・テクノロジーキニ速

Gemini's hallucinations are so bad it's unusable

In 3 Lines

Reports indicate that Google's highly anticipated AI, Gemini, is frequently generating "hallucinations" – information that deviates from facts.

Its severe inaccuracy has led to widespread online sentiment that it's "unusable," causing significant user disappointment and disillusionment.

Given the high expectations, this situation is raising serious questions about Google's overall AI strategy.


Related Keywords

What is Gemini?

Gemini is a large-scale generative AI model developed by Google. It is characterized by its "multimodal" capabilities, meaning it can understand and generate multiple forms of information such as text, images, audio, and video. Announced in December 2023, Gemini was positioned as the core of Google's AI strategy, with high expectations. It particularly emphasized natural conversation indistinguishable from human writing and logical reasoning capabilities for complex questions. Google positioned Gemini as the culmination of its AI development and a strong competitor to OpenAI's ChatGPT. However, reports of its actual use indicate that it has not consistently delivered the expected performance. Specifically, the problem of "hallucinations" – generating incorrect or inappropriate content, especially in situations requiring advanced reasoning – has become apparent, detracting from the user experience. Given that it is positioned as the culmination of Google's long-standing search technology and AI research, the instability of its performance has sparked considerable debate. As a crucial product that could influence Google's competitiveness in the AI market, significant improvements are strongly desired for the future.

What is Hallucination (Generative AI)?

In generative AI, "hallucination" refers to the phenomenon where AI generates information that does not exist in its training data or deviates from facts, presenting it as if it were true. It is likened to humans experiencing hallucinations, characterized by content that is completely fabricated or contradicts existing information. Specific examples reported include creating biographies of non-existent individuals, presenting incorrect scientific facts, or generating fictitious citations. The main causes of this phenomenon include biases or deficiencies in training data, a mismatch between the model's knowledge and user requirements, or the limits of the model's ability to confidently respond with "I don't know." For instance, if the AI does not have a direct answer in its training data for a certain question, it might "create" plausible but incorrect information. This phenomenon severely undermines the reliability of AI and makes it difficult to utilize as a decision-making support or information-gathering tool in business. Particularly in fields where accuracy is paramount, such as medicine, law, and finance, misinformation carries the risk of severe consequences. Therefore, the development of technologies like RAG (Retrieval Augmented Generation) that link with external databases to improve accuracy, and the importance of fact-checking by users, are being emphasized.
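The RAG approach mentioned above can be illustrated with a toy sketch: retrieve the most relevant document for a question, then build a prompt that forces the model to answer from that retrieved text rather than from memory. This is a minimal sketch under simplifying assumptions; the keyword-overlap retriever and the document list are hypothetical stand-ins, and real RAG systems use vector embeddings, a document store, and an actual LLM.

```python
# Minimal RAG sketch (hypothetical example, not any specific product's API).
# Real systems replace keyword overlap with embedding similarity search.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by simple word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the answer in retrieved text to reduce hallucination."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Gemini is a multimodal AI model announced by Google in December 2023.",
    "Natto is a traditional Japanese fermented soybean food.",
]
prompt = build_prompt("When was Gemini announced?", docs)
```

The point of the pattern is the last step: because the prompt carries verifiable source text, a wrong answer can be checked against the retrieved context instead of being taken on faith.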

What is Large Language Model (LLM)?

A Large Language Model (LLM) is an AI model that possesses the ability to understand and generate human-like natural language by learning from vast amounts of text data. Google's Gemini and OpenAI's ChatGPT are prime examples, typically having billions, hundreds of billions, or even more parameters (values adjusted by the model through learning). These models can perform a wide range of tasks based on a given prompt (instruction), such as summarizing text, translating, answering questions, generating poetry or code, and sentiment analysis. The underlying technology has evolved significantly, especially with the advent of the neural network architecture called "Transformer." Transformer enabled efficient learning of relationships between words in a sentence, contributing to the high performance of LLMs. In the LLM training process, the model reads vast amounts of text data from the internet (books, web pages, academic papers, etc.) and statistically learns word appearance patterns and contexts. This allows it to develop the ability to predict the next word, which leads to fluent text generation. While LLMs are expected to have applications in various fields such as business, education, and research due to their versatility and advanced language processing capabilities, they also face challenges that need to be addressed, including the problem of hallucinations, high computational costs, and ethical issues (e.g., generating discriminatory content, copyright concerns).
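The "predict the next word from statistical patterns" idea described above can be shown at toy scale with a bigram counter: count which word follows which in a corpus, then predict the most frequent continuation. This is only an illustrative sketch with a made-up corpus; real LLMs use Transformer networks with billions of learned parameters, not raw counts.

```python
# Toy next-word predictor (bigram counts) illustrating the statistical
# core of language modeling. NOT how Gemini or ChatGPT actually work
# internally, but the same "predict the next token" objective.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it and how often."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequent continuation seen during training."""
    return counts[word.lower()].most_common(1)[0][0]

corpus = "the model reads text . the model learns patterns . the model predicts words"
model = train_bigrams(corpus)
```

Even this tiny model shows why hallucination happens: the predictor always emits *some* plausible continuation, whether or not the training data actually supports it.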
