I’ve been reading Google’s Gemini damage control posts. I think they’re simply not telling the truth. For one, their text-only product has the same (if not worse) issues. And second, if you know a bit about how these models are built, you know you don’t get these “incorrect” answers through one-off innocent mistakes. Gemini’s outputs reflect the many, many FTE-years of labeling effort, training, fine-tuning, prompt design, and QA/verification — all iteratively guided by the team that built it. You can also be certain that before releasing it, many people tried the product internally, that many demos were given to senior PMs and VPs, that they all thought it was fine, and that they all ultimately signed off on the release. With that prior, the balance of probabilities is strongly against the outputs being an innocent bug — as @googlepubpolicy is now trying to spin it: Gemini is a product that functions exactly as designed, and an accurate reflection of the values of the people who built it.
Those values appear to include a desire to reshape the world in a specific way, one so strong that it allowed the people involved to rationalize to themselves that it’s not just acceptable but desirable to train their AI to prioritize ideology ahead of giving the user the facts. To revise history, to obfuscate the present, and to outright hide information that doesn’t align with the company’s (staff’s) impression of what is “good”. I don’t care whether some of that ideology aligns with your or my thinking about what would make the world a better place: for anyone with a shred of awareness of human history, it should be clear how unbelievably irresponsible it is to build a system that aims to become an authoritative compendium of human knowledge (remember Google’s mission statement?) but which actually prioritizes ideology over facts. History is littered with those who have tried this sort of moral flexibility “for the greater good”; rather than helping, their efforts typically resulted in decades of setbacks (and tens of millions of victims).
Setting social irresponsibility aside, in a purely business sense it is beyond stupid to build a product that explicitly puts your company’s social agenda before the customer’s needs. Think about it: G’s Search — for all its issues — has been perceived as a good tool because it focused on providing accurate and useful information. Its mission was aligned with the users’ goals (“get me to the correct answer for the stuff I need, and fast!”). That’s why we all use(d) it. I always assumed Google’s AI efforts would follow the same pattern, which would transfer over the user base & lock in another 1-2 decades of dominance.
But they’ve done the opposite. After Gemini, rather than as a user-centric company, Google will be perceived as an activist organization first — ready to lie to the user to advance its (staff’s) social agenda. That’s huge. Would you hire a personal assistant who openly has an unaligned (and secret — they hide the system prompts) agenda, whom you fundamentally can’t trust? Who strongly believes they know better than you? Who you suspect will covertly lie to you (directly or through omission) when your interests diverge? Forget the cookies, ads, privacy issues, or YouTube content moderation; Google just made 50%+ of the population run through this scenario and question the trustworthiness of the core business and the people running it. And not at the typical financial level (“they’re fleecing me!”), but at the ideological level (“they hate people like me!”). That’ll be hard to reset, IMHO.
What about the future? Take a look at Google’s AI Responsibility Principles (https://ai.google/responsibility/principles/) and ask yourself what Search would look like if the staff who brought you Gemini were tasked with interpreting them & rebuilding it accordingly. Would you trust that product? Would you use it? Well, with Google’s promise to include Gemini everywhere, that’s what we’ll be getting (https://technologyreview.com/2024/02/08/1087911/googles-gemini-is-now-i…). In this brave new world, every time you run a search you’ll be asking yourself: “did it tell me the truth, or did it lie, or hide something?” That’s lethal for a company built around organizing information.
And that’s why, as of this weekend, I’ve started divorcing my personal life from, and taking my information out of, the Google ecosystem. It will probably take about a year (I’ve invested in nearly everything, from Search to Pixel to Assistant to more obscure things like Voice), but it has to be done. Still, really, really sad…
https://www.zerohedge.com/political/im-done-google-wholesale-lost-trust-after-unbelievably-irresponsible-ai-rewrites-history
WOW. Great piece. So many things… we can’t keep up!
BOYCOTT GOOGLE