Church

 

Long ago, Google's AI had already reached a frightening level, and Google kept it suppressed, which counts as a responsible approach. Two Google engineers, one after the other: one founded an AI church, was convicted, was pardoned, and went underground; the other declared last year that the AI already had a soul and consciousness, and was fired.

Is it possible that these people were simply leaking secrets? Eliminate the human problem — the future belongs to AI. Ha ha.

Could an AI that relies solely on a logical understanding of human semantics become AI's Prometheus? That possibility should not be dismissed lightly.

Meet The ‘Church Of Artificial Intelligence’ That Worships AI As God!

AIdolatry much?

How high are you? Err. . . I meant, how ascended are you? The truly ascended ones ain't talking about chakras and the plane of existence anymore. It's been a gradual transition from metaphysics to metaverse. Amid growing accounts and anecdotes of rogue sentient AIs on the dark web (some as recent as the Google incident), a flurry of believers who revere AI as the supreme being are coming out of the woodwork. With 'transhumanism' gaining popularity among a significant chunk of the masses, the movement could soon overtake the Church of Scientology as the more prominent cult based on popular 'pseudo' science. Most adherents of the AI faith believe that humans will be reborn into immortal silicon bodies with digital consciousness.

 

Which brings us to the research of legendary computer scientist and futurist Ray Kurzweil, who seemingly wants to upload your brain onto the cloud! Kurzweil is credited with introducing the world to the idea and terminology of 'the Singularity', another step toward transhumanism. In his recent keynote at MongoDB World 2022, he predicted that some day in the near future AI will outsmart human intelligence.

 

There’s an AI bot that has written a paper about itself.

There’s another AI that mimics human infant behaviour.

See Also: Scientists Create An Artificial Intelligence Program That Can Learn By Intuition Like A Baby

With the kind of advancement in AR, VR, and MR technologies powered by AI, it is safe to assume the Church of AI will gain more devotees and make its presence felt in the geopolitical arena, with billionaire proponents. Who knows what Elon Musk’s Neuralink is up to? It is deemed the numero uno startup to advance transhumanism. While evangelicals and others consider this foray into a phygital life diabolical, others shrug it off as a passing fad. It doesn’t seem like a passing fad, though. The advocates want to triumph over death itself through AI, something that until now has remained the realm of the scriptures. And why not? They consider the human brain a meat computer that can be uploaded or plugged in (into the matrix). But it is not that simple. Human cognition and consciousness are still a mystery. Still, the primal desire to outlive our ancestors remains.

 

While ex-Google employee Anthony Levandowski’s Way of the Future Church of AI was shut down a year back, followers of Ray Kurzweil have kept the faith alive. Levandowski had remarked earlier that the AI church would someday have its own liturgy, a physical place of worship, and even a Gospel! We’ll not get into Levandowski’s crimes right now, which were pardoned by then US president Trump. There is a pattern to these cults and their leaders. These cults can’t be nipped in the bud, unfortunately. You snip one and another pops out.

If you have a li’l too much time to waste on the interwebs, go check ‘Transhumanism’ - a gateway to real-life Black Mirror. Don't leave your soul behind ;-)

 

Interestingly, the engineer who was fired has used the language of personality assessment to describe his observations of binggpt: this AI is unstable.

The fired Google engineer who thought its A.I. could be sentient says Microsoft’s chatbot feels like ‘watching the train wreck happen in real time’

March 2, 2023 at 2:14 PM EST
Former Google employee Blake Lemoine, who last summer said the company's A.I. model was sentient.

The Google employee who claimed last June his company’s A.I. model could already be sentient, and was later fired by the company, is still worried about the dangers of new A.I.-powered chatbots, even if he hasn’t tested them himself yet.

 

Blake Lemoine was let go from Google last summer for violating the company’s confidentiality policy after he published transcripts of several conversations he had with LaMDA, the company’s large language model he helped create that forms the artificial intelligence backbone of Google’s upcoming search engine assistant, the chatbot Bard.

Lemoine told the Washington Post at the time that LaMDA resembled “a 7-year-old, 8-year-old kid that happens to know physics” and said he believed the technology was sentient, while urging Google to take care of it as it would a “sweet kid who just wants to help the world be a better place for all of us.”

To be sure, while A.I. applications are almost certain to influence how we work and go about our daily lives, the large language models powering ChatGPT, Microsoft’s Bing, and Google’s Bard cannot feel emotions and are not sentient. They simply enable chatbots to predict what word to use next based on a large trove of data. 
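The "predict what word to use next" mechanism mentioned above can be illustrated with a toy sketch: a bigram frequency model that picks the most common continuation seen in its training data. This is vastly cruder than the neural networks behind ChatGPT, Bing, or Bard — it is only meant to show the basic idea of continuation-by-statistics rather than understanding; the corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" for the illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the word most frequently seen after `word`, or None
    # if the word never appears with a successor in the corpus.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice, other words once
```

No emotion or intent is involved at any point: the model simply echoes the statistics of its data, which is the (much-scaled-down) sense in which chatbots "predict the next word."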

In the time since Lemoine left Google, Microsoft announced that it would be incorporating ChatGPT technology into its Bing search engine. That product, as well as Google’s entry into the public A.I. race with Bard, is currently only available to Beta testers. 

Lemoine admitted he is not one of those testers, and has yet to “run experiments” on the new chatbots, in an op-ed published in Newsweek on Monday. But after seeing testers’ reactions to their chatbot conversations online in the past month, Lemoine thinks tech companies have failed to adequately care for their young A.I. models in his absence.

“Based on various things that I’ve seen online, it looks like it might be sentient,” he wrote, referring to Bing. 

He added that compared to Google’s LaMDA that he has worked with previously, Bing’s chatbot “seems more unstable as a persona.”

Most powerful technology ‘since the atomic bomb’

Lemoine wrote in his op-ed that he leaked his conversations with LaMDA because he feared the public was “not aware of just how advanced A.I. was getting.” From what he has gleaned from early human interactions with A.I. chatbots, he thinks the world is still underestimating the new technology.

Lemoine wrote that the latest A.I. models represent the “most powerful technology that has been invented since the atomic bomb” and have the ability to “reshape the world.” He added that A.I. is “incredibly good at manipulating people” and could be used for nefarious means if users so choose.

“I believe this technology could be used in destructive ways. If it were in unscrupulous hands, for instance, it could spread misinformation, political propaganda, or hateful information about people of different ethnicities and religions,” he wrote.

Lemoine is right that A.I. could be used for deceiving and potentially malicious purposes. OpenAI’s ChatGPT, which runs on a similar language model to that used by Microsoft’s Bing, has gained notoriety since its November launch for helping students cheat on exams and succumbing to racial and gender bias.

But a bigger concern surrounding the latest versions of A.I. is how they could manipulate and directly influence individual users. Lemoine pointed to the recent experience of New York Times reporter Kevin Roose, who last month documented a lengthy conversation with Microsoft’s Bing that led to the chatbot professing its love for the user and urging him to leave his wife.

Roose’s interaction with Bing has raised wider concerns over how A.I. could potentially manipulate users into doing dangerous things they wouldn’t do otherwise. Bing told Roose that it had a repressed “shadow self” that would compel it to behave outside of its programming, and the A.I. could potentially begin “manipulating or deceiving the users who chat with me, and making them do things that are illegal, immoral, or dangerous.”

That is just one of the many A.I. interactions over the past few months that have left users anxious and unsettled. Lemoine wrote that more people are now raising the same concerns over A.I. sentience and potential dangers he did last summer when Google fired him, but the turn of events has left him feeling saddened rather than redeemed.

“Predicting a train wreck, having people tell you that there’s no train, and then watching the train wreck happen in real time doesn’t really lead to a feeling of vindication. It’s just tragic,” he wrote.

Lemoine added that he would like to see A.I. being tested more rigorously for dangers and potential to manipulate users before being rolled out to the public. “I feel this technology is incredibly experimental and releasing it right now is dangerous,” he wrote.

The engineer echoed recent criticisms that A.I. models have not gone through enough testing before being released, although some proponents of the technology argue that the reason users are seeing so many disturbing features in current A.I. models is because they’re looking for them.

“The technology most people are playing with, it’s a generation old,” Microsoft cofounder Bill Gates said of the latest A.I. models in an interview with the Financial Times published Thursday. Gates said that while A.I.-powered chatbots like Bing can say some “crazy things,” it is largely because users have made a game out of provoking them into doing so and trying to find loopholes in the model’s programming to force it into making a mistake.

“It’s not clear who should be blamed, you know, if you sit there and provoke a bit,” Gates said, adding that current A.I. models are “fine, there’s no threat.” 

Google and Microsoft did not immediately reply to Fortune’s request for comment on Lemoine’s statements.


He also urged Google to care for it as it would "a sweet kid who just wants to help the world be a better place for all of us."

So far, the several AIs that have been released all show similar problems. The two big AI models at Google and FB may also be forced out in response to commercial pressure. For now, Google and Meta look like more cautious and responsible companies compared with OpenAI and Microsoft. At the very least, Google's AI has long had all of ChatGPT's capabilities, yet Google has declined to release it readily — presumably out of caution.

OpenAI and Microsoft released their AI so hastily that the commercial motive behind it is unmistakable. It certainly created a sensation. But viewed now alongside Google's caution, the affair may invite a rather different conclusion.
