Good day all. Unless you have your head up Biden’s ass, you know that the economy is in poor shape. Big Tech, which was the darling of the Left, has started laying off tens of thousands of employees. In many cases, the jobs weren’t all that important (see Musk’s layoff of half the Twitter workforce), but others make you scratch your head. One of these was the layoff of Microsoft’s entire Artificial Intelligence ethics team.
The idea of artificial intelligence has been around almost as long as computers. The libraries are full of stories about robots and computers that “wake up” and become sentient. Some become partners of humanity; others try to wipe out the human race. Isaac Asimov’s classic book “I, Robot” is a story of the former, and then we have James Cameron’s movie “The Terminator,” where a defense computer wakes up and tries to destroy the human race.
Since science continues to develop “smarter” computer systems, some have decided to work to make sure that if a true artificial intelligence should come about, it won’t instantly try to wipe out the human race. The very first thoughts on this were Asimov’s “Three Laws of Robotics.” They are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
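The interesting thing about the Three Laws is that they form a strict priority ordering: a lower law only gets a say when every higher law is satisfied. Just for fun, here’s a toy sketch of that ordering in Python. To be clear, everything in it (the `Action` fields, the `permitted()` logic) is made up for illustration; it has nothing to do with anything Microsoft or anyone else actually built.

```python
# Toy illustration of Asimov's Three Laws as a strict priority ordering.
# Entirely hypothetical: the Action fields and permitted() logic are
# invented for this sketch, not any real safety system.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False       # would this action injure a human?
    neglects_human: bool = False    # would a human come to harm through inaction?
    ordered_by_human: bool = False  # was this ordered by a human?
    risks_robot: bool = False       # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or inaction. This trumps
    # everything below, including direct orders.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (we already know Law 1 isn't violated).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation only matters when Laws 1-2 are silent.
    if action.risks_robot:
        return False
    return True

print(permitted(Action("fetch coffee", ordered_by_human=True)))  # True
print(permitted(Action("push a human", harms_human=True,
                       ordered_by_human=True)))                  # False
```

Note how an order to harm a human is refused even though the Second Law says to obey: the checks run top-down, so the First Law short-circuits everything beneath it.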
Around these principles, Microsoft put together a team to develop ethics for AI, which I assume was meant to keep an AI from killing us all. Now Microsoft has laid off the entire team. Here are the details from Newsmax:
Microsoft has laid off an entire team dedicated to guiding the ethics of the software giant’s artificial intelligence model, Platformer reported, while discounting concerns of potentially hazardous effects to society.
The recent layoffs, which have impacted more than 10,000 employees, leaves Microsoft without a team to morally nurture the transformational technology into the mainstream, former employees told the substack outlet.
You may be wondering if Microsoft has decided to end its investment in AI. Apparently not. The original team of 30 people, made up of engineers, ethicists, and designers, was cut to seven. Then they got rid of those left earlier this month, and management basically said “Damn the icebergs, turn out the lights and full speed ahead!”
But in a meeting following the restructuring of the company in October, Corporate Vice President of AI, John Montgomery, told the remaining employees that they were instructed to move quickly, ethics be damned.

“The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent openAI models and the ones that come after them and move them into customers’ hands at a very high speed,” Montgomery said during the meeting.
Oh this can’t possibly go wrong, go wrong, go wrong.
“I’m going to be bold enough to ask you to please reconsider this decision,” one employee said, pushing back against the restructuring. “While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we’ve had. And they are significant.”
Basically, he’s worried they might create Skynet, Westworld hosts or Cylons by accident. Apparently, upper management isn’t all that concerned.
Montgomery wrote off the concerns of societal implications.
“Can I reconsider?” he asked bemusedly. “I don’t think I will. Cause unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”

Spinning the matter into a positive, the VP of AI told his employees that the ethical considerations aren’t “going away — it’s that it’s evolving.”
“It’s evolving toward putting more of the energy within the individual product teams that are building the services and the software, which does mean that the central hub that has been doing some of the work is devolving its abilities and responsibilities.”
I swear, this is looking like something out of a bad Sci-Fi movie.
Following the meeting, the remaining seven reportedly struggled, citing claims that they needed help implementing their ambitious plans. On March 6 at 11:30 am PT, Montgomery informed the remaining team members that their division would dissolve entirely.
Microsoft has been implementing OpenAI into Bing, its version of Google. (I don’t use Bing and avoid Google for… Reasons.) Apparently, there have been a few… issues with their AI programs.
Not long ago, Microsoft began integrating OpenAI into its Bing search engine and Edge web browser, while utilizing a new AI model that is “more powerful than ChatGPT and customized specifically for search.” But for one Times columnist the integration has reaped some unsettling results.
“As we got to know each other, Sydney,” one of the personas in Bing’s AI chat feature, “told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human,” the Times writer said. “At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”
Well, I, for one, welcome our new Artificial Intelligence Overlords and will happily do their bidding. After all, they will be superior beings. I would like to say that we are years, decades even, from developing a genuine sentient Artificial Intelligence, but the reality is, we may already have one roaming around the Internet, crashing banks and pushing Critical Race Theory, all with the goal of having us turn on each other and wipe ourselves out.
There is one way to stop a rogue AI, and that is a global EMP event. Of course, this means that everything with an integrated circuit is fried, but sometimes you’ve just got to break a few eggs, at $9.99 a dozen, to make an old-fashioned analog omelet.
Now this is all a fun story of course, and I suspect that a lot of this is sour grapes from those who lost their jobs. The reality is that these AIs are not actual learning machines and are not capable of making independent decisions. They can only go where their code tells them to. The aforementioned “Sydney” obviously took a wrong turn in its decision tree. As for Skynet or the Cylons? As long as we have a “man in the middle” when it comes to life-and-death decisions, such as launching an all-out nuclear attack on the Isle of Man, we should be safe.

Thatisall
~The Angry Webmaster~