Arming Robots

It’s another step closer to the story line from James Cameron’s Terminator movies, but one that’s being seriously considered by police in Oakland. They believe that the society we now live in justifies taking a bold step forward in weaponizing robots.

Oakland Police have added a “Percussion Actuated Nonelectric Disruptor” (PAN Disruptor) as a top priority for 2022. A PAN Disruptor is a laser-aimed, shotgun-like attachment for wheeled robots, which have previously been used in war zones or sent in to defuse or detonate bombs. These robots are not autonomous. Similar robots have been weaponized by the US military with machine guns, although the military says they are for shooting suspected explosive devices. The gun can be loaded with blanks as well as live rounds, making it potentially lethal.

 “One can imagine applications of this particular tool that may seem reasonable,” said Liz O’Sullivan, CEO of the AI bias-auditing startup Parity and a member of the International Committee for Robot Arms Control, “but with a very few modifications, or even just different kinds of ammunition, these tools can easily be weaponized against democratic dissent.”

Oakland Police Department had originally promised to only use the killing machines when deemed necessary, during “certain catastrophic, high-risk, high-threat, or mass casualty events.” However, they would not rule out the potential to use live ammunition “if they need it for some situation later on.”

A 2021 subcommittee meeting looked at the potential for arming robots and agreed that the robots could not be used to kill humans, but could be armed with pepper spray.

“We will not be arming robots with lethal rounds anytime soon,” Lieutenant Omar Daza-Quiroz told the Intercept. “If and when that time comes each event will be assessed prior to such deployment.”

Incredibly, this isn’t the only police department considering upgrading its staff. In 2016 the Dallas Police Department used a wheeled robot to take down an alleged cop-killing sniper, who had allegedly claimed to have placed explosives around the city. The robot placed a bomb near the suspect, who was cornered in a parking garage. “After a prolonged shootout we saw no other option but to use our bomb robot and place a device on it for it to detonate where the suspect was. Other options would have exposed our officers to grave danger,” explained Dallas Police Chief David O. Brown.

Also, in 2014 Albuquerque police deployed a bomb robot to release tear gas on an armed suspect.

In a similar way, North Dakota has legalized the use of police drones equipped with tasers and pepper spray. It seems that Oakland Police is merely following a trend toward weaponized robots, ones that will hopefully remain under the control of a human operator.

Reassured yet? Whilst the use of a robot in dangerous situations will undoubtedly save the lives of police and emergency responders, arming a robot could be a step too far.

Speaking to the Dead

Artificial Intelligence has come a long way in the last decade, but this latest advancement might be one of the most unusual applications for it – allowing the dead to speak to you!

In what was a surprise to the mourners at her funeral, Marina Smith was able to address them via a holographic conversational video experience created by a startup company called StoryFile. Interestingly, StoryFile was founded by Marina Smith’s son, Stephen Smith, and is based in LA. The company was originally created to preserve the memories, recollections and stories of Holocaust survivors. With Marina, though, they used 20 cameras to film her answering about 250 questions, allowing them to recreate her virtually in their software so that her appearance at her own funeral would seem as natural as possible.

With so much visual and vocal data, Stephen Smith was able to converse with his mother at the funeral, and other attendees were able to ask her questions too.
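StoryFile hasn’t published its pipeline, but the basic idea described above, recording answers to a few hundred questions and then matching each new question to the closest pre-recorded answer, can be sketched in a few lines. The questions, clip filenames, and the simple TF-IDF matcher below are illustrative assumptions, not StoryFile’s actual system.

```python
# Minimal sketch of question-to-answer retrieval for a conversational video archive.
# All data and filenames are hypothetical; a real system would add speech recognition
# and far more robust matching.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

recorded_questions = [
    "Where did you grow up?",
    "What was your proudest moment?",
    "What advice would you give your grandchildren?",
]
answer_clips = ["clip_017.mp4", "clip_102.mp4", "clip_231.mp4"]  # hypothetical filenames

vectorizer = TfidfVectorizer().fit(recorded_questions)
question_index = vectorizer.transform(recorded_questions)

def answer(question: str) -> str:
    """Return the pre-recorded clip whose question best matches the one just asked."""
    scores = cosine_similarity(vectorizer.transform([question]), question_index)[0]
    return answer_clips[int(scores.argmax())]

print(answer("What would you like to tell your grandkids?"))  # -> clip_231.mp4
```

A retrieval approach like this never generates new speech: the deceased only ever “says” things they actually recorded on camera.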

“The extraordinary thing was that she answered their questions with new details and honesty,” Stephen explained. “People feel emboldened when recording their data. Mourners might get a freer, truer version of their lost loved one.”

However, this wasn’t the first time StoryFile had used their technology to recreate a dead person at their own funeral. Earlier this year former Screen Actors Guild president Ed Asner answered questions from the mourners at his own funeral.

“Nothing could prepare me for what I was going to witness when I saw it,” said Matt Asner, Ed’s son. “Other attendees were ‘a little creeped out’ because it was like having him in the room.”

Currently, in Silicon Valley, there seems to be a bit of a trend toward technology that allows users to speak to the dead. Amazon has demonstrated a new feature for its Alexa speaker that allows the voice of a dead relative to read a bedtime story to a child. Amazon made this possible not by taking hours of recordings in a studio, but by sampling less than a minute of speech. “We are unquestionably living in the golden era of AI, where our dreams and science fictions are becoming a reality,” said Rohit Prasad, head scientist for Alexa.
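Amazon hasn’t detailed how the feature works, but the reason a short clip can be enough is well understood in speech-synthesis research: a synthesizer already trained on many voices only needs the clip to extract a compact “voice fingerprint” (a speaker embedding), which then conditions the output. The sketch below shows only that two-stage shape; every name and function body is a hypothetical placeholder, not Amazon’s system or API.

```python
# Conceptual two-stage sketch of few-shot voice cloning: speaker encoder + conditioned
# text-to-speech. The bodies are placeholders so the file runs; none of this reflects
# Amazon's implementation.
import numpy as np

def extract_speaker_embedding(clip: np.ndarray) -> np.ndarray:
    """Placeholder: map a short speech clip to a fixed-size voice vector."""
    return np.zeros(256)  # a real system would use a trained speaker encoder here

def synthesize(text: str, voice: np.ndarray) -> np.ndarray:
    """Placeholder: neural TTS conditioned on the voice vector."""
    return np.zeros(16_000)  # a real system would return audio in the cloned voice

one_minute_clip = np.zeros(60 * 16_000)        # stand-in for the relative's recording
voice = extract_speaker_embedding(one_minute_clip)
bedtime_story = synthesize("Once upon a time...", voice)
```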

Whilst mimicking a dead person’s voice might be comforting to some, it’s also seen by many as a step too far and a questionable use of Artificial Intelligence. It could also open up the possibility of criminals using a person’s voice, dead or alive, for nefarious purposes.

Brain Implants

For many years medical teams have tried to come up with a permanent prevention or cure for paralysis, and it appears that scientists and medical researchers have now made a significant breakthrough in the form of a brain implant.

Brain implants have already worked for many patients who were sure they would spend the rest of their lives unable to move their arms and legs because of severe spinal cord or head injuries. By using a brain-computer interface, the lost connection between the brain and the rest of the body can be restored, allowing patients to live far more like a healthy individual. It is a huge step forward in medicine.

There is a question, though, over whether the much sought-after treatment will be available to patients from all social classes. More importantly, will it be easily available in developing countries, where the rates of disease leading to paralysis are significantly higher?

The procedure costs somewhere between $70,000 and $100,000, making it too expensive for most people, with or without health insurance.

It may be decades before the procedure is widely available, given its complexity and cost. Even more commonly performed treatments, such as liver and kidney transplants, are not easily available in many parts of the world.

Despite this, it is a remarkable achievement by scientists. Hopefully, with more research and investment, more accessible options will become available for the benefit of everyone.

In a separate study, scientists from Stanford successfully implanted a device into a man’s brain that allowed him, despite his paralyzed hands, to type words with nothing more than the power of thought. Known as a brain-computer interface (BCI), the device works by decoding the neural activity in the motor cortex, and the same approach could help users recover basic abilities such as talking and moving. But this could be only the start of some incredible things to come, including treating mental health issues.
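As a rough illustration of what “decoding the neural activity” means in practice, the sketch below trains a classifier to map windows of motor-cortex activity onto intended characters. The random data, the 192-electrode feature size, and the simple logistic-regression decoder are all assumptions chosen for brevity; the Stanford team’s actual decoder was far more sophisticated.

```python
# Toy decoder: classify windows of neural firing-rate features into imagined characters.
# Random noise stands in for real recordings; everything here is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: each row is one time window of firing-rate features
# recorded while the user imagines writing a single character.
X_train = rng.normal(size=(500, 192))            # 500 windows x 192 electrode features
y_train = rng.choice(list("abcde"), size=500)    # the character imagined in each window

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At run time, each fresh window of neural activity is decoded into a character and
# appended to the text the user is "typing" by thought alone.
new_window = rng.normal(size=(1, 192))
print(decoder.predict(new_window)[0])
```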

Theodore Berger, a neuroscientist at the University of Southern California, has been working on a memory chip that mimics the function of the hippocampus, the part of the brain responsible for forming memories. Using the chip, Berger has successfully managed to restore long-term memory in rats. Trials in humans are at a very early stage, but with millions of people suffering from the neurodegenerative effects of Alzheimer’s, strokes or brain injuries, there are many potential applications for Berger’s technology should it prove successful in trials.

Restoring Sight

Scientists have been immersed in the journey of uncovering great mysteries for decades, and from the complexities of the Earth to those of human life, we have witnessed exciting theories and claims over the years. This time, however, the surprise is even greater: the collaborative efforts of a group of remarkable scientists have resulted in a fantastic discovery: reviving the twinkle in a dead human eye!

In medical science, the lifespan of human cells affects the process of organ transplantation; kidneys, for instance, remain usable 24 to 36 hours after the donor’s death if preserved in the appropriate conditions. That is not the case for human eye cells, because the nervous system stops working almost immediately after a person dies, deprived of oxygen. Nevertheless, an article published last week in the New York Post described the successful revival of photosensitive neurons, opening up new possibilities for research into brain and eye disorders, including blindness.

This outcome gives hope to people with eye disorders. “Just being able to take these donor eyes and learn how the retina works, and what is going wrong in these illnesses, is a significant deal,” said Fatima Abbas, lead author of the new study at the University of Utah.

Light-sensing cells, termed ‘photoreceptors,’ stop communicating with neighboring cells after death because they are deprived of oxygen. To solve this problem, the team designed a special transport unit that supplied artificial blood, oxygen, and essential nutrients. With this approach, they found they could make the retinal cells communicate in the same way they do in living bodies.

“Past studies have restored very little electrical activity in organ donor eyes, but never to the amount we have now proven,” said Frans Vinberg, a Moran Eye Centre scientist who also participated in the study.

This breakthrough might also contribute to advances in optogenetics, allowing some patients with eye illnesses to regain their eyesight. The University of Utah research has been published in the journal eLife.

Dark Matter Fuel

According to scientists, if humanity could interact with dark matter, it would change the entire human race: we would gain an unlimited supply of energy and fuel for spacecraft. Until now, the nature of dark matter has remained a mystery, and no one knows what could be done with it until we know what it really is. We assume that, because this substance bends the light we observe from other galaxies, it is extraordinarily massive, and that it could most likely be tapped as a new source of energy.
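That inference rests on gravitational lensing: general relativity predicts how strongly a given mass bends passing light, so the bending we observe reveals how much unseen mass must be present. As a standard textbook reference point (not tied to any particular dark-matter study), light passing a point mass \(M\) at impact parameter \(b\) is deflected by an angle

\[
\hat{\alpha} = \frac{4GM}{c^{2}\,b},
\]

and when \(M\) is inferred this way for galaxies and clusters, it consistently comes out far larger than the visible matter alone can account for.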

Hundreds, maybe even thousands, of years may pass before humanity learns to interact with dark matter, but it would be an incredible source of energy, one that might allow our spacecraft to cross interstellar distances far faster. In scientific terminology, dark matter is matter that cannot be detected through electromagnetic interactions: it neither emits nor reflects light, and is therefore invisible. Although it cannot be seen, it is thought to outweigh the ordinary matter in the cosmos by roughly five to one.

Dark matter might have the following properties:

  • It does not emit electromagnetic radiation;
  • It participates in gravitational interaction;
  • Its particles may have a large mass (the WIMP, or weakly interacting massive particle, hypothesis);
  • It is non-relativistic;
  • It can annihilate and decay, forming all sorts of particles and antiparticles.

The Large Hadron Collider is famous for its discovery of the Higgs boson, the so-called “God particle,” but in the decade and more since it first collided protons at energies never before achieved, researchers have also been using it to search for another exciting particle: the hypothetical particle that may make up an invisible kind of matter known as dark matter, which is five times more prevalent than ordinary matter and without which there would be no life on Earth. So far, however, no dark matter particle has shown up in the collider’s data, and the search continues.

What If Artificial Intelligence Was Already Conscious?

The development of general artificial intelligence (GAI) is the ultimate objective of most high-level AI research. In essence, researchers are aiming for a computer brain that performs as well as a human brain in a body with equivalent capabilities.

We’re still decades away from anything like this, according to most experts. Unlike other highly complex problems, such as achieving nuclear fusion or pinning down the Hubble constant, nobody even knows what GAI looks like yet.

Researchers don’t have a complete grasp on the nature of intelligence in the human brain, or the nature of conscious experience in general. Our understanding of how intellect and consciousness arise in the human brain is still in its infancy.

In place of GAI, today’s AI consists of patched-together neural networks and smart algorithms. It is extremely difficult to argue that modern AI will ever be able to think like a human, and harder still to chart a route toward robot consciousness. However, it’s not out of the question.

It’s possible that AI is already conscious.

An article on the nature of consciousness by mathematician Johannes Kleiner and physicist Sean Tull was recently published as a preprint. It suggests that the universe and everything in it are imbued with physical consciousness.

Integrated Information Theory (IIT) is a prominent theory that attempts to explain consciousness as a collection of interrelated, integrated information. Everything, according to this theory, is conscious in some way or other.

This is an intriguing hypothesis, because it is based on the premise that physical conditions give rise to awareness. This “capacity to experience” things is what makes something conscious. One way to tell that a tree has consciousness, on this view, is to look at how it reaches out to the sun; an ant’s consciousness arises from the ant-specific experiences it has, and so on.
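To give a concrete flavour of what “interrelated information” can mean, the toy calculation below computes the mutual information between the two halves of a tiny two-part system: it is zero when the parts behave independently and positive when each part carries information about the other. This is only an illustrative stand-in, not the actual Φ (phi) measure that Integrated Information Theory defines.

```python
# Toy "interrelatedness" measure: mutual information I(A;B) in bits for a 2x2 joint
# probability table over two binary parts. This is not IIT's Phi, just an analogy.
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """I(A;B) in bits for a joint probability table."""
    pa = joint.sum(axis=1, keepdims=True)   # marginal distribution of part A
    pb = joint.sum(axis=0, keepdims=True)   # marginal distribution of part B
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log2(joint / (pa * pb))
    return float(np.nansum(terms))

independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])      # the parts tell you nothing about each other
coupled     = np.array([[0.5, 0.0],
                        [0.0, 0.5]])        # the parts are perfectly correlated

print(mutual_information(independent))      # ~0.0 bits
print(mutual_information(coupled))          # 1.0 bit
```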

It’s a little more difficult to go from living things like ants to inanimate stuff like pebbles and spoons. As Neo found out in The Matrix: “There is no spoon.” In place of the spoon there are merely molecules arranged in a spoon-like fashion, and on this view those objects, too, could be aware. If you keep looking, you’ll eventually get to the subatomic particles that are shared by all of the universe’s physical entities. It’s the same material that’s in trees, ants, rocks, and utensils.

What does this have to do with artificial intelligence? Individual systems at the macro- and micro-scale that show an independent ability to act and react in response to external stimuli could be regarded as expressions of this universal awareness.

If shared reality is what consciousness indicates, then intellect isn’t necessary; all that’s needed is the ability to perceive existence. So, if the math supports latent global awareness, AI already shows a level of consciousness comparable to that of spoons and rocks.

This has mind-boggling consequences for the future. Right now, it’s hard to care about what it’s like to be a rock. But if we extrapolate from IIT and assume that GAI will one day be solved, conscious robots might eventually be able to explain how it feels to be an inanimate item in this universe.