Robotic Warfare

8482farm · Pharmacy Technician · Verified Military · Joined Sep 25, 2017 · Messages: 68
Wiki:

"Boston Dynamics is a Japanese engineering and robotics design company that is best known for the development of BigDog, a quadruped robot designed for the U.S. military with funding from Defense Advanced Research Projects Agency (DARPA), and DI-Guy, software for realistic human simulation. Early in the company's history, it worked with the American Systems Corporation under a contract from the Naval Air Warfare Center Training Systems Division (NAWCTSD) to replace naval training videos for aircraft launch operations with interactive 3D computer simulations featuring DI-Guy characters."

Since the 1920s, robotics has come a long way. We've already seen remotely operated machines like UAVs/drones, which have replaced human pilots in the sky, and the significant benefits they provide. How do you see these robots being integrated into the ranks or otherwise utilized?


 
How do you see these robots being integrated into the ranks or otherwise utilized?

You don't reference Artificial Intelligence, maybe purposefully, but one imponderable is what AI would learn and apply from contact with the enemy. That goes perhaps beyond whatever intent the manufacturer(s) and users may have.
 
You don't reference Artificial Intelligence, maybe purposefully, but one imponderable is what AI would learn and apply from contact with the enemy. That goes perhaps beyond whatever intent the manufacturer(s) and users may have.

I swear I thought I heard someone say Skynet for a second; must have been the wind.


But in all honesty, that opens up serious questions about learning, "feeling", and whether we actually "control" an AI beyond just flipping a power switch on or off.

Take for example
Facebook shuts down robots after they invent their own language


This is a very "dumb" robot in the sense that it's designed for a basic function (as a chatbot); however, through learning it can rapidly go past the original goals of the experiment. Maybe I've read too much sci-fi growing up, but what could a "smart" machine (AI) accomplish?
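Purely as an illustration of that drift (a toy sketch of my own, not Facebook's actual system; the vocabulary, reward function, and hill-climbing loop are all invented here): when agents optimize only a task reward, nothing anchors them to English, so a repetitive shorthand can score just as well as a grammatical sentence.

```python
import random

# Toy sketch (NOT Facebook's system): a hill-climber edits a message to
# maximize an invented task reward that only cares how many times "me"
# appears, so plain English drifts into repetitive shorthand.

VOCAB = ["i", "can", "have", "ball", "to", "me"]

def reward(msg, quantity=4):
    # Hypothetical reward: the count of "me" tokens should match the
    # quantity being negotiated; a small length penalty discourages padding.
    return -abs(msg.count("me") - quantity) - 0.01 * len(msg)

def negotiate(steps=2000, seed=0):
    rng = random.Random(seed)
    msg = ["i", "can", "have", "ball"]          # starts as plain English
    for _ in range(steps):
        cand = list(msg)
        if rng.random() < 0.5:                  # mutate: replace a token...
            cand[rng.randrange(len(cand))] = rng.choice(VOCAB)
        else:                                   # ...or insert a new one
            cand.insert(rng.randrange(len(cand) + 1), rng.choice(VOCAB))
        if reward(cand) >= reward(msg):         # keep anything at least as good
            msg = cand
    return " ".join(msg)

print(negotiate())  # ends up repetitive and ungrammatical
```

Nothing in the loop "decides" to abandon English; the grammar simply carries no reward, which is roughly the explanation researchers gave for the Facebook bots' shorthand.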


Either way, time to dive into some Asimov.
 
Interestingly enough, the real-life Cyberdyne is based in Japan, and Boston Dynamics was just acquired by a Japanese telecom group in June. Coincidence? I think not.

You don't reference Artificial Intelligence, maybe purposefully, but one imponderable is what AI would learn and apply from contact with the enemy. That goes perhaps beyond whatever intent the manufacturer(s) and users may have.

I didn't want to mention AI because I don't think the military would want to relinquish that much control.
 
I didn't want to mention AI because I don't think the military would want to relinquish that much control.

Do you think robots would then be limited to, say, certain roles in logistics? I think that if self-driving vehicles are just a few years away, so are autonomous robots in combat roles.
 
Either way, time to dive into some Asimov.

Or see if you can get your hands on "A Classification of Degenerate Loop Agreement" by Xingwu Liu, Jiangshong Pan and Juhua Pu. Sensory deprivation can lead to hallucination and insanity in humans, but isn't it the starting point when we talk about a computer's or robot's interface with the world? Google "inceptionism gallery" and have a look at what/how Google's AI "sees".
I wonder among other things what it would create when asked to picture "enemy", depending on what it is presented with.
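The core trick behind those inceptionism images can be boiled down to a few lines (my own minimal sketch, not Google's code: the one-neuron "network" and all numbers here are invented). Instead of training the network's weights, you hold them fixed and run gradient ascent on the input itself, nudging it toward whatever excites a chosen unit:

```python
# Minimal sketch of the inceptionism/DeepDream idea (illustrative only):
# hold a fixed "neuron" constant and climb the gradient on the INPUT,
# so the input morphs into the pattern the neuron responds to.

def activation(x):
    # Stand-in for one fixed neuron: it fires hardest when the input
    # "looks like" 3.0 (a real network would score image patterns).
    return -(x - 3.0) ** 2

def dream(x, steps=200, lr=0.05, eps=1e-5):
    for _ in range(steps):
        # central-difference numeric gradient of the activation w.r.t. x
        grad = (activation(x + eps) - activation(x - eps)) / (2 * eps)
        x += lr * grad  # move the input toward what excites the neuron
    return x

print(round(dream(0.0), 2))  # → 3.0, the neuron's preferred "pattern"
```

Swap the scalar for an image and the toy neuron for a layer of a trained vision network and you get the dog-faces-everywhere gallery; which is why what it would "dream" for "enemy" depends entirely on what it was trained on.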
 
Interestingly enough, the real-life Cyberdyne is based in Japan, and Boston Dynamics was just acquired by a Japanese telecom group in June. Coincidence? I think not.



I didn't want to mention AI because I don't think the military would want to relinquish that much control.

Given the evolution and integration of robotics, how can you discuss this without including AI, be it military or non-military?

You are aware of Stephen Hawking's thoughts on AI.
 
Or see if you can get your hands on "A Classification of Degenerate Loop Agreement" by Xingwu Liu, Jiangshong Pan and Juhua Pu. Sensory deprivation can lead to hallucination and insanity in humans, but isn't it the starting point when we talk about a computer's or robot's interface with the world? Google "inceptionism gallery" and have a look at what/how Google's AI "sees".
I wonder among other things what it would create when asked to picture "enemy", depending on what it is presented with.


It's interesting you bring up the effects of sensory deprivation; the fact that our mind tries to create images, smells, and sounds to cope with the sudden loss of the senses is nothing short of astounding.

But to avoid going down that rabbit hole: I'm amazed at what Google has been able to do with its neural network by creating organic images out of nothingness. However, it raises the question of how much of that is the network on its own versus the input provided by the researchers. Or, more to the point, is it actually possible to create a full-on artificial intelligence, something that can think freely and make decisions on its own, rather than a previously coded set of guidelines and inputs that in effect take away its ability to think and understand freely? (I could be completely wrong here, and I gladly welcome being corrected if I am.)

I think the Chinese Room thought experiment captures what I'm trying to say better than I'm managing to.


But to your last statement: I don't think it could do it accurately enough to be safe. Yes, you can put in a myriad of inputs for what you would describe as an enemy. But what sets humans apart from programs and machines is that we can understand the subtle nuances and discrepancies in our communication and actions with one another.

This goes back to the thought experiment of whether a program or machine can actually understand and think, or whether it's just reliant on the inputs you set it up with.
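The Chinese Room point can be made painfully concrete in code (my own toy sketch, not from Searle; the rulebook entries are invented). The program below "converses" in Chinese by pure symbol lookup, and meaning never appears anywhere in it:

```python
# Toy sketch of Searle's Chinese Room (illustrative only): the "room"
# maps input symbols to output symbols by rule lookup alone; no
# representation of meaning exists anywhere in the program.

RULEBOOK = {
    "你好": "你好！",          # greeting -> greeting back
    "你会思考吗？": "当然。",   # "can you think?" -> "of course."
}

def chinese_room(symbols: str) -> str:
    """Reply by pure symbol matching; 'understanding' never enters."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "please say that again"

print(chinese_room("你会思考吗？"))  # prints "当然。"
```

From the outside the room answers fluently, yet everything it "knows" was set up in advance by whoever wrote the rulebook, which is exactly the inputs-versus-understanding worry above.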

I did look up the research paper you mentioned... I thought I was reading ancient Latin halfway through it. Guess I need to read up on computer science and programming more in my spare time.
 
Or, more to the point, is it actually possible to create a full-on artificial intelligence, something that can think freely and make decisions on its own, rather than a previously coded set of guidelines and inputs that in effect take away its ability to think and understand freely?

We'll know once it generates itself sua sponte.

But what sets humans apart from programs and machines is that we can understand the subtle nuances and discrepancies in our communication and actions with one another.

We can?
 
This was a big splash in the local news here in Da'burgh the other day. Interestingly, they don't specifically mention military applications of AI in this announcement, which seems a bit odd considering the amount of funding CMU gets from the DoD. Or maybe they just want to keep the on-campus protests to a minimum. ;-)
Carnegie Mellon Launches Undergraduate Degree in Artificial Intelligence - News - Carnegie Mellon University

Carnegie Mellon University's School of Computer Science (SCS) will offer a new undergraduate degree in artificial intelligence beginning this fall, providing students with in-depth knowledge of how to transform large amounts of data into actionable decisions.

SCS has created the new AI degree, the first offered by a U.S. university, in response to extraordinary technical breakthroughs in AI and the growing demand by students and employers for training that prepares people for careers in AI.

"Specialists in artificial intelligence have never been more important, in shorter supply or in greater demand by employers," said Andrew Moore, dean of the School of Computer Science. "Carnegie Mellon has an unmatched depth of expertise in AI, making us uniquely qualified to address this need for graduates who understand how the power of AI can be leveraged to help people."

U.S. News and World Report this spring ranked SCS as the No. 1 graduate school for artificial intelligence.


The bachelor's degree in AI will focus more on how complex inputs, such as vision, language and huge databases, are used to make decisions or enhance human capabilities, he added. AI majors will receive the same solid grounding in computer science and math courses as other computer science students. In addition, they will have additional course work in AI-related subjects: statistics and probability, computational modeling, machine learning and symbolic computation.

Simmons said the program also would include a strong emphasis on ethics and social responsibility. This will include independent study opportunities in using AI for social good, such as improving transportation, health care or education.
 
DIUx currently has a posting out for their HACQer program. If it weren't in Cali for four months, it would sound like a pretty cool program to get involved with. OTA warranting authority is the next big thing for contracting.
 