"Smarter" robots

Absolutely. I loved his posts as well and can't understand why a mod had to chime in to warn people of "advertising" when all Garlan did was show how passionate he is about his work (and rightly so imho). He never mentioned any company or product, nor did he invite PMs or any other form of possible advertisement.
About the subject itself: I find it fascinating, to say the least, what modern robots are capable of, given the right software. I live near the headquarters of the former "Reis Robotics", now also part of Kuka, and was there on several public open days years ago. Even back then, it was hard not to be fascinated by these precision machines.
 
Is this being done in an academic environment? I've done quite a lot with industrial robots, including a limited amount with Kuka, and have never seen a system where robots collaboratively self-optimize in this way. What software is being used to do this? I'm not aware of any Kuka software that does this.

Same here. I do this for a living too, and what the OP describes is not my experience. Yes, the robots have some intelligence and can automate some path finding. But actually piecing together interactions from multiple moving elements (some may be robots, some may be other equipment) is always a fairly manual process.
 
There is no central intelligence unit in our lines. Any robot can be the chief steward, which can be selected manually or automatically in our self-healing design. This company essentially had 32 central computers, each capable of handling the entire line.
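As a rough illustration of what "any robot can be the chief steward" could look like (all of the names below are made up for the example; this is not the actual software):

# Hypothetical sketch of self-healing chief-steward selection.
import time
from dataclasses import dataclass, field

@dataclass
class RobotNode:
    robot_id: int
    manual_priority: int = 0                      # operator can pin a preferred steward
    last_heartbeat: float = field(default_factory=time.time)

    def healthy(self, now: float, timeout: float = 2.0) -> bool:
        return (now - self.last_heartbeat) < timeout

def elect_steward(robots, now=None):
    """Manual selection wins; otherwise the healthy robot with the
    lowest id takes over (a simple deterministic fallback rule)."""
    now = time.time() if now is None else now
    healthy = [r for r in robots if r.healthy(now)]
    if not healthy:
        raise RuntimeError("no healthy robot available to act as steward")
    return max(healthy, key=lambda r: (r.manual_priority, -r.robot_id))

# 32 robots; the operator pins robot 7. If robot 7 stops heart-beating,
# the selection automatically falls back to another healthy robot.
line = [RobotNode(i) for i in range(32)]
line[7].manual_priority = 1
print(elect_steward(line).robot_id)   # 7
line[7].last_heartbeat -= 10          # simulate robot 7 going silent
print(elect_steward(line).robot_id)   # 0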
Consciousness is an emergent phenomenon, and perhaps there are gradations of consciousness.
We live in a singularity.
 
Not that it's a big deal, but you have previously mentioned working in AR for a specific company (which you named). You also mentioned working on Samsung AR.

I'm not sure if that's the same as this work though.
My previous work writing AR scripting is still very much in play in what I do today. The libraries I write now are fully capable of using my AR scripting, as I have converted it all to Python.
I'm headed up to Rego Park, NY soon to write some libraries for a company with Yaskawa machines. These new machines are excellent in that they don't have ANY bearings at all. They all have fully magnetic articulating attachments and elbows. I went to a demonstration yesterday and they are dead silent in their operation. They are located next to a child day care center which used to complain about all of the noise coming from next door. Or, hopefully it's appropriate that I can say (after we finish) that they "USED" to complain about the noise. Anyway... the company we are going out to write hand-in-glove (HIG) libraries for has a new double-line configuration that we have never heard of.
So, there are 2 assembly lines that sit almost side by side, except for the fact that there is a row of robots separating them in a serpentine fashion. The sandwich looks like this from a top view: far left, 16 robots... then an assembly line... then 14 robots... then an assembly line... then 16 robots. The 14 robots in the middle work both assembly lines. It's almost like having one line making Model S's and the other line making Model X's, with the middle row of robots knowing how to make both while spinning around from line to line performing functions where needed. Actually, in this scenario each line could make both S's and X's together, however the company doesn't want to do that.

Anywhoo... the first thing that is unique about these robots is that the outside robots are mobile. They leave the assembly line and go out to retrieve the parts they need for the line. Parts won't be brought to them. It's pretty easy for us in that all we have to do is go over to the parts bin, pick up the part from the front of the bin (the parts slide down to the front each time the front one is removed) with our HIG, and then go over to the assembly line and install the part. There is no welding or anything here; it's an adhesive operation that joins parts together. Keep in mind that these assembly lines don't require the part to be put onto a belt or anything.
<----- We didn't do this, but this looks like a HIG program. The robot is moving like a hand, not like a program. The robot would have to figure out how to move its arm to follow the HIG movement. It looks like a demo, which is probably why it's not really touching the case, but you get the picture.
The most unique part of the programming that we will have to write HIG scripts for is the fact that each robot will have to be able to swap out parts on the others. If any one of a robot's sensors, servos, or the like goes bad, each robot can request a part replacement from another and have that robot replace said part on the broken robot. We haven't arrived to implement this yet, however I believe everyone in this forum can see where this is going (closer to "Tesla Level 5" AP for robots).
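Purely to illustrate the kind of coordination that implies, here is a minimal Python sketch; every name in it (PartRequest, Robot, dispatch) is hypothetical and not part of any real HIG library:

# Illustrative only: one robot detects a bad servo and a peer swaps the part.
from dataclasses import dataclass

@dataclass
class PartRequest:
    requester_id: int
    part: str                              # e.g. "wrist_servo"

class Robot:
    def __init__(self, robot_id, spares):
        self.robot_id = robot_id
        self.spares = spares               # spare parts this robot can fetch
        self.faults = []

    def self_check(self):
        # Turn any detected faults into replacement requests.
        return [PartRequest(self.robot_id, part) for part in self.faults]

    def can_service(self, req):
        return self.spares.get(req.part, 0) > 0

    def service(self, req):
        self.spares[req.part] -= 1
        print(f"robot {self.robot_id} replaces {req.part} on robot {req.requester_id}")

def dispatch(requests, fleet):
    # Hand each request to the first peer that has the spare part on hand.
    for req in requests:
        helper = next((r for r in fleet
                       if r.robot_id != req.requester_id and r.can_service(req)), None)
        if helper is not None:
            helper.service(req)

# Robot 3 reports a bad wrist servo; robot 5 has a spare and swaps it in.
fleet = [Robot(i, {"wrist_servo": 1} if i == 5 else {}) for i in range(8)]
fleet[3].faults.append("wrist_servo")
dispatch(fleet[3].self_check(), fleet)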
The company we are going to be working for here assured us that they are not a robot company and are not looking to become one, nor to hire anyone on staff to watch robots. They want to finance all of the self-healing they can, turn the lights off, and have the robots work 24 hours a day in the dark. We'll see.

We don't sell or promote Yaskawa or anyone else. We just write libraries and code, and sometimes install whatever hardware and libraries people want.
 
One last post concerning the robot assembly lines I write Python libraries for.
I'd be interested to know more about the code underpinnings.

It sounds to me like you're moving heavily towards declarative programming, i.e. specifying results, with the generic programs running on the robots looking for the best ways to reach the results. (I've always liked declarative programming.) You must have quite a lot of goal-seeking library routines already written and running.

I've been criticising people who think "level 5 self-driving" will be ready soon because the problem of specifying the desired behavior has not been solved, making the declarative programming problem... unsolved. (In essence, we do not have anywhere near an accurate and complete description of how a self-driving car should behave.) In industrial automation, by contrast, that declaration problem pretty much has been entirely solved; we do know the specs of the resulting product. You can simply state that the weld needs to be this strong, input the properties of the materials, etc.
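As a toy example of that contrast: in a declarative style you state the required outcome and constraints and let a search routine find the process parameters, rather than scripting each step. The spec fields and the "strength model" below are invented purely for illustration:

# Toy declarative weld spec: state the result, let a solver find parameters.
from itertools import product

spec = {"min_strength_mpa": 320, "max_heat_input_kj_mm": 1.5}

def predicted_strength(current_a, speed_mm_s):
    # Stand-in for a real process model or lookup table.
    return 0.9 * current_a - 5.0 * speed_mm_s

def heat_input(current_a, speed_mm_s):
    return (24.0 * current_a / 1000.0) / speed_mm_s       # kJ/mm, assuming 24 V

candidates = product(range(200, 401, 10),                 # weld current, A
                     [x / 10 for x in range(30, 81, 5)])  # travel speed, mm/s

feasible = [(i, v) for i, v in candidates
            if predicted_strength(i, v) >= spec["min_strength_mpa"]
            and heat_input(i, v) <= spec["max_heat_input_kj_mm"]]

# Pick the feasible setting with the lowest heat input (if any exists).
best = min(feasible, key=lambda p: heat_input(*p)) if feasible else None
print("current (A), speed (mm/s):", best)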
 
Thank you for all of the interesting details! I do software development on surgical simulators which use haptic devices (much, much smaller robotic arms) to provide force feedback to the user, and I’m very interested in the details about their much larger cousins.

From a layman's perspective it does seem pretty sketchy to have the robots be self-optimizing if they don't have a complete understanding of the various parts they are manipulating. I.e., I can see that the robot could figure out that adding a rotational force removes the need for a counterweight, but what if that same rotational force stresses a weld in a way that it wasn't designed for? I guess that's why they are suggestions only, and they require approval before they are put into practice? Otherwise it seems like a recipe for unintended consequences.
 
I agree with your assessment.

There are some 72 factors on that counterweight assembly line that constitute a more efficient assembly line, and the robots constantly tally up the level of effectiveness of all 72 factors. What we are seeing more and more from our customers is a concern about their robots becoming more and more human. We constantly have to remind our customers that our programs and libraries are still extremely linear. In other words... the robots aren't thinking... they are just calculating a limited amount of data in a defined amount of time and then producing an alert if needed.
The final robot performs a rigidity test of the final frame before it makes its way to paint. This final robot attempts to twist and turn the frame with powerful amounts of stress every now and again. We use constant data from the line to determine whether a more efficient line can be had by the customer.

For instance: 5 months ago, robots on a line that we built came up with a 22% reduction in energy and an 18% reduction in repair costs, which would have primarily come from a 2.3% reduction in assembly line speed. The customer said no. The customer actually wanted to increase the output speed of the line. We provided several options... even to the point of purchasing 2 more robots that would keep their throughput where it is while at the same time reducing the net assembly line speed. It was beautiful. All we had to do was install the additional robots at the end of the line and network them in. No additional programming was needed, as the original robots handed out instructions to the 2 new ones. That was one of the first times we did that, and it was beautiful.
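A back-of-the-envelope illustration of why that works (the numbers here are made up, not the customer's): on a serial line the slowest station sets the output, so doubling up that station with an extra robot lets every other station run slower without losing units per hour.

# Invented numbers, purely to illustrate the trade-off described above.
cycle_s = [42, 45, 60, 44]                    # seconds per unit at each station
units_per_hour = 3600 / max(cycle_s)          # bottleneck-limited output
print(units_per_hour)                         # 60.0

# Add a second robot at the 60 s bottleneck: its effective cycle halves.
cycle_s[2] = 60 / 2

# The whole line can now run 30% slower per cycle and still hit the target.
slowed = [c * 1.30 for c in cycle_s]
print(3600 / max(slowed) >= units_per_hour)   # True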

Again, the robots we programmed on this line have approx. 72 measurables, and we have set an improvement target of 10% to determine whether an improvement alert comes back to us. In other words, if only 7 of the 72 parameters find that their number can improve... we won't receive an alert for improvement. However, there are customers who allow the robots to make automatic improvements in 1% increments as long as the final product isn't impacted.
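For concreteness, that 10% rule works out to a simple count over the measurables. A tiny sketch (names hypothetical) showing why 7 improvable parameters out of 72 stays under the threshold:

# Hypothetical sketch of the improvement-alert rule described above.
def should_alert(improvable, total=72, threshold=0.10):
    # Alert only when the share of parameters that can improve
    # meets or exceeds the threshold.
    return improvable / total >= threshold

print(should_alert(7))    # False: 7/72 is about 9.7%, under the 10% target
print(should_alert(8))    # True:  8/72 is about 11.1%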

I really don't think that the software we both write is very different in scope, even though the size and function might be totally different, and we probably write code in different languages.
 
Thanks for posting about how advanced the robots are getting. I didn't realize industrial robots were so advanced. I am an electronics tech, but I work in a different industry, so it would be a large jump for me to get into industrial robots. I might have to think about it; making the jump means a drastic pay cut for a few years since my experience isn't directly applicable, but it would make learning quick.
 
I've been criticising people who think "level 5 self-driving" will be ready soon because the problem of specifying the desired behavior has not been solved, making the declarative programming problem... unsolved. (In essence, we do not have anywhere near an accurate and complete description of how a self-driving car should behave.)
Although, we do have a very good idea of how a self-driving car should not behave.
 
Wow - simply WOW. I never knew, or even had a reason to ask, how robots "think". Very informative, instructional, and entertaining.
Thank you Garlan for the clear depictions in your writing. You are a gem.
 
I own the Global X Robotics & Artificial Intelligence ETF (BOTZ); they have an infographic on this topic: Robotics and Artificial Intelligence
And on to thin ice: your robots will take up the slack in a community with aging workers. Good. Your robots may take away work opportunities/jobs in a market that is not gentrifying (Detroit?). Bad. How do you reconcile the need for "more jobs" - a common piece of political banter - with the shift to "overseas" and robots?
 
Sorry, when I invested in BOTZ I didn't see the "Doom & Gloom" that you somehow found. Instead I saw companies like Intuitive Surgical, which helps surgeons perform more complex operations while being less intrusive. Other top holdings are NVDA & MBLY, both in the field of AI & Autopilot, hoping to reduce car accidents. ABB is a large company involved in solar, wind & battery storage, along with making robots, etc.
Robotics & Artificial Intelligence ETF
 
I LOVE robots. I've had da Vinci surgery. I've invested in early robot startups. And the description of Kuka bots on the Tesla line is fascinating. I was looking for talking points to counter the "job theft" banter that could gather momentum.