|A brief history of AI||22/04/2020|
With the use of AI growing at a seemingly exponential rate in today's industrial landscape, you could be forgiven for thinking that the technology is a relatively new phenomenon. However, it has been around for longer than you might think. Here, Sophie Hand summarises the history of AI, highlighting some important milestones.
Alan Turing is one of the most well-known names in the history of computing. During the Second World War, Turing’s work on cracking the Enigma code, used by the German army to send secure messages, formed the foundation of machine learning.
Turing suggested that machines, like people, could use reasoning to solve problems or make decisions. In 1950, he described a way of measuring whether we can declare that a machine is intelligent — Turing called it The Imitation Game. Commonly referred to as The Turing Test, the method involves a human, a machine and a participant that determines which is which. If a computer converses with a human without the individual realising it is a machine, the computer passes the test.
However, Turing and the rest of the industry were held back by the limitations of computers at the time: they suffered from a lack of memory and storage and were extremely expensive. They were therefore limited to big companies and top universities.
The Dartmouth Conference
In 1956, John McCarthy, an American computer scientist, organised The Dartmouth Conference, an event where top minds from leading universities came together to brainstorm. It was here that the term artificial intelligence was officially coined, bringing together several terms including cybernetics, automata theory and information processing.
In the years following the conference, AI development went from strength to strength. One promising development came in 1966 with the first chatbot, known as ELIZA and developed by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT). Because ELIZA communicated via text in human language, rather than in computer code, it was an early example of natural language processing. ELIZA was an early ancestor of today’s chatbots, such as Alexa and Siri, which can now communicate using speech as well as text.
Due to the progress achieved between 1956 and 1973, this period became known as the first AI summer. Researchers were optimistic in their predictions about the future of AI, and computers were performing more and more tasks, from speaking English to solving algebraic equations.
The period from 1974 to 1980 became known as the first AI winter
Based on early successes, research and funding were channelled into furthering AI, but at the time, computers still could not process the amount of data required for successful application. For example, one program for analysing the English language could only handle an unhelpful vocabulary of 20 words. The period from 1974 to 1980 became known as the first AI winter — funders realised that research was under-delivering on its goals and withdrew their support.
Summer comes back around
In 1981, a valuable commercial purpose for AI was found and this attracted investment back into the field. Ed Feigenbaum and others invented a new type of AI, called expert systems. Instead of focussing on general intelligence, expert systems used a series of rules to automate specific tasks and decisions in the real world. The first successful implementation, known as R1, was introduced by the Digital Equipment Corporation to configure the company’s orders and improve accuracy. Japan also invested heavily in computers designed to apply AI, and America, the UK and the rest of Europe followed suit.
Unfortunately, the excitement ended in disappointment. Apple and IBM introduced general purpose computers more powerful than those designed for AI, demolishing the AI industry. Funding in America dried up, as it did in Japan following the failure of a flagship project.
A change of approach
In 1988, researchers at IBM published a paper that introduced principles of probability when tackling automated translation between French and English. The approach then switched to determining the probability of outcomes based on data, rather than training them to determine rules. This is closer to the cognitive processes of humans and has formed the basis of today’s machine learning.
AI developed radically during the 1990s, particularly due to the increasing level of computational power. One highlight came in 1997, when a chess computer known as Deep Blue beat the world chess champion, Garry Kasparov. Another AI-versus-expert milestone came in 2016, when DeepMind’s AlphaGo beat the 18-time world champion Lee Sedol at Go.
The future of AI
The development of new technologies, such as autonomous vehicles, provides high-profile use cases for AI. In 2018, Waymo commercialised the first driverless taxi service in Phoenix. AI has become a common day-to-day technology for smartphone users, Googlers and manufacturing professionals.
The history of AI has featured peaks and troughs, with both interest and funding fluctuating. It hasn’t been easy to get to where we are today, but AI is only going to get bigger.
Sophie Hand is UK country manager at automation equipment supplier EU Automation
|Has the strainwave gear had its day?||11/10/2019|
Sophie Hand gives her take on the state of the robotic gearing industry and what is to come.
Most robots, from industrial arms to space exploration rovers and consumer devices, are actuated by an electric motor coupled with a gearbox on their joints. As the motor spins, the gearbox acts as a speed reducer and torque multiplier. There are numerous types of gears, all suited to different applications – spur, worm, planetary and cycloidal gears all have their place.
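As a rough illustration of that speed-for-torque trade, here is a minimal Python sketch; the motor figures, reduction ratio and mechanical efficiency are hypothetical, not taken from any particular product.

```python
def gearbox_output(speed_rpm, torque_nm, ratio, efficiency=0.9):
    """Idealised gearbox: output speed drops by the reduction ratio,
    while torque is multiplied by it (scaled by mechanical efficiency)."""
    return speed_rpm / ratio, torque_nm * ratio * efficiency

# A hypothetical 3,000 rpm, 0.5 Nm servo motor through a 100:1 reducer:
speed, torque = gearbox_output(3000, 0.5, 100)
print(speed, torque)  # 30.0 rpm and 45.0 Nm at the joint
```

The same relationship explains why high-ratio reducers let small, fast motors drive heavy robot joints.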
A major issue facing robot manufacturers operating geared motors is backlash, otherwise known as the lost motion between the gear teeth. Backlash is a design trait built into many speed reducers so that the gears mesh without binding and so that lubrication can be applied to lessen the chance of overheating or damaging the teeth. While typical industrial robots have an accuracy of around ±0.1mm – and many can be even more accurate – gear backlash can reduce this accuracy in robots and machine tools, which is why it concerns manufacturers.
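To see why even a small amount of backlash matters at these accuracies, a small-angle sketch (all figures hypothetical) converts angular play at one joint into linear error at the arm tip:

```python
import math

def tip_error_mm(backlash_arcmin, arm_length_mm):
    """Linear positioning error at the arm tip caused by angular backlash
    at a single joint, using the small-angle approximation."""
    backlash_rad = math.radians(backlash_arcmin / 60.0)
    return arm_length_mm * backlash_rad

# One arc-minute of play on a joint driving a 1 m arm:
print(round(tip_error_mm(1, 1000), 3))  # 0.291 mm, already beyond ±0.1 mm
```

In a multi-joint arm the errors from each gearbox compound, which is why low-backlash reducers command such a premium.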
For many robotics applications, strain wave gears, also known as Harmonic Drives after the company that trademarked the technology, are the first choice. This is because they boast zero backlash, which leads to a higher-accuracy robot – important in precision manufacturing or medical applications – and they are lightweight and compact. They also have high gear ratios when compared with planetary gears. They are made up of three major parts: a wave generator, a flex spline and a circular spline.
high-precision machining of gears with minimal backlash isn’t cheap
The strain wave gear is therefore in hot demand, but its high-precision manufacturing process, combined with its popularity, means that there is a long lead time, creating a bottleneck for robot manufacturing. Also, high-precision machining of gears with minimal backlash isn’t cheap, and with typical industrial robots containing around six gears and surgical robots needing 32, the costs can quickly add up.
As robot technology evolves, the industry could greatly benefit from a scalable, flexible, accessible speed reduction technology. There is also an opportunity to create a gear with higher strength, lower noise, lower vibration and less need for maintenance.
While a broken-down gear is not a disaster, as an automation parts supplier can source you a replacement, it can cause downtime, disrupt production and add unexpected costs. As well as robotics, the benefits of better gearing will be felt in other fields, such as automotive, aerospace, renewable energy and machinery.
What’s next for robot gears?
After half a century of incremental change in gearing technology, numerous start-ups are now coming forward with new approaches to the age-old speed reducer problem. IM Systems, a Dutch company, for example, has reinvented the gear so that it has no teeth. Its product, the Archimedes Drive, relies on frictional contact to convert speed into torque, using flexrollers made from hollow steel cylinders to compress and transmit the rotational power. While each point of contact isn’t as strong as interlocking mechanical teeth, up to 30 rollers can contribute to the traction.
Interestingly, the Archimedes Drive can reach impressive single-stage reduction ratios of up to 10,000:1 and boasts high torque, high accuracy and zero backlash. Its design, using bearing-like components, means that it would not be subject to the same lead times as other solutions and would help to deal with the shortage of gears. The reduction in cost, too, could make it an enticing option for robot manufacturers who require a large number of gears. In addition, the company can design variations of its drives for use in any axis of the robot.
Another new business, Motus Labs, has also come up with a possible solution to many of the industry’s challenges: the Motus M-DRIVE. The technology was developed on the principles of kinesiology and kinematics, the study of human and non-human body movement, to simulate the motion of an athlete running; the model even includes hips, knees, legs and feet.
The M-DRIVE is a gearless design that replaces teeth with low-friction aluminium and plastic mating blocks. The new method maximises torque and minimises slippage and flexing. Other benefits include increased contact area and efficiency, lightweight materials and greater rigidity.
Sophie Hand is UK country manager at EU Automation
|Building a digital twin||05/09/2019|
Research company Gartner predicts that half of large industrial companies will be using digital twins to improve productivity by 2021. Jonathan Wilkins, director at industrial automation parts supplier EU Automation, explains the important factors that manufacturers should consider when building a digital twin.
Digital twin technology is not a new concept — manufacturers have created 3D renderings of computer-aided design (CAD) models for years, for asset management and prototyping. However, the increasing availability and affordability of technologies such as Internet of Things (IoT) enabled devices, data interpretation software and bandwidth mean that more manufacturers can use digital twins to improve their processes.
Digital twins are virtual versions of real objects, which could be a building or network of buildings, a product, a system or even a city. IoT sensors instantly transmit data from an object to its digital twin, giving manufacturers an accurate representation of the asset that adapts depending on what happens on the factory floor.
Cost is often a challenge for smaller manufacturers that want to digitalise their processes because most technologies, particularly digital twins, require an up-to-date IT infrastructure that not all facilities can afford. Regardless of the cost, all manufacturing businesses should learn about this emerging technology to prepare for when it disrupts the industry in the near future.
Smaller manufacturers should start small and scale up as they begin to see the advantages of the technology. For example, the business can implement a digital twin to monitor the performance of a single part in an asset, introduce more models of individual parts later and then bring them together to build a twin that monitors an entire machine or system.
The information modelled in a digital twin could include condition data, such as pressure, temperature or vibration; operational status, such as whether the device is online or offline; and the device’s context, such as its location in the facility and its relationship to other devices or systems.
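Those three categories can be sketched as a simple data structure; the `DeviceTwin` class and its field names are illustrative assumptions, not a standard digital twin schema:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceTwin:
    """Minimal sketch of the information a twin might model."""
    device_id: str
    # Condition data streamed from IoT sensors
    pressure_bar: float = 0.0
    temperature_c: float = 0.0
    vibration_mm_s: float = 0.0
    # Operational status
    online: bool = False
    # Context: where the device sits and what it connects to
    location: str = ""
    connected_to: list = field(default_factory=list)

pump = DeviceTwin("pump-01", pressure_bar=2.4, temperature_c=61.5,
                  online=True, location="line 3", connected_to=["valve-07"])
```

Starting from a record like this for one part, further twins can be added and linked via the context fields as the model scales up.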
Knowing the preferred outcome of using a digital twin will also help manufacturers to understand what other information to input into the system. For example, if a business wants to improve product quality and customer service, it can incorporate data from the sales and marketing team to simulate how a new product will create new opportunities.
"Sharing our likes and dislikes has become second nature and businesses do all they can for a good customer satisfaction rating. Combined with data-driven, hyper-personalized marketing and persuasive technologies, companies now have the power to really “move” customers: to directly read and influence their emotions and state of mind," explains Thijs Pepping, a trend analyst at Capgemini's VINT, the Sogeti Research Institute for the Analysis of New Technology. "And the frontrunners are already creating psychological profiles and digital twins of their users."
Including data about a machine, such as product details, technical specifications and warranty status, can help manufacturers using a digital twin to improve maintenance. With this information and the data about machine condition collected by the twin, engineers can tailor maintenance and repairs to the actual usage of the asset, instead of estimating when the asset requires maintenance based on its lifespan. An increased awareness of machine condition can also reduce unplanned downtime, as plant managers can source a replacement before the machine breaks down.
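A minimal sketch of that usage-based logic, with hypothetical service intervals and vibration thresholds, might look like:

```python
def maintenance_due(runtime_hours, service_interval_hours,
                    vibration_mm_s, vibration_limit_mm_s):
    """Flag a maintenance visit when actual usage reaches the service
    interval or when measured condition crosses a limit, rather than
    relying on a fixed calendar schedule."""
    return (runtime_hours >= service_interval_hours
            or vibration_mm_s >= vibration_limit_mm_s)

# Healthy machine, well inside its interval: no visit needed yet
print(maintenance_due(1200, 4000, 1.8, 4.5))  # False
# A vibration spike flags the asset long before the interval is reached
print(maintenance_due(1200, 4000, 5.1, 4.5))  # True
```

In practice the twin would feed live sensor values into checks like this, so engineers act on the asset's real state rather than its age.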
Manufacturers should prioritise cybersecurity when implementing a digital twin, because it will store valuable data that could be used to harm the business. Manufacturers should determine who will have access to the simulation, ensuring that only the necessary people can gain information or control of the asset. Considering cybersecurity during the design phase and clearly managing the type of information collected, where it is stored and who has access to it will reduce the risk of security breaches once the twin is implemented.
Digital twin technology can help a business to digitalise its processes and understand how asset management can improve maintenance, productivity, customer satisfaction and more. I suspect that Gartner might be underestimating the percentage of industrial companies that will be using digital twins by 2021. For an industry brought up on CAD, the technology could easily become just as second nature.
|Gripper tech: challenges & opportunities||23/07/2019|
Recent technological advances mean that robots can now perform many of the tasks we traditionally associate with the human hand. Here Sophie Hand shares some recent developments in gripping technology.
The ability to grip and manipulate objects has been central to the survival of the human race and, since 1969, gripper technology has done the same for robots. Advances in grippers offer incredible benefits for manufacturers in precision, performance and productivity. Manufacturers can use this end-effector tooling for picking, placing or packing objects. Grippers can handle hazardous materials without a safety risk to employees and can also take over repetitive tasks that may cause a repetitive strain injury in humans.
There is a wide variety of grippers on the market to suit different applications. One of the most basic forms is a parallel motion two-jaw gripper, which is commonly used for lifting objects. However, there are a number of different designs including bellows grippers, O-ring grippers and needle grippers. As well as their physical structure, grippers can vary in how they are powered, as they can be hydraulic, pneumatic or electric. Despite the number of grippers available on the market, there are still many tasks that are difficult for robots to accomplish. Typical industrial grippers were designed to be task-specific, which means they are not particularly versatile.
Alongside the development of industrial grippers, universities and researchers were developing technology that mimicked the human hand. This technology was challenging to develop, expensive and used a lot of power. In recent years, engineers around the world have driven incredible advances in robot grippers and these two worlds are now colliding. We now have technology that is affordable, energy efficient and flexible and is able to overcome many of the traditional challenges.
A gentle touch
It has historically been difficult for a robot to handle fragile objects with the correct force. For example, a robot handling fruit for a food and beverage company must hold items firmly enough that the fruit doesn’t slip out of its grip, but gently enough that items are not crushed or bruised. The softness of human hands allows for compliant contact, where our fingers mould against the surface of an object, but this is not inherent to a robot’s grippers, which are traditionally made of metal or other hard materials. Another challenge for robot grippers is to adapt to what they are feeling, something that comes naturally to humans.
However, businesses are working to overcome this and expand the applications of robots into environments where fragile objects are handled. Soft robotics is a subfield of robotics involving robots made from compliant materials, similar to living organisms, be it the tentacles of an octopus or the fleshy fingers of a human hand.
One start-up working in this area is Ubiros, which has developed an innovative electrically actuated gripper known as Gentle. The gripper has a soft enough touch that it can arrange flowers in a vase without damaging them, and its electric actuation delivers more precise control than a typical pneumatic device.
The human hand & beyond
Another historic challenge for grippers was dexterity. Many traditional gripper designs have two or three fingers, made of stiff material. While they are effective for picking and placing tasks, they may not be suited to more complex manipulation activities. To enable robots to complete tasks that require more dexterity, engineers have been developing technology that more closely resembles the human hand.
engineers have been developing technology that more closely resembles the human hand
One example of this is the RBO Hand 2, a human-like hand with five silicone fingers, developed by researchers at the Technical University of Berlin. The fingers are controlled by pressurised air, which causes them to curl and straighten when carrying out a task. The design means the hand can create complex grasping geometries, mechanically adapt to the shape of an object and exert low impact energy.
Another example of a company developing soft robotics based on the human hand is the Shadow Robot Company, one of Britain’s longest-running robotics firms. The company specialises in grasping and manipulation for robots and has developed the Dexterous Hand, with 20 actuated degrees of freedom, absolute position and ultra-sensitive touch sensors. The device is therefore suitable for automated tasks that require a close approximation of the human hand.
While a manufacturer is unlikely to use a robot to copy someone’s handwriting, they may have picking and placing or other handling jobs to automate. Developments in gripper technology and advances in soft robotics now mean that robots are overcoming traditional challenges and moving into new fields.
Sophie Hand is UK country manager at automation parts supplier EU Automation
|OPC UA in the smart factory||15/05/2019|
Here Sophie Hand, UK country manager at industrial parts supplier EU Automation, explains the universal language of automation equipment – OPC UA.
A single, unifying language is beneficial in industrial automation environments; if devices in a facility do not use the same communication protocol, their ability to share information is hindered. In the era of the Industrial Internet of Things (IIoT), connectivity is essential to remain competitive, which is why the OPC Foundation developed a cross-platform protocol.
Open Platform Communications Unified Architecture (OPC UA) is a machine-to-machine communication protocol for industrial automation. OPC UA is open-source, platform-independent and protocol-independent, and comes with a very rich, extensible data model. Importantly, OPC UA was designed with cybersecurity in mind: it is secure by design, with validation, encryption and authentication built in.
Thanks to OPC UA, equipment from different vendors can communicate, solving the interoperability pain-point for manufacturers. As well as vendor-to-vendor connection between programmable logic controllers (PLCs), the protocol can connect supervisory control and data acquisition (SCADA) systems, manufacturing execution systems (MES) and enterprise resource planning (ERP) systems for a vertical viewpoint of the factory. By connecting to the cloud, plant managers can remotely access information to gain an overview of the entire plant in real time.
OPC UA offers unified connectivity in a cost-effective way, because manufacturers don’t need to alter their existing machines. The OPC server will convert the communication protocol of the PLC or other piece of equipment into the OPC protocol. Consequently, even small and medium-sized businesses can now benefit from complete connectivity in their production processes. Because of its benefits, OPC UA is already used by 50 million machines worldwide.
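The conversion idea can be illustrated with a toy gateway that normalises vendor-specific readings into one shared address space, loosely mimicking what an OPC server does. The device functions, node IDs and units here are invented for the example; a real deployment would use an OPC UA server or SDK rather than plain Python dictionaries.

```python
def vendor_a_read():            # hypothetical PLC reporting in Celsius
    return {"TEMP": 61.5}

def vendor_b_read():            # hypothetical PLC reporting in Fahrenheit
    return {"t_f": 145.4}

def gateway_read():
    """Expose both devices under unified, vendor-neutral node names."""
    a, b = vendor_a_read(), vendor_b_read()
    return {
        "ns=2;s=LineA.Temperature": a["TEMP"],
        "ns=2;s=LineB.Temperature": round((b["t_f"] - 32) * 5 / 9, 1),
    }

print(gateway_read())  # both lines now report temperature in Celsius
```

The point of the sketch is that client software reads one consistent namespace, regardless of how each underlying device speaks.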
There are numerous benefits to a factory where all parts – and, even better, all systems – can communicate. For example, manufacturers with cloud-connected systems can remotely control processes in the plant, which means there is no need to travel to a site for simple fixes. If a component breaks down, the manufacturer can identify this quickly and order a replacement, limiting expensive downtime.
Communication problems may have prevented the Tower of Babel from being completed, but they don’t have to prevent your smart factory project from being successful. OPC UA provides an easy and effective way to get your entire factory talking.
|Blockchain vs the Tangle||21/08/2018|
Jonathan Wilkins, marketing director of EU Automation, explains how blockchain and the tangle differ and what both can do to improve manufacturing practice
Apple versus Microsoft, Uber versus Lyft, Amazon Video versus Netflix. Technology companies are constantly advancing their product or service offerings to remain competitive in their fields, particularly when they face a major rival. As cryptocurrencies become more popular, their creators are vying for new customers.
Cryptocurrencies, such as Bitcoin, have the potential to radically change the financial sector. They use technology that stores and processes sensitive information in a way that is easy to trace but difficult to hack.
Blockchain, the digital ledger that records cryptocurrency transactions, logs each set of transactions as a block and links the blocks into a chain. The data is distributed over a large network of computers, rather than held on just one, to protect the data and decentralise control.
Blockchain is in the early stages of development, so its future is unclear. However, there are currently some limitations stunting its growth.
Secure but slow
The main criticism of blockchain-based cryptocurrencies such as Bitcoin is speed.
Miners verify the transactions that are added to the blockchain and monitor the system to keep the network secure from hackers; they are incentivised to complete transactions because they receive a fee for every block created. The diffused nature of blockchain makes it inherently secure: any individual or group with malicious intentions would need to control 51 per cent of the nodes on the entire network to manipulate the transactions and balances.
Currently, the Bitcoin blockchain processes around 4.5 transactions per second, whereas Visa completes around 4,000 transactions per second. It would be difficult for a blockchain to reach this scale because as more blocks are created, the system becomes slower. For blockchain to become mainstream, issues with speed and scalability must be resolved.
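The mining work that throttles throughput can be illustrated with a toy proof-of-work loop; the block data and difficulty are made up, and real Bitcoin mining is vastly more demanding:

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 2) -> int:
    """Search for a nonce whose SHA-256 hash starts with `difficulty`
    zero hex digits, a toy version of the work that caps throughput."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"alice pays bob 1 BTC")
print(nonce)  # first nonce whose hash meets the difficulty target
```

Raising the difficulty makes forging a block exponentially harder, which is the source of both blockchain's security and its slowness.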
Enter the Tangle
The IOTA Foundation considered these limitations when it created the Tangle, a technology to rival blockchain. It abandoned the block-and-chain structure, choosing instead to develop a directed acyclic graph (DAG): a collection of linked nodes containing no cycles, which allows connectivity and transactions between humans and machines.
To complete a transaction in the Tangle, you must verify two other previous transactions. There is no need for miners to power the network – users can help the system grow. The Tangle also processes micropayments and machine-to-machine payments, encouraging machine connectivity and removing transaction fees.
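The approve-two-tips idea can be sketched as a toy DAG; this shows the structure only, and real IOTA tip selection and validation are far more involved:

```python
import random

# Toy Tangle: each new transaction approves up to two unconfirmed "tips",
# so ordinary users, not miners, secure the ledger.
tangle = {"genesis": []}   # transaction -> transactions it approves

def tips():
    """Transactions that no later transaction has approved yet."""
    approved = {tx for parents in tangle.values() for tx in parents}
    return [tx for tx in tangle if tx not in approved]

def add_transaction(name):
    # Approving tips is the "work" a user contributes to the network
    tangle[name] = random.sample(tips(), k=min(2, len(tips())))

for tx in ["t1", "t2", "t3", "t4"]:
    add_transaction(tx)
print(tangle)  # each transaction lists the one or two tips it approved
```

Because every newcomer confirms earlier transactions, adding traffic strengthens the graph instead of congesting it.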
Each node becomes more secure as more related transactions are made, making it more difficult for each transaction to be tampered with. The verification speed of the network increases as more devices join it, which means that the more devices there are using the Tangle, the more secure and scalable it becomes, a crucial advantage for cryptocurrencies vying for a position as the de facto alternative to non-digital currencies.
Blockchain and the Tangle are both in the early stages of development, so we are only starting to see the potential. However, when looking to the future, the Tangle could have more potential in manufacturing applications.
Blockchain and the Tangle are both in the early stages of development
IOTA developed the technology to help companies realise Industry 4.0 and markets the Tangle as the backbone of the Internet of Things (IoT). Any internet-connected device can securely complete transactions using the Tangle. As the technology matures, machines will be able to complete secure transactions without human interaction.
Despite being the most talked-about technology hosting cryptocurrencies, blockchain is not the only solution for secure online transactions. Competitors, such as IOTA, are developing technologies that remove the limitations of blockchain. In the future, rather than choosing a system on price or novelty, manufacturers should look to the goals of each one to determine which service will benefit them most.
|Ethical robot design||01/11/2016|
Despite it being a prolific cinematic theme – think The Matrix and The Terminator – robot ethics has not been discussed much in industry until now. However, the most recent technological advancements in the field have led to the introduction of a new UK standard for robot designers. Leroy Spence, head of sales development at EU Automation, explains the new standard and the impact on industry.
The British Standards Institution (BSI) has devised a new guideline for the ethical design and application of robots and robotic systems. It recognises potential ethical issues that can arise from the increasing number of automated and autonomous systems being introduced to industrial and consumer environments. It also emphasises that it must always be transparent who is responsible for the behaviour of the robot, even if it behaves autonomously. The standard is relevant to all robots and robotic systems including autonomous cars, medical robots, industrial robots and those used for personal care.
A committee of scientists, academics, philosophers, ethicists and users developed the standard which is intended for use by robot and robotic device designers and managers. The standard, BS 8611:2016, was originally presented in September 2016 at a conference in Oxford, UK, and is available for purchase on the BSI website.
The new standard begins similarly to Isaac Asimov’s three laws of robotics, first proposed in his science fiction short story Runaround in 1942. Asimov’s first law states a robot may not injure a human being or allow a human to come to harm through inaction. The second law rules that a robot must obey all instructions given by humans, except those that conflict with the first law. Finally, the third law dictates a robot must protect its own existence as long as this does not involve conflicting with the first two laws. Robots should therefore always be safe, secure and fit for purpose.
BSI guidelines for manufacturers on previously uncommon hazards include robot deception, robot addiction and the potential for a learning system to exceed its remit. The issue of whether forming an emotional bond with a robot is desirable is also covered: a particularly contentious subject if the robot interacts with the elderly or children. The standard also discusses the risks of a robot becoming sexist or racist, an issue that prominently surfaced when Twitter users influenced Microsoft’s AI chatbot, Tay, into spewing out offensive messages.
According to Alan Winfield, professor of robotics at the University of the West of England, this is the first published standard on robot ethics. However, the EU is also working on robot ethics standards, with a draft report issued in May 2016. This covers the ethical issues of an automated workforce and will lay the groundwork for the ethical development and design of robots.
If approved, the EU's proposals would become the first legal framework on the issue of robot ethics. The introduction of the new BSI standard could provide the impetus for bodies such as the EU, or even further afield, to consider legal action to safeguard humans from the ethical issues associated with the growing number of industrial and commercial robots.
In industry, standards on the ethical use of robots are of particular use. Traditionally, industrial machines were guarded and caged to be kept safely away from humans. Newer generations of robots are able to work alongside and even in collaboration with human workers, having sensors and the ability to learn, as well as other safety features.
Examples of collaborative industrial robots include ABB’s YuMi and Rethink Robotics’ Baxter. These collaborative robots can work alongside humans and make it easy to integrate automation into an industrial process.
Although collaborative robots are becoming more popular, it is still common for manufacturers to operate legacy industrial automation systems, which offer the benefits of industrial automation without the ethical concerns. For manufacturers concerned about the longevity of their industrial automation systems, but who are not ready to upgrade to the latest generation of cobots, sourcing legacy industrial parts doesn’t have to be difficult. A supplier of new and obsolete industrial automation parts, such as EU Automation, can provide replacement parts to safeguard the system’s future until the manufacturer is sure that an upgrade is necessary.
The BS 8611:2016 standard is one of the first signs that industry is starting to concern itself with ensuring robot behaviour is accountable, truthful and unprejudiced. The dystopian future of The Matrix is highly unlikely, but if we want to introduce robotics into industry and consumer environments on a wider scale, the ethical question should be at the forefront of our minds.