Freddy II

Freddy (1969–1971) and Freddy II (1973–1976) were experimental robots built in the Department of Machine Intelligence and Perception (later the Department of Artificial Intelligence, now part of the School of Informatics at the University of Edinburgh).

Technical innovations involving Freddy were at the forefront of the robotics field in the 1970s. Freddy was one of the earliest robots to integrate vision, manipulation and intelligent systems, and the system was notable for its versatility and for the ease with which it could be retrained and reprogrammed for new tasks. The idea of moving the table instead of the arm simplified the construction. Freddy also used a method of recognising the parts visually by applying graph matching to the detected features. The system used an innovative collection of high-level procedures for programming the arm movements, which could be reused for each new task.

In the mid-1970s there was controversy, in both the USA and the UK, about the utility of pursuing a general-purpose robotics programme. A BBC TV programme in 1973, referred to as the "Lighthill Debate", pitched James Lighthill, who had written a critical report for the science and engineering research funding agencies in the UK, against Donald Michie from the University of Edinburgh and John McCarthy from Stanford University. The Edinburgh Freddy II and the Stanford/SRI Shakey robots were used to illustrate the state of the art at the time in intelligent robotics systems.

Freddy Mark I (1969–1971) was an experimental prototype with 3 degrees of freedom, created by a rotating platform driven by a pair of independent wheels. The other main components were a video camera and bump sensors connected to a computer. The computer moved the platform so that the camera could see, and then recognise, the objects.

Freddy II (1973–1976) was a 5-degree-of-freedom manipulator with a large vertical 'hand' that could move up and down, rotate about the vertical axis and rotate objects held in its gripper around one horizontal axis. The two remaining translational degrees of freedom were generated by a work surface that moved beneath the gripper. The gripper was a two-finger pinch gripper. A video camera was added, as well as, later, a light stripe generator.

The Freddy and Freddy II projects were initiated and overseen by Donald Michie. The mechanical hardware and analogue electronics were designed and built by Stephen Salter (who also pioneered renewable energy from waves; see Salter's Duck), and the digital electronics and computer interfacing were designed by Harry Barrow and Gregan Crawford. The software was developed by a team led by Rod Burstall, Robin Popplestone and Harry Barrow, which used the POP-2 programming language, one of the world's first functional programming languages. The computing hardware was an Elliott 4130 computer with 384KB of RAM (128K 24-bit words) and a hard disk, linked to a small Honeywell H316 computer with 16KB of RAM which directly performed sensing and control.

Freddy was a versatile system which could be trained and reprogrammed to perform a new task in a day or two. The tasks included putting rings on pegs and assembling simple model toys, consisting of wooden blocks of different shapes: a boat with a mast and a car with axles and wheels. Information about part locations was obtained using the video camera and then matched to previously stored models of the parts.

It was soon realised in the Freddy project that the 'move here, do this, move there' style of robot behaviour programming (actuator- or joint-level programming) is tedious, and also does not allow the robot to cope with variations in part position, part shape and sensor noise. Consequently, the RAPT robot programming language was developed by Pat Ambler and Robin Popplestone, in which robot behaviour was specified at the object level. This meant that robot goals were specified in terms of desired position relationships between the robot, the objects and the scene, leaving the details of how to achieve the goals to the underlying software system. Although developed in the 1970s, RAPT is still considerably more advanced than most commercial robot programming languages.
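
To make the contrast concrete, here is a minimal sketch of the two styles in Python pseudocode. This is not RAPT syntax; every name below (move_to, relation, achieve, and the planner and scene objects) is invented for illustration.

```python
# Joint-level (actuator-level) style: every motion is spelled out, so a
# small change in where a part sits breaks the program.
def assemble_joint_level(robot):
    robot.move_to(x=120, y=45, z=10)   # hard-coded grasp position
    robot.close_gripper()
    robot.move_to(x=200, y=80, z=10)   # hard-coded placement position
    robot.open_gripper()

# Object-level style in the spirit of RAPT: the program states a desired
# spatial relationship and leaves the motions to the planning system.
def assemble_object_level(planner, scene):
    goal = scene.relation("wheel", "axle", against=True, coaxial=True)
    planner.achieve(goal)              # planner derives the arm movements
```

Because the object-level goal refers to the parts rather than to fixed coordinates, the same program keeps working when part positions vary, which is exactly the robustness the joint-level style lacks.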

The team of people who contributed to the project were leaders in the field at the time and included Pat Ambler, Harry Barrow, Ilona Bellos, Chris Brown, Rod Burstall, Gregan Crawford, Jim Howe, Donald Michie, Robin Popplestone, Stephen Salter, Austin Tate and Ken Turner. Also of interest in the project was the use of a structured-light 3D scanner to obtain the 3D shape and position of the parts being manipulated.

The Freddy II robot is currently on display at the Royal Museum in Edinburgh, Scotland, with a segment of the assembly video shown in a continuous loop.

Graph matching

Graph matching is the problem of finding a similarity between graphs. Graphs are commonly used to encode structural information in many fields, including computer vision and pattern recognition, and graph matching is an important tool in these areas. In these areas it is commonly assumed that the comparison is between the data graph and the model graph. The case of exact graph matching is known as the graph isomorphism problem; the problem of exactly matching a graph to a part of another graph is called the subgraph isomorphism problem.

Inexact graph matching refers to matching problems where exact matching is impossible, e.g. when the numbers of vertices in the two graphs differ. In this case it is required to find the best possible match. For example, in image recognition applications, image segmentation typically produces data graphs with many more vertices than the model graphs they are expected to match against. In the case of attributed graphs, even if the numbers of vertices and edges are the same, the matching may still be only inexact. Two categories of search methods are those based on the identification of possible and impossible pairings of vertices between the two graphs, and those that formulate graph matching as an optimization problem. Graph edit distance is one of the similarity measures suggested for graph matching; the corresponding class of algorithms is called error-tolerant graph matching.
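
As a minimal illustration of exact matching, the brute-force sketch below tests whether one graph can be embedded in another by trying vertex pairings. It is plain Python written for this article (not code from any system mentioned here), and its exponential search is only usable on very small graphs.

```python
from itertools import permutations

def subgraph_isomorphic(model, data):
    """Try to map the vertices of `model` one-to-one onto vertices of
    `data` so that every model edge is also a data edge. Graphs are
    (vertex_list, edge_collection) with data edges stored as frozensets."""
    m_verts, m_edges = model
    d_verts, d_edges = data
    for candidate in permutations(d_verts, len(m_verts)):
        mapping = dict(zip(m_verts, candidate))
        if all(frozenset((mapping[a], mapping[b])) in d_edges
               for a, b in m_edges):
            return mapping          # an embedding of model into data
    return None                     # no exact match exists

# A triangle embeds into a square that has one diagonal:
model = (["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")])
data = ([1, 2, 3, 4],
        {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]})
print(subgraph_isomorphic(model, data))   # e.g. {'a': 1, 'b': 2, 'c': 3}
```

Practical matchers replace this enumeration with pruning of impossible vertex pairings, or with optimization over a similarity measure such as graph edit distance, as described above.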

Structured-light 3D scanner

A structured-light 3D scanner is a device that measures the three-dimensional shape of an object by projecting light patterns, such as grids or stripes, onto it and capturing their deformation with cameras. This technique allows for precise surface reconstruction by analyzing the displacement of the projected patterns, which is processed into detailed 3D models using specialized algorithms. Due to their high resolution and rapid scanning capabilities, structured-light 3D scanners are used in various fields, including industrial design, quality control, cultural heritage preservation, augmented reality gaming and medical imaging. Compared to 3D laser scanning, structured-light scanners can offer advantages in speed and safety by using non-coherent light sources such as LEDs or projectors instead of lasers. This approach allows for relatively quick data capture over large areas and reduces the safety concerns associated with laser use. However, structured-light scanners can be affected by ambient lighting conditions and by the reflective properties of the scanned objects.

Projecting a narrow band of light onto a three-dimensionally shaped surface produces a line of illumination that appears distorted from perspectives other than that of the projector, and this can be used for geometric reconstruction of the surface shape (light sectioning). A faster and more versatile method is the projection of patterns consisting of many stripes at once, or of arbitrary fringes, as this allows the acquisition of a multitude of samples simultaneously. Seen from different viewpoints, the pattern appears geometrically distorted due to the surface shape of the object. Although many other variants of structured-light projection are possible, patterns of parallel stripes are widely used. The picture shows the geometrical deformation of a single stripe projected onto a simple 3D surface. The displacement of the stripes allows for an exact retrieval of the 3D coordinates of any detail on the object's surface.
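
As a rough sketch of the geometry behind light sectioning (simplified to a single light plane and an ideal pinhole camera; all variable names and numbers below are illustrative assumptions, not values from any particular scanner), depth is recovered by intersecting the camera ray through a pixel with the known plane of projected light:

```python
import math

def depth_from_light_section(u, f, baseline, theta):
    """Camera pinhole at the origin looking along +z; a pixel at
    horizontal image coordinate u (in pixels) sees along the ray
    x = z * u / f. The projector sits at x = baseline and casts a light
    plane tilted by theta toward the camera: x = baseline - z*tan(theta).
    Equating the two expressions gives the depth z of the lit point."""
    return baseline / (u / f + math.tan(theta))

# Illustrative numbers: focal length in pixels, baseline in metres.
z = depth_from_light_section(u=120.0, f=1400.0,
                             baseline=0.3, theta=math.radians(20))
print(f"depth z = {z:.3f} m, lateral x = {z * 120.0 / 1400.0:.3f} m")
```

Multi-stripe patterns apply the same intersection once per stripe, which is why each stripe must be individually identified, as discussed below.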

Two major methods of stripe pattern generation have been established: laser interference and projection. The laser interference method works with two wide, planar laser beam fronts. Their interference results in regular, equidistant line patterns, and different pattern sizes can be obtained by changing the angle between the beams. The method allows for the exact and easy generation of very fine patterns with unlimited depth of field. Disadvantages are the high cost of implementation, difficulties in providing the ideal beam geometry, and laser-typical effects such as speckle noise and possible self-interference with beam parts reflected from objects; typically there is also no means of modulating individual stripes, such as with Gray codes. The projection method uses incoherent light and basically works like a video projector. Patterns are usually generated by passing light through a digital spatial light modulator, typically based on one of the three currently most widespread digital projection technologies: transmissive liquid crystal, reflective liquid crystal on silicon (LCOS) or digital light processing (DLP; moving micro-mirror) modulators, which have various comparative advantages and disadvantages for this application. Other methods of projection could be, and have been, used, however. Patterns generated by digital display projectors have small discontinuities due to the pixel boundaries in the displays, but sufficiently small boundaries can practically be neglected, as they are evened out by the slightest defocus.
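
The Gray code modulation mentioned above can be sketched in a few lines: each projected frame carries one bit of every stripe's index, and consecutive stripe indices differ in only one bit, so a pixel that misreads a single frame is off by at most one stripe. The following is an illustrative Python sketch, not code from any particular scanner:

```python
def gray_encode(i):
    """Binary-reflected Gray code: adjacent indices differ in one bit."""
    return i ^ (i >> 1)

def gray_decode(g):
    """Recover the stripe index from its Gray code."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

def pattern_frames(num_stripes):
    """One black/white frame per bit: frames[k][s] is the intensity of
    stripe s in the k-th projected pattern (1 = bright, 0 = dark)."""
    bits = max(1, (num_stripes - 1).bit_length())
    return [[(gray_encode(s) >> k) & 1 for s in range(num_stripes)]
            for k in range(bits)]

# A camera pixel collects one bit per projected frame and decodes the
# number of the stripe that hit it:
frames = pattern_frames(8)                    # 3 frames cover 8 stripes
observed = [frames[k][5] for k in range(len(frames))]  # pixel on stripe 5
code = sum(bit << k for k, bit in enumerate(observed))
print(gray_decode(code))                      # -> 5
```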

A typical measuring assembly consists of one projector and at least one camera; for many applications, two cameras on opposite sides of the projector have been established as useful. Invisible (or imperceptible) structured light uses structured light without interfering with other computer vision tasks for which the projected pattern would be confusing. Example methods include the use of infrared light, or of extremely high frame rates alternating between two exactly opposite patterns.

Geometric distortions caused by optics and perspective must be compensated for by a calibration of the measuring equipment, using special calibration patterns and surfaces. A mathematical model is used to describe the imaging properties of the projector and cameras. Essentially based on the simple geometric properties of a pinhole camera, the model also has to take into account the geometric distortions and optical aberrations of the projector and camera lenses. The parameters of the camera, as well as its orientation in space, can be determined by a series of calibration measurements, using photogrammetric bundle adjustment.
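
A minimal sketch of such a camera model is shown below: a pinhole projection with a single radial distortion term. The parameter names follow common photogrammetry conventions, but the values are invented; real values come out of the bundle adjustment.

```python
def project(point_cam, f, cx, cy, k1):
    """Project a 3D point given in camera coordinates (x, y, z) to pixel
    coordinates, using a pinhole model with focal length f (in pixels),
    principal point (cx, cy) and one radial distortion coefficient k1."""
    x, y, z = point_cam
    xn, yn = x / z, y / z              # ideal (undistorted) image plane
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2                  # radial distortion factor
    return f * d * xn + cx, f * d * yn + cy

# Illustrative parameters only:
print(project((0.05, -0.02, 0.6), f=1400.0, cx=960.0, cy=540.0, k1=-0.08))
```

Bundle adjustment then refines f, cx, cy, k1 and the camera pose by minimizing the reprojection error of known calibration targets over many views.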

There are several depth cues contained in the observed stripe patterns. The displacement of any single stripe can be converted directly into 3D coordinates; for this purpose, the individual stripe has to be identified, which can for example be accomplished by tracing or counting stripes (the pattern recognition method). Another common method projects alternating stripe patterns, resulting in binary Gray code sequences that identify the number of each individual stripe hitting the object. An important depth cue also results from the varying stripe widths along the object surface: stripe width is a function of the steepness of a surface part, i.e. the first derivative of the elevation. Stripe frequency and phase deliver similar cues and can be analyzed by a Fourier transform. Finally, the wavelet transform has recently been discussed for the same purpose. In many practical implementations, series of measurements combining pattern recognition, Gray codes and the Fourier transform are obtained for a complete and unambiguous reconstruction of shapes. Another method, also belonging to the area of fringe projection, has been demonstrated, utilizing the depth of field of the camera. It is also possible to use projected patterns primarily as a means of inserting structure into scenes, for an essentially photogrammetric acquisition.

The optical resolution of fringe projection methods depends on the width of the stripes used and on their optical quality; it is also limited by the wavelength of light. An extreme reduction of stripe width proves inefficient due to limitations in depth of field, camera resolution and display resolution. Therefore the phase shift method has been widely established: a number of exposures, at least 3 and typically about 10, are taken with slightly shifted stripes. The first theoretical deductions of this method relied on stripes with a sine-wave-shaped intensity modulation, but the methods work with 'rectangular' modulated stripes, as delivered by LCD or DLP displays, as well. By phase shifting, surface detail of e.g. 1/10 of the stripe pitch can be resolved. Current optical stripe pattern profilometry hence allows for detail resolutions down to the wavelength of light, below 1 micrometer in practice, or, with larger stripe patterns, to approximately 1/10 of the stripe width. Concerning level accuracy, interpolating over several pixels of the acquired camera image can yield a reliable height resolution, and also accuracy, down to 1/50 of a pixel. Arbitrarily large objects can be measured with accordingly large stripe patterns and setups; practical applications are documented involving objects several meters in size.
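
A minimal sketch of the N-step phase-shifting calculation (assuming ideal sinusoidal stripes and evenly spaced shifts; the test data are synthetic):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: `images` holds N camera frames taken with
    the stripe pattern shifted by 2*pi/N between exposures. For
    I_k = A + B*cos(phi + 2*pi*k/N), the sums below isolate sin(phi) and
    cos(phi), giving the wrapped phase per pixel. Unwrapping and the
    system calibration then convert phase to height."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    return np.arctan2(-num, den)

# Synthetic check: a known phase ramp is recovered from 4 shifted frames.
true_phase = np.linspace(-1.0, 1.0, 5)
frames = [100 + 50 * np.cos(true_phase + 2 * np.pi * k / 4)
          for k in range(4)]
print(np.round(wrapped_phase(frames), 3))   # approximately the true ramp
```

The 1/10-of-a-stripe-pitch figure quoted above corresponds to resolving the recovered phase to a fraction of its 2-pi period.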

As the method measures shapes from only one perspective at a time, complete 3D shapes have to be combined from measurements taken from different angles. This can be accomplished by attaching marker points to the object and combining the perspectives afterwards by matching these markers. The process can be automated by mounting the object on a motorized turntable or a CNC positioning device; markers can also be applied to the positioning device instead of to the object itself. The 3D data gathered can be used to retrieve CAD (computer-aided design) data and models from existing components (reverse engineering), hand-formed samples or sculptures, natural objects or artifacts.
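
The turntable variant of this registration is simple to sketch, assuming an ideal vertical rotation axis through the origin (real systems calibrate the axis first); the code below is an illustration, not from any particular scanner:

```python
import numpy as np

def merge_turntable_scans(scans, step_deg):
    """Rotate each scan (an N x 3 point array captured after the table
    advanced by another `step_deg` degrees) back into the frame of the
    first scan, then stack everything into a single cloud. Assumes the
    rotation axis is the z axis through the origin."""
    merged = []
    for k, pts in enumerate(scans):
        a = np.radians(-k * step_deg)       # undo the table rotation
        rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        merged.append(pts @ rz.T)
    return np.vstack(merged)
```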

As with all optical methods, reflective or transparent surfaces raise difficulties. Reflections cause light to be thrown either away from the camera or right into its optics; in both cases, the dynamic range of the camera can be exceeded. Transparent or semi-transparent surfaces also cause major difficulties, and in these cases coating the surfaces with a thin opaque lacquer, just for measuring purposes, is common practice. A recent method handles highly reflective and specular objects by inserting a one-dimensional diffuser between the light source (e.g., projector) and the object to be scanned, and alternative optical techniques have been proposed for handling perfectly transparent and specular objects. Double reflections and inter-reflections can cause the stripe pattern to be overlaid with unwanted light, entirely eliminating the chance of proper detection; reflective cavities and concave objects are therefore difficult to handle. Translucent materials, such as skin, marble, wax, plants and human tissue, are also hard to handle because of the phenomenon of sub-surface scattering. Recently, there has been an effort in the computer vision community to handle such optically complex scenes by redesigning the illumination patterns. These methods have shown promising 3D scanning results for traditionally difficult objects, such as highly specular metal concavities and translucent wax candles. Although several patterns have to be captured per picture in most structured-light variants, high-speed implementations are available for a number of applications, and motion picture applications have also been proposed.
