Structural topology optimization seeks to distribute material throughout a design domain so as to maximize a given performance objective. In this work, we solve the topology optimization problem by parameterizing the designs via recently introduced coordinate-based neural networks. Specifically, we show that networks with Fourier feature mapping can achieve state-of-the-art performance. Our method enables the realization of a range of designs on a single mesh by tuning the frequency content of the solutions independently of the finite element discretization grid. This frequency control offers attractive properties, such as mesh-independent results and sub-pixel filtering that yields designs suitable for upsampling. We demonstrate our method on the compliance minimization problem, optimizing for the stiffest possible structure within a weight budget for a prescribed set of loads.
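As a rough illustration of the coordinate-based parameterization described above, the following sketch builds a small Fourier-feature network that maps element coordinates to material densities; the layer sizes, frequency scale, and random weights are illustrative assumptions rather than the configuration used in the paper.

```python
import numpy as np

# Minimal sketch of a coordinate-based density parameterization with
# Fourier feature mapping (architecture and scales are illustrative).
rng = np.random.default_rng(0)

def fourier_features(xy, B):
    """Map 2D coordinates to sin/cos features; B controls the frequency content."""
    proj = 2.0 * np.pi * xy @ B.T                 # (N, num_freqs)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# Random frequency matrix; the scale (here 4.0) tunes how fine the
# resulting designs can be, independently of the FE discretization grid.
num_freqs, scale = 64, 4.0
B = rng.normal(0.0, scale, size=(num_freqs, 2))

# A small MLP mapping features to a material density in [0, 1].
W1 = rng.normal(0, 0.1, size=(2 * num_freqs, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.1, size=(64, 1));              b2 = np.zeros(1)

def density(xy):
    h = np.maximum(fourier_features(xy, B) @ W1 + b1, 0.0)   # ReLU layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))              # sigmoid output

# Evaluate the density field on a 64x64 grid of element centroids; in a full
# compliance-minimization loop the weights would be updated by a gradient
# computed through a finite element solve (omitted here).
xs = np.linspace(0.0, 1.0, 64)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
rho = density(grid).reshape(64, 64)
print(rho.shape, float(rho.min()), float(rho.max()))
```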
Advancements in additive manufacturing have enabled the design and fabrication of materials and structures not previously realizable. In particular, the design space of composite materials and structures has vastly expanded, and the resulting size and complexity has challenged traditional design methodologies, such as brute-force exploration and one-factor-at-a-time (OFAT) exploration, to find optimum or tailored designs. To address this challenge, supervised machine learning approaches have emerged to model the design space using curated training data; however, the selection of the training data is often determined by the user. In this work, we develop and utilize a reinforcement learning (RL)-based framework for the design of composite structures which avoids the need for user-selected training data. For a 5 × 5 composite design space comprised of soft and compliant blocks of constituent material, we find that, using this approach, the model can be trained using 2.78% of the total design space, which consists of 2^25 design possibilities. Additionally, the developed RL-based framework is capable of finding designs at a success rate exceeding 90%. The success of this approach motivates future learning frameworks to utilize RL for the design of composites and other material systems.
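The RL loop described above could be sketched along the following lines; the episode structure, action encoding, and the placeholder reward are assumptions made for illustration, and the paper's actual property evaluation (e.g., a simulated effective stiffness) would take the place of the toy objective.

```python
import numpy as np

# Minimal sketch of an RL loop over a 5x5 two-material design grid.
# State, actions, and the reward are illustrative placeholders.
rng = np.random.default_rng(0)
N = 25                                    # 5x5 blocks, each soft (0) or stiff (1)

def reward(design):
    # Placeholder objective: closeness of the stiff-block fraction to a target.
    return -abs(float(design.mean()) - 0.4)

Q = {}                                    # state (as bytes) -> action-value table
def q(state):
    return Q.setdefault(state.tobytes(), np.zeros(N))

eps, alpha, gamma = 0.2, 0.5, 0.9
for episode in range(2000):
    design = np.zeros(N, dtype=np.int8)   # start from an all-soft design
    for step in range(10):                # flip up to 10 blocks per episode
        a = rng.integers(N) if rng.random() < eps else int(np.argmax(q(design)))
        nxt = design.copy(); nxt[a] = 1 - nxt[a]
        r = reward(nxt)
        q(design)[a] += alpha * (r + gamma * q(nxt).max() - q(design)[a])
        design = nxt

best = max(reward(np.frombuffer(s, dtype=np.int8)) for s in Q)
print("states visited:", len(Q), "best reward found:", best)
```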
The textile industry touches many aspects of our daily lives, with clothing, furniture, vehicle interiors and covers, as well as a plethora of medical, sports, and leisure-driven specialized products. This research aims to expand the types of fabric properties that are available for design and manufacturing by introducing methods for modifying material stiffness and tensile characteristics. Specifically, this paper introduces a technique to incorporate anisotropic stitching to control the direction and strength of a fabric’s stretch through the use of an embroidery machine and computer-driven stitch design and planning. The contributions of this paper include: a method for specifying and controlling direction in stitch planning; a sequential stitch planner that incorporates both density and direction; and a showcase of results that support the value and uniqueness of this new manufacturing process for textile artifacts.
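A minimal sketch of the kind of direction- and density-aware stitch layout such a planner produces might look as follows; the spacing rule, segment length, and rectangular region are assumptions for illustration and not the paper's actual planner.

```python
import math

# Minimal sketch: lay out stitch segments that follow a specified direction,
# with row spacing controlled by a density parameter (all values illustrative).
def plan_stitches(width, height, angle_deg, density, stitch_len=3.0):
    """Return stitch segments (x0, y0, x1, y1) aligned with `angle_deg`."""
    angle = math.radians(angle_deg)
    dx, dy = math.cos(angle), math.sin(angle)
    spacing = 1.0 / density                  # rows per mm -> mm between rows
    stitches, y = [], 0.0
    while y + stitch_len * abs(dy) <= height:
        x = 0.0
        while x + stitch_len * abs(dx) <= width:
            stitches.append((x, y, x + stitch_len * dx, y + stitch_len * dy))
            x += stitch_len
        y += spacing
    return stitches

segments = plan_stitches(width=50, height=50, angle_deg=45, density=0.5)
print(len(segments), "stitch segments")
```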
Recent advances in low-cost FDM 3D printing and a range of commercially available materials have enabled integrating different properties, such as flexibility and conductivity, into a single object, assisting the fabrication of a wide variety of interactive devices through multi-material printing. Mechanically different materials such as rigid and flexible filament, however, display adhesion issues when bonded to each other, making the object vulnerable to coming apart. In this work, we propose Multi-ttach, a low-cost technique to increase the adhesion between different materials utilizing various 3D printing parameters with three specialized geometric structures: (1) bead and (2) lattice structures that interlock layers in vertical material arrangement, and (3) stitching in horizontal material arrangement. We approach this by modifying the geometry of the interface layer at the G-code level and adjusting processing parameters. We validate the results through mechanical testing using off-the-shelf materials and desktop printers, and demonstrate applicability through a range of existing applications that benefit from multi-material FDM 3D printing.
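To make the horizontal stitching idea concrete, a minimal sketch of emitting such a toolpath at the G-code level is shown below; the coordinates, amplitude, and extrusion model are illustrative assumptions rather than Multi-ttach's actual parameters.

```python
# Minimal sketch of emitting a "stitching" toolpath across a material
# interface at the G-code level, so the two materials interlock in plane.
def stitch_gcode(y_interface, x_start, x_end, amplitude=1.0, pitch=2.0,
                 z=0.2, e_per_mm=0.05):
    """Zig-zag across the interface line y = y_interface between x_start and x_end."""
    lines = [f"G1 Z{z:.2f} F1200"]
    prev = (x_start, y_interface - amplitude)
    lines.append(f"G0 X{prev[0]:.2f} Y{prev[1]:.2f}")   # travel to start point
    x, side, e = x_start, 1, 0.0
    while x <= x_end:
        nxt = (x, y_interface + side * amplitude)
        dist = ((nxt[0] - prev[0]) ** 2 + (nxt[1] - prev[1]) ** 2) ** 0.5
        e += dist * e_per_mm                             # simple linear extrusion model
        lines.append(f"G1 X{nxt[0]:.2f} Y{nxt[1]:.2f} E{e:.4f}")
        prev, side, x = nxt, -side, x + pitch
    return "\n".join(lines)

print(stitch_gcode(y_interface=50.0, x_start=10.0, x_end=30.0))
```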
3D printing offers the opportunity to perform automated restoration of objects to reduce household waste, restore objects of cultural heritage, and automate repair in medical and manufacturing domains. We present an approach that takes a 3D model of a broken object and retrieves proxy 3D models of corresponding complete objects from a library of 3D models, with the goal of using the complete proxy to repair the broken object. We input multi-view renders and point cloud representations of the query to neural networks that output learned visual and geometric feature encodings. Our approach returns complete proxies that are visually and geometrically similar to the broken query object model by searching for the learned encodings in the library of complete models. We demonstrate results for retrieval of complete proxies for broken object models with breaks generated synthetically using models from the ShapeNet dataset, and from publicly available datasets of scanned everyday objects and cultural heritage objects. By combining visual and geometric features, our approach shows consistently lower Chamfer distance than when either feature is used alone, and outperforms the existing state-of-the-art method in retrieving proxies for broken objects in terms of Chamfer distance. The 3D proxies returned by our approach enable understanding of object geometry to identify object portions requiring repair, to incorporate user preferences, and to generate 3D printable restoration components. Our code to perform broken object model generation, feature extraction, and object retrieval is available at https://git.io/JuKaJ.
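A minimal sketch of the retrieval step, with random vectors standing in for the learned visual and geometric encoders (which are not reproduced here), might look like the following; the fusion by concatenation and the cosine-similarity search are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of retrieval by combining visual and geometric encodings.
rng = np.random.default_rng(0)

def encode_views(model_id):   # placeholder for the learned multi-view image encoder
    return rng.normal(size=128)

def encode_points(model_id):  # placeholder for the learned point-cloud encoder
    return rng.normal(size=128)

def embedding(model_id):
    f = np.concatenate([encode_views(model_id), encode_points(model_id)])
    return f / np.linalg.norm(f)            # fused, unit-length feature vector

library = {f"model_{i}": embedding(f"model_{i}") for i in range(100)}

def retrieve(query_id, k=5):
    q = embedding(query_id)
    scores = {m: float(q @ f) for m, f in library.items()}   # cosine similarity
    return sorted(scores, key=scores.get, reverse=True)[:k]

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

print("top proxies:", retrieve("broken_query"))
a, b = rng.normal(size=(64, 3)), rng.normal(size=(80, 3))
print("Chamfer distance between two random point sets:", round(chamfer(a, b), 3))
```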
Applying textures to 3D models is a means of creating realistic-looking objects. This is especially important in the 3D manufacturing domain, as manufactured models should ideally have a natural and realistic appearance. Nevertheless, natural material textures usually consist of dense patterns and fine details. Embedding them onto 3D models is typically cumbersome, requiring long processing times and resulting in large meshes. This paper presents a novel approach for directly embedding fine-scale geometric textures onto 3D printed models by modifying the 3D printer’s head movement on the fly. Our idea is to embed 3D textures by revising the 3D printer’s G-code, i.e., incorporating texture details through modification of the printer’s path. Direct manipulation of the printer’s head movement allows for fine-scale texture mapping and editing on the fly during the 3D printing process. Thus, our method avoids computationally expensive texture mapping, mesh processing, and manufacturing preprocessing. This allows embedding detailed geometric textures of unlimited density, which can model manual manufacturing artifacts and natural material properties. Results demonstrate that our direct G-code textured models are printed robustly and efficiently in both space and time compared to traditional methods.
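A minimal sketch of perturbing the printer's path directly in G-code, in the spirit of the approach above, is given below; the regular-expression parsing, the sinusoidal height field, and the omission of extrusion recomputation are simplifying assumptions.

```python
import math, re

# Minimal sketch: displace absolute G1 XY moves along the in-plane normal of
# the path by a procedural texture (extrusion values are left unchanged).
def texture(x, y, z):
    return 0.3 * math.sin(0.8 * x) * math.sin(0.8 * y)   # illustrative height field

def displace_gcode(lines, z=0.2):
    out, prev = [], None
    for line in lines:
        m = re.match(r"G1 X([-\d.]+) Y([-\d.]+)(.*)", line)
        if not m or prev is None:
            if m:
                prev = (float(m.group(1)), float(m.group(2)))
            out.append(line)
            continue
        x, y, rest = float(m.group(1)), float(m.group(2)), m.group(3)
        dx, dy = x - prev[0], y - prev[1]
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length               # in-plane normal of the segment
        d = texture(x, y, z)
        out.append(f"G1 X{x + d * nx:.3f} Y{y + d * ny:.3f}{rest}")
        prev = (x, y)
    return out

path = [f"G1 X{10 + i:.1f} Y20.0 E{0.05 * i:.3f}" for i in range(20)]
print("\n".join(displace_gcode(path)))
```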
Additive manufacturing is typically conducted in a layer-by-layer fashion. A key step of the process is to define, within each planar layer, the trajectories along which material is deposited to form the final shape. The direction of these trajectories triggers an anisotropy in the fabricated parts, which directly affects their properties, from their mechanical behavior to their appearance. Controlling this anisotropy paves the way to novel applications, from stronger parts to controlled deformations and surface patterning.
This work introduces a method to generate trajectories that precisely follow an input direction field while simultaneously avoiding intra- and inter-layer defects. Our method produces spatially coherent trajectories that all follow the specified direction field throughout the layers, while providing precise control over their inter-layer arrangement. This allows us to generate a staggered layout of trajectories across layers, preventing unavoidable tiny gaps from forming tunnel-shaped voids throughout the part volume.
Our approach is simple, robust, easy to implement, and scales linearly with the input volume. It builds upon recent results in the procedural generation of oscillating patterns, generating a signal in the 3D domain that oscillates with a frequency matching the deposition bead width while following the input direction field. Trajectories are extracted with a process akin to marching squares.
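As a rough sketch of the per-layer extraction step, the snippet below builds a scalar field whose zero level-set yields trajectories spaced one bead apart for a constant direction field, and extracts them with a marching-squares routine; the constant field, grid resolution, and the use of scikit-image's contour finder are simplifying assumptions (the full method handles arbitrary direction fields and the inter-layer staggering described above).

```python
import numpy as np
from skimage import measure   # marching-squares contour extraction

bead_width = 0.4              # mm, illustrative deposition bead width
theta = np.deg2rad(30.0)      # deposition direction of this layer (constant field)
phase_shift = 0.0             # e.g. bead_width / 2 on the next layer to stagger beads

xs = np.linspace(0.0, 20.0, 400)
X, Y = np.meshgrid(xs, xs)
S = -np.sin(theta) * X + np.cos(theta) * Y            # signed coordinate normal to the field
field = np.sin(np.pi * (S + phase_shift) / bead_width)

# Zero level-set curves are the deposition trajectories, spaced one bead apart.
contours = measure.find_contours(field, 0.0)
print(len(contours), "trajectories extracted")
```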
Interactive fabrication aims to close the gap between design and fabrication, allowing for rich interactions with materials and reflection in action. Drawing from craft practice, we contribute software that enables real-time control of digital fabrication machines from a Computer-Aided Design (CAD) environment. Our software not only allows interactive control of toolpath geometry, but also enables the control of machine parameters such as speed, acceleration, or jerk. This creates new opportunities for toolpath and material exploration. We evaluate our software with a professional glass artist on a custom digital fabrication machine that can accommodate multiple tools such as brushes, engraving bits, or microscopes. Finally, we reflect on implications for machine control.
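A minimal sketch of how a CAD-side controller might stream both toolpath geometry and machine parameters is shown below, assuming a Marlin-style G-code firmware reached over a serial port via pyserial; the port name, the M204/M205 commands, and the acknowledgment handling are assumptions, and the custom machine in the paper may use a different protocol entirely.

```python
import serial   # pyserial; assumes a Marlin-style G-code firmware

def send(conn, line):
    """Send one command and wait for the firmware's reply before continuing."""
    conn.write((line.strip() + "\n").encode())
    return conn.readline().decode().strip()

def stroke(conn, points, feed=1200, accel=500, jerk=8):
    """Stream a toolpath while also setting motion parameters on the fly."""
    send(conn, f"M204 P{accel}")                 # acceleration (mm/s^2), Marlin-style
    send(conn, f"M205 X{jerk} Y{jerk}")          # jerk limits (mm/s), Marlin-style
    for x, y, z in points:
        send(conn, f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed}")

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as conn:   # hypothetical port
        brush_path = [(10 + i, 20.0, 1.5) for i in range(30)]
        stroke(conn, brush_path, feed=900, accel=300, jerk=5)
```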
Construction robots are increasingly popular in the architectural fabrication community due to their accuracy and flexibility. Because of their high degree of motion freedom, these tools are able to assemble complex structures with irregular designs, which advances architectural aesthetics and structural performance. However, automated task and motion planning (TAMP) for a robot to assemble non-repetitive objects can be challenging due to (1) a non-repetitive assembly pattern, (2) the need for continuous robotic motion throughout a sequence of movements, (3) a congested construction scene, and (4) occasional robot configuration constraints due to taught positions. Recent work has already begun to address these challenges for repetitive assembly processes, where the robot repeats a pattern of primitive behaviors (e.g. brick stacking or spatial extrusion). Yet, there are many assembly processes that can benefit from a non-repetitive pattern. For example, processes can change tools on an element-by-element level to accommodate a wider range of geometry.
Our work is motivated by the necessity of robotic modeling and planning for a recently published timber assembly process which utilizes distributed robotic clamps to press together interlocking joints. In addition to pick-and-place operations, the robot needs to move numerous tools within the construction scene, similar to a tool-change operation. In order to facilitate an agile process for architectural design, construction process design, and TAMP, we introduce a flowchart-based specification language that allows various designers to describe their design and construction intent and knowledge. A compiler can then translate the assembly description, sequence, process flowchart, and robotic setup into a plan skeleton. Additionally, we present a linear and a non-linear solving algorithm that can solve the plan skeleton for a full sequence of robot motions. This algorithm can be customized to take into account designer intuition, which can speed up the planning process. We provide a comparison of the two algorithms using the timber assembly process as our case study. We validate our results by robotically executing and constructing a large-scale real-world timber structure. Finally, we demonstrate the flexibility of our flowchart by showing how custom assembly actions are modeled in our case study. We also show how other recently published robotic assembly processes can be formulated using our flowcharts, illustrating the generalizability of the approach.
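A minimal sketch of what a plan skeleton and a linear (in-sequence) solving pass could look like is given below; the action names, the placeholder plan_motion routine, and the scene bookkeeping are illustrative assumptions, not the actual compiler output or solver.

```python
from dataclasses import dataclass
from typing import List, Optional

# Minimal sketch of a plan skeleton: an ordered list of actions whose
# trajectories are left unbound until a solver fills them in.

@dataclass
class Action:
    name: str                            # e.g. "pick_beam", "place_clamp", "press_joint"
    element: str                         # the assembly element the action concerns
    trajectory: Optional[list] = None    # filled in by the solver

def plan_motion(action: Action, scene: set) -> Optional[list]:
    # Placeholder motion planner: fail if trying to place an already placed element;
    # the real system would plan collision-free motion in the congested scene.
    if action.name.startswith("place") and action.element in scene:
        return None
    return [action.name]

def solve_linear(skeleton: List[Action]) -> bool:
    """Solve actions in sequence order; fail fast if any action has no feasible motion."""
    scene = set()
    for action in skeleton:
        traj = plan_motion(action, scene)
        if traj is None:
            return False
        action.trajectory = traj
        if action.name.startswith("place"):
            scene.add(action.element)     # update the construction scene
    return True

skeleton = [Action("pick_beam", "B1"), Action("place_beam", "B1"),
            Action("place_clamp", "J1"), Action("press_joint", "J1")]
print("plan skeleton solved:", solve_linear(skeleton))
```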