2018

Wohlers Report

[18] Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.[1]

According to the Wohlers Report [..] and EY's Global 3D printing Report [..], additive manufacturing is one of the most disruptive technologies of our time, with applications in the automotive and aerospace industries, medical equipment development, and education. The technology can increase productivity, simplify fabrication processes, and relax the geometric constraints on part shapes.

Automated processes monitoring in 3D printing using supervised machine learning

[19] U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870, doi.org/10.1016/j.promfg.2018.07.111[2]

Abstract Quality monitoring is still a big challenge in additive manufacturing, popularly known as 3D printing. Detection of defects during the printing process will help eliminate the waste of material and time. Defect detection during the initial stages of printing may generate an alert to either pause or stop the printing process so that corrective measures can be taken to prevent the need to reprint the parts. This paper proposes a method to automatically assess the quality of 3D printed parts with the integration of a camera, image processing, and supervised machine learning. Images of semi-finished parts are taken at several critical stages of the printing process according to the part geometry. A machine learning method, support vector machine (SVM), is proposed to classify the parts into either 'good' or 'defective' category. Parts using ABS and PLA materials were printed to demonstrate the proposed framework. A numerical example is provided to demonstrate how the proposed method works.
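
A minimal sketch of the kind of supervised pipeline the abstract describes: features extracted from camera images of the part, and an SVM that labels each image 'good' or 'defective'. The HOG features, image sizes, and synthetic placeholder data below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: classify printed-part images as 'good' or 'defective'
# with an SVM, assuming features are extracted from grayscale camera frames.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def extract_features(image):
    """Histogram-of-gradients descriptor for one grayscale image."""
    return hog(image, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

# Placeholder data: in practice these would be camera snapshots taken at
# critical stages of the print, labeled after inspection.
rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))      # 40 grayscale frames (stand-in)
labels = rng.integers(0, 2, size=40)     # 0 = good, 1 = defective

X = np.array([extract_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```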

Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm

[20] L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. doi.org/10.1016/j.addma.2017.11.009[3]

Abstract Despite the rapid adoption of laser powder bed fusion (LPBF) Additive Manufacturing by industry, current processes remain largely open-loop, with limited real-time monitoring capabilities. While some machines offer powder bed visualization during builds, they lack automated analysis capability. This work presents an approach for in-situ monitoring and analysis of powder bed images with the potential to become a component of a real-time control system in an LPBF machine. Specifically, a computer vision algorithm is used to automatically detect and classify anomalies that occur during the powder spreading stage of the process. Anomaly detection and classification are implemented using an unsupervised machine learning algorithm, operating on a moderately-sized training database of image patches. The performance of the final algorithm is evaluated, and its usefulness as a standalone software package is demonstrated with several case studies.
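
As a rough illustration of the unsupervised patch-based idea (not the authors' specific pipeline), the sketch below extracts fixed-size patches from a powder-bed image, computes simple per-patch statistics, clusters them with k-means, and flags patches in rare clusters. Patch size, features, and cluster count are assumptions.

```python
# Generic sketch of unsupervised anomaly grouping on powder-bed image patches.
import numpy as np
from sklearn.cluster import KMeans

def patches(image, size=32, stride=32):
    for r in range(0, image.shape[0] - size + 1, stride):
        for c in range(0, image.shape[1] - size + 1, stride):
            yield image[r:r + size, c:c + size]

def patch_features(p):
    # Simple statistics per patch; a real system would use richer descriptors.
    gy, gx = np.gradient(p.astype(float))
    return [p.mean(), p.std(), np.abs(gx).mean(), np.abs(gy).mean()]

rng = np.random.default_rng(1)
layer_image = rng.random((256, 256))   # stand-in for one powder-bed photograph

X = np.array([patch_features(p) for p in patches(layer_image)])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Patches falling into rare clusters can be flagged for review as potential
# spreading anomalies (recoater streaks, debris, incomplete spreading, ...).
counts = np.bincount(labels)
rare = np.where(counts < 0.05 * len(labels))[0]
print("candidate anomaly clusters:", rare)
```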

A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints

[21] L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. DOI: 10.1007/s11263-018-1119-x.[4]

Abstract Both region-based methods and direct methods have become popular in recent years for tracking the 6-dof pose of an object from monocular video sequences. Region-based methods estimate the pose of the object by maximizing the discrimination between statistical foreground and background appearance models, while direct methods aim to minimize the photometric error through direct image alignment. In practice, region-based methods only care about the pixels within a narrow band of the object contour due to the level-set-based probabilistic formulation, leaving the foreground pixels beyond the evaluation band unused. On the other hand, direct methods only utilize the raw pixel information of the object, but ignore the statistical properties of foreground and background regions. In this paper, we find it beneficial to combine these two kinds of methods together. We construct a new probabilistic formulation for 3D object tracking by combining statistical constraints from region-based methods and photometric constraints from direct methods. In this way, we take advantage of both statistical property and raw pixel values of the image in a complementary manner. Moreover, in order to achieve better performance when tracking heterogeneous objects in complex scenes, we propose to increase the distinctiveness of foreground and background statistical models by partitioning the global foreground and background regions into a small number of sub-regions around the object contour. We demonstrate the effectiveness of the proposed novel strategies on a newly constructed real-world dataset containing different types of objects with ground-truth poses. Further experiments on several challenging public datasets also show that our method obtains competitive or even superior tracking results compared to previous works. In comparison with the recent state-of-art region-based method, the proposed hybrid method is proved to be more stable under silhouette pose ambiguities with a slightly lower tracking accuracy.
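
In schematic form, the hybrid formulation combines a region term evaluated in a band around the projected contour with a photometric term over foreground pixels; the weighting λ and the notation below are illustrative, not taken verbatim from the paper.

```latex
% Schematic hybrid cost over the 6-DoF pose \xi (notation illustrative):
% E_region evaluates foreground/background statistics in a band \Omega_b
% around the projected contour (level-set \Phi_\xi, Heaviside H, pixel-wise
% foreground/background likelihoods P_f, P_b); E_photo is a direct
% photometric alignment residual over foreground pixels \Omega_f.
\begin{aligned}
E(\xi) &= \lambda\, E_{\mathrm{region}}(\xi) + (1-\lambda)\, E_{\mathrm{photo}}(\xi),\\
E_{\mathrm{region}}(\xi) &= -\sum_{\mathbf{x}\in\Omega_b}
  \log\!\Big( P_f(\mathbf{x})\,H\big(\Phi_\xi(\mathbf{x})\big)
            + P_b(\mathbf{x})\,\big(1-H(\Phi_\xi(\mathbf{x}))\big) \Big),\\
E_{\mathrm{photo}}(\xi) &= \sum_{\mathbf{x}\in\Omega_f}
  \big( I(\pi(\xi,\mathbf{X})) - I_{\mathrm{ref}}(\mathbf{x}) \big)^2 .
\end{aligned}
```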

3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing

[22] K. Garanger, T. Khamvilai, E. Feron. 3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing. 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 465-470. DOI: 10.1109/CCTA.2018.8511509.[5]

Abstract 3D printing is rapidly becoming a commodity. However, the quality of the printed parts is not always even nor predictable. Feedback control is demonstrated during the printing of a plastic object using additive manufacturing as a means to improve macroscopic mechanical properties of the object. The printed object is a leaf spring made of several parts of different infill density values, which are the control variables in this problem. In order to achieve a desired objective stiffness, measurements are taken after each part is completed and the infill density is adjusted accordingly in a closed-loop framework. With feedback control, the absolute error of the measured part stiffness is reduced from 11.63% to 1.34% relative to the specified stiffness. This experiment is therefore a proof of concept to show the relevance of using feedback control in additive manufacturing. By considering the printing process and the measurements as stochastic processes, we show how stochastic optimal control and Kalman filtering can be used to improve the quality of objects manufactured with rudimentary printers.
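
A toy illustration of the closed-loop idea (not the authors' controller): assume each printed segment adds stiffness roughly linear in its infill density, estimate the unknown gain recursively from measurements with a scalar Kalman filter, and re-plan the remaining segments' densities to hit the target stiffness. The additive linear stiffness model and all numbers are assumptions made for illustration.

```python
# Toy closed-loop sketch: choose infill densities segment by segment so the
# summed stiffness hits a target, refining a noisy gain estimate after each
# measurement. The model k_i = a * d_i + noise is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
true_gain = 50.0            # stiffness gained per unit infill density (unknown to controller)
meas_noise = 2.0            # measurement standard deviation
target_stiffness = 120.0
n_segments = 4

a_hat, P = 40.0, 100.0      # initial gain estimate and its variance
R = meas_noise ** 2
stiffness_so_far = 0.0

for i in range(n_segments):
    remaining = n_segments - i
    # Plan: spread the remaining stiffness evenly over the remaining segments.
    d = (target_stiffness - stiffness_so_far) / (remaining * a_hat)
    d = float(np.clip(d, 0.05, 1.0))        # printable density range

    # "Print" the segment and measure its stiffness contribution.
    k_meas = true_gain * d + rng.normal(0.0, meas_noise)
    stiffness_so_far += k_meas

    # Scalar Kalman update of the gain estimate (observation model k = a * d).
    S = d * P * d + R
    K = P * d / S
    a_hat += K * (k_meas - a_hat * d)
    P *= (1.0 - K * d)

    print(f"segment {i}: density={d:.2f}, cumulative stiffness={stiffness_so_far:.1f}, gain estimate={a_hat:.1f}")

print("error vs target:", abs(stiffness_so_far - target_stiffness))
```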

Machine‐Learning‐Based Monitoring of Laser Powder Bed Fusion

[23] B. Yuan, G.M. Guss, A.C. Wilson et al. Machine‐Learning‐Based Monitoring of Laser Powder Bed Fusion. United States, 2018. DOI:10.1002/admt.201800136.[6]

Abstract A two‐step machine learning approach to monitoring laser powder bed fusion (LPBF) additive manufacturing is demonstrated that enables on‐the‐fly assessments of laser track welds. First, in situ video melt pool data acquired during LPBF is labeled according to the (1) average and (2) standard deviation of individual track width and also (3) whether or not the track is continuous, measured postbuild through an ex situ height map analysis algorithm. This procedure generates three ground truth labeled datasets for supervised machine learning. Using a portion of the labeled 10 ms video clips, a single convolutional neural network architecture is trained to generate three distinct networks. With the remaining in situ LPBF data, the trained neural networks are tested and evaluated and found to predict track width, standard deviation, and continuity without the need for ex situ measurements. This two‐step approach should benefit any LPBF system – or any additive manufacturing technology – where height‐map‐derived properties can serve as useful labels for in situ sensor data.
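
A compact sketch (PyTorch) of the kind of single convolutional architecture that could be trained three separate times on short melt-pool clips, one copy per label (track width, width standard deviation, continuity). Clip shape and layer sizes are assumptions, not the network from the paper.

```python
# Minimal sketch of one CNN applied to short melt-pool video clips; three
# copies of the same architecture would be trained separately for width
# (regression), width standard deviation (regression), and continuity
# (binary classification). Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TrackMonitorNet(nn.Module):
    def __init__(self, n_outputs=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, n_outputs)

    def forward(self, clips):              # clips: (batch, 1, frames, H, W)
        return self.head(self.features(clips).flatten(1))

clips = torch.randn(4, 1, 20, 64, 64)      # stand-in for labeled video clips
width_net = TrackMonitorNet(n_outputs=1)        # regression head
continuity_net = TrackMonitorNet(n_outputs=1)   # trained with a sigmoid/BCE loss
print(width_net(clips).shape)               # torch.Size([4, 1])
```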


2017

Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking

[24] B. Wang, F. Zhong, X. Qin. Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking. CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172[7]

Abstract This paper presents a monocular model-based 3D tracking approach for textureless objects. Instead of explicitly searching for 3D-2D correspondences as previous methods, which unavoidably generates individual outlier matches, we aim to minimize the holistic distance between the predicted object contour and the query image edges. We propose a method that can directly solve 3D pose parameters in unsegmented edge distance field. We derive the differentials of edge matching distance with respect to the pose parameters, and search the optimal 3D pose parameters using standard gradient-based non-linear optimization techniques. To avoid being trapped in local minima and to deal with potential large inter-frame motions, a particle filtering process with a first order autoregressive state dynamics is exploited. Occlusions are handled by a robust estimator. The effectiveness of our approach is demonstrated using comparative experiments on real image sequences with occlusions, large motions and cluttered backgrounds.
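
A simplified 2D illustration of the core cost construction: the summed edge distance field value at the projected model contour is treated as the matching error over the pose. The paper derives analytic derivatives and adds a particle filter for 6-DoF tracking; here a generic numerical optimizer on a synthetic 2D edge map is used purely to show the idea.

```python
# Simplified 2D illustration of pose optimization in an edge distance field.
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import minimize

# Synthetic "query image" edges: outline of a square centered at (70, 60).
edges = np.zeros((128, 128), dtype=bool)
edges[40:101, 30], edges[40:101, 90] = True, True
edges[40, 30:91], edges[100, 30:91] = True, True
dist_field = distance_transform_edt(~edges)     # distance to the nearest edge

# Model contour: the same square outline, expressed in object coordinates.
t = np.linspace(-30, 30, 60)
contour = np.concatenate([
    np.stack([t, np.full_like(t, -30)], 1), np.stack([t, np.full_like(t, 30)], 1),
    np.stack([np.full_like(t, -30), t], 1), np.stack([np.full_like(t, 30), t], 1)])

def cost(pose):
    ty, tx, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    pts = contour @ np.array([[c, -s], [s, c]]).T + np.array([ty, tx])
    # Holistic matching distance: sample the distance field at contour points.
    return map_coordinates(dist_field, pts.T, order=1, mode="nearest").mean()

res = minimize(cost, x0=np.array([60.0, 55.0, 0.2]), method="Nelder-Mead")
print("estimated pose (row, col, angle):", res.x)   # roughly (70, 60, 0)
```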

Foundations of Intelligent Additive Manufacturing

[25] K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. Published in ArXiv, May 2017.[8]

Abstract During the last decade, additive manufacturing has become increasingly popular for rapid prototyping, but has remained relatively marginal beyond the scope of prototyping when it comes to applications with tight tolerance specifications, such as in aerospace. Despite a strong desire to supplant many aerospace structures with printed builds, additive manufacturing has largely remained limited to prototyping, tooling, fixtures, and non-critical components. There are numerous fundamental challenges inherent to additive processing to be addressed before this promise is realized. One ubiquitous challenge across all AM motifs is to develop processing-property relationships through precise, in situ monitoring coupled with formal methods and feedback control. We suggest a significant component of this vision is a set of semantic layers within 3D printing files relevant to the desired material specifications. This semantic layer provides the feedback laws of the control system, which then evaluates the component during processing and intelligently evolves the build parameters within boundaries defined by semantic specifications. This evaluation and correction loop requires on-the-fly coupling of finite element analysis and topology optimization. The required parameters for this analysis are all extracted from the semantic layer and can be modified in situ to satisfy the global specifications. Therefore, the representation of what is printed changes during the printing process to compensate for eventual imprecision or drift arising during the manufacturing process.

Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops

[26] R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). DOI: 10.1109/WVC.2017.00009.[9]

Abstract Every year, efficient maize production is very important to the economy of many countries. Since nutritional deficiencies in maize plants are directly reflected in their grains productivity, early detection is needed to maximize the chances of proper recovery of these plants. Traditional texture methods recently showed interesting results in the identification of nutritional deficiencies. On the other hand, deep learning techniques are increasingly outperforming hand-crafted features on many tasks. In this paper, we propose a simple transfer learning approach from pre-trained cnn models and compare their results with those from traditional texture methods in the task of nitrogen deficiency identification. We perform experiments in a real-world dataset that contains digitalized images of maize leaves at different growth stages and with different levels of nitrogen fertilization. The results show that deep learning based descriptors achieve better success rates than traditional texture methods.
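
A sketch of simple transfer learning in the spirit of the paper: a pretrained CNN is used as a fixed feature extractor and a linear classifier is trained on the extracted descriptors. The ResNet-18 backbone, preprocessing, and dataset handling are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: pretrained CNN as a fixed feature extractor for leaf images, with a
# linear classifier on top. Backbone and preprocessing are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import LinearSVC

backbone = models.resnet18(weights="IMAGENET1K_V1")   # older torchvision: pretrained=True
backbone.fc = nn.Identity()                            # drop the ImageNet head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()        # one 512-D descriptor per image

# In practice the images and nitrogen-level labels come from the digitized
# maize-leaf dataset, e.g.:
#   X_train = describe(leaf_images_train)
#   clf = LinearSVC().fit(X_train, y_train)
```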

Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans

[27] F.-C. Ghesu, B. Georgescu, Y. Zheng et al. Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Issue 1, pp. 176-189, 2017. DOI: 10.1109/TPAMI.2017.2782687.[10]

Abstract Robust and fast detection of anatomical structures is a prerequisite for both diagnostic and interventional medical image analysis. Current solutions for anatomy detection are typically based on machine learning techniques that exploit large annotated image databases in order to learn the appearance of the captured anatomy. These solutions are subject to several limitations, including the use of suboptimal feature engineering techniques and most importantly the use of computationally suboptimal search-schemes for anatomy detection. To address these issues, we propose a method that follows a new paradigm by reformulating the detection problem as a behavior learning task for an artificial agent. We couple the modeling of the anatomy appearance and the object search in a unified behavioral framework, using the capabilities of deep reinforcement learning and multi-scale image analysis. In other words, an artificial agent is trained not only to distinguish the target anatomical object from the rest of the body but also how to find the object by learning and following an optimal navigation path to the target object in the imaged volumetric space. We evaluated our approach on 1487 3D-CT volumes from 532 patients, totaling over 500,000 image slices and show that it significantly outperforms state-of-the-art solutions on detecting several anatomical structures with no failed cases from a clinical acceptance perspective, while also achieving a 20-30 percent higher detection accuracy. Most importantly, we improve the detection-speed of the reference methods by 2-3 orders of magnitude, achieving unmatched real-time performance on large 3D-CT scans.
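
The search-as-behavior idea can be caricatured with a toy tabular Q-learning agent that learns to walk through a small 3D grid toward a fixed target voxel. The paper itself trains deep networks over multi-scale CT context, so everything below is a simplified analogy of the navigation formulation, not the method.

```python
# Toy analogy: tabular Q-learning agent navigating a small 3D grid toward a
# target voxel (the real method uses deep networks over image context).
import numpy as np

rng = np.random.default_rng(3)
size, target = 8, np.array([6, 2, 5])
moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
Q = np.zeros((size, size, size, len(moves)))
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(2000):
    s = rng.integers(0, size, 3)
    for step in range(60):
        a = rng.integers(len(moves)) if rng.random() < eps else np.argmax(Q[tuple(s)])
        s2 = np.clip(s + moves[a], 0, size - 1)
        done = np.array_equal(s2, target)
        r = 1.0 if done else -0.01        # step penalty encourages short paths
        Q[tuple(s)][a] += alpha * (r + gamma * np.max(Q[tuple(s2)]) * (not done) - Q[tuple(s)][a])
        s = s2
        if done:
            break

# A greedy rollout from a corner should typically head toward the target.
s = np.array([0, 0, 0])
for _ in range(30):
    s = np.clip(s + moves[np.argmax(Q[tuple(s)])], 0, size - 1)
print("agent stopped at", s, "target", target)
```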

Co-Occurrence Filter

[28] R.J. Jevnisek, S. Avidan. Co-Occurrence Filter. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3184-3192, 2017. DOI: 10.1109/CVPR.2017.406.[11]

Abstract Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on the Bilateral Filter (BF) but instead of using a Gaussian on the range values to preserve edges it relies on a co-occurrence matrix. Pixel values that co-occur frequently in the image (i.e., inside textured regions) will have a high weight in the co-occurrence matrix. This, in turn, means that such pixel pairs will be averaged and hence smoothed, regardless of their intensity differences. On the other hand, pixel values that rarely co-occur (i.e., across texture boundaries) will have a low weight in the co-occurrence matrix. As a result, they will not be averaged and the boundary between them will be preserved. The CoF therefore extends the BF to deal with boundaries, not just edges. It learns co-occurrences directly from the image. We can achieve various filtering results by directing it to learn the co-occurrence matrix from a part of the image, or a different image. We give the definition of the filter, discuss how to use it with color images and show several use cases.
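
A naive grayscale sketch of the filter's two steps: learn a co-occurrence matrix from quantized pixel pairs inside local windows, then combine it with a spatial Gaussian as the range weight. Quantization, normalization, and window sizes are simplified relative to the paper.

```python
# Naive grayscale co-occurrence filter sketch: frequently co-occurring pixel
# values are smoothed together; rare pairs (texture boundaries) are preserved.
import numpy as np

def cooccurrence_filter(img, levels=32, radius=5, sigma_s=3.0):
    q = np.clip((img * levels).astype(int), 0, levels - 1)   # quantize to bins
    h, w = q.shape

    # 1) Learn the co-occurrence matrix from pixel pairs within local windows.
    C = np.zeros((levels, levels))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            a = q[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = q[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            np.add.at(C, (a.ravel(), b.ravel()), 1)
    C /= C.sum()
    p = C.sum(axis=1, keepdims=True)      # bin frequencies
    M = C / (p @ p.T + 1e-12)             # down-weight merely common gray levels

    # 2) Filter: spatial Gaussian times learned co-occurrence weight.
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_r = M[q[y, x], q[y0:y1, x0:x1]]
            wgt = w_s * w_r
            out[y, x] = (wgt * img[y0:y1, x0:x1]).sum() / (wgt.sum() + 1e-12)
    return out

noisy = np.random.default_rng(4).random((64, 64))
smoothed = cooccurrence_filter(noisy)
```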


2016

EY's Global 3D printing Report

[29] F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY's Global 3D printing Report. 2016.[12]

Texture based quality assessment of 3D prints for different lighting conditions

[30] J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In Proceedings of the International Conference on Computer Vision and Graphics, ICCVG (2016), 17-28. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972. Springer, Cham. doi:10.1007/978-3-319-46418-3_2[13]

Abstract In the paper the method of "blind" quality assessment of 3D prints based on texture analysis using the GLCM and chosen Haralick features is discussed. As the proposed approach has been verified using the images obtained by scanning the 3D printed plates, some dependencies related to the transparency of filaments may be noticed. Furthermore, considering the influence of lighting conditions, some other experiments have been made using the images acquired by a camera mounted on a 3D printer. Due to the influence of lighting conditions on the obtained images in comparison to the results of scanning, some modifications of the method have also been proposed leading to promising results allowing further extensions of our approach to no-reference quality assessment of 3D prints. Achieved results confirm the usefulness of the proposed approach for live monitoring of the progress of 3D printing process and the quality of 3D prints.
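
A sketch of the kind of GLCM/Haralick feature extraction the paper builds on, using scikit-image; the distances, angles, and chosen properties are generic defaults rather than the authors' exact configuration.

```python
# GLCM-based texture descriptors of a 3D-print surface image, in the spirit
# of the paper's approach. Distances, angles, and properties are examples.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

rng = np.random.default_rng(5)
surface = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)  # stand-in for a scanned print

glcm = graycomatrix(surface, distances=[1, 2, 4], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
for name, values in features.items():
    print(name, np.round(values, 3))

# These descriptors can feed a quality score or a classifier comparing prints
# captured under different lighting conditions.
```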

Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor

[31] C. Caetano, J.A. dos Santos, W.R. Schwartz. Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor. 2016 23rd International Conference on Pattern Recognition (ICPR). DOI: 10.1109/ICPR.2016.7899921.[14]

Abstract Suitable feature representation is essential for performing video analysis and understanding in applications within the smart surveillance domain. In this paper, we propose a novel spatiotemporal feature descriptor based on co-occurrence matrices computed from the optical flow magnitude and orientation. Our method, called Optical Flow Co-occurrence Matrices (OFCM), extracts a robust set of measures known as Haralick features to describe the flow patterns by measuring meaningful properties such as contrast, entropy and homogeneity of co-occurrence matrices to capture local space-time characteristics of the motion through the neighboring optical flow magnitude and orientation. We evaluate the proposed method on the action recognition problem by applying a visual recognition pipeline involving bag of local spatiotemporal features and SVM classification. The experimental results, carried on three well-known datasets (KTH, UCF Sports and HMDB51), demonstrate that OFCM outperforms the results achieved by several widely employed spatiotemporal feature descriptors such as HOF, HOG3D and MBH, indicating its suitability to be used as video representation.
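
A sketch of the descriptor's main steps with OpenCV: dense optical flow between two frames, quantization of flow magnitude and orientation, a co-occurrence matrix over the quantized flow, and a few Haralick-style statistics. Bin counts, the pixel offset, and the statistics kept are assumptions.

```python
# Sketch of an optical-flow co-occurrence descriptor (OFCM-style): dense flow,
# quantized magnitude/orientation, co-occurrence counting, Haralick-style stats.
import numpy as np
import cv2

def ofcm_descriptor(frame1, frame2, mag_bins=8, ang_bins=8, offset=(0, 1)):
    flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    q_mag = np.minimum((mag / (mag.max() + 1e-6) * mag_bins).astype(int), mag_bins - 1)
    q_ang = np.minimum((ang / (2 * np.pi) * ang_bins).astype(int), ang_bins - 1)
    q = q_mag * ang_bins + q_ang                  # joint magnitude/orientation bin
    n = mag_bins * ang_bins

    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    C = np.zeros((n, n))
    np.add.at(C, (a, b), 1)
    C /= C.sum()

    i, j = np.indices(C.shape)
    return np.array([
        (C * (i - j) ** 2).sum(),                 # contrast
        (C / (1.0 + np.abs(i - j))).sum(),        # homogeneity
        -(C[C > 0] * np.log(C[C > 0])).sum(),     # entropy
    ])

rng = np.random.default_rng(6)
f1 = rng.integers(0, 256, (120, 160), dtype=np.uint8)
f2 = np.roll(f1, 2, axis=1)                       # synthetic horizontal motion
print(ofcm_descriptor(f1, f2))
```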


2015

Holistically-Nested Edge Detection

[32] S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.164.[15]

Abstract We develop a new edge detection algorithm that addresses two critical issues in this long-standing vision problem: (1) holistic image training, and (2) multi-scale feature learning. Our proposed method, holistically-nested edge detection (HED), turns pixel-wise edge classification into image-to-image prediction by means of a deep learning model that leverages fully convolutional neural networks and deeply-supervised nets. HED automatically learns rich hierarchical representations (guided by deep supervision on side responses) that are crucially important in order to approach the human ability to resolve the challenging ambiguity in edge and object boundary detection. We significantly advance the state-of-the-art on the BSD500 dataset (ODS F-score of 0.782) and the NYU Depth dataset (ODS F-score of 0.746), and do so with an improved speed (0.4 second per image) that is orders of magnitude faster than recent CNN-based edge detection algorithms.

U-Net: Convolutional Networks for Biomedical Image Segmentation

[33] O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 pp 234-241. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham. doi:10.1007/978-3-319-24574-4_28[16]

Abstract There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
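
A compact two-level U-Net sketch in PyTorch showing the contracting/expanding structure with skip connections; the original network is deeper, uses unpadded convolutions, and relies heavily on augmentation, so the channel counts and padding here are simplifications.

```python
# Compact two-level U-Net sketch (PyTorch), keeping only the encoder/decoder
# structure with skip connections from the original architecture.
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bottom(self.pool(e2))          # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                    # per-pixel class scores

x = torch.randn(1, 1, 128, 128)
print(TinyUNet()(x).shape)                      # torch.Size([1, 2, 128, 128])
```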

Polarized 3D: High-Quality Depth Sensing with Polarization Cues

[34] A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.385.[17]

Abstract Coarse depth maps can be enhanced by using the shape information from polarization cues. We propose a framework to combine surface normals from polarization (hereafter polarization normals) with an aligned depth map. Polarization normals have not been used for depth enhancement before. This is because polarization normals suffer from physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We propose a framework to overcome these key challenges, allowing the benefits of polarization to be used to enhance depth maps. Our results demonstrate improvement with respect to state-of-the-art 3D reconstruction techniques.
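
The underlying polarization cue can be stated compactly using standard shape-from-polarization relations, written here schematically rather than quoted from the paper:

```latex
% Intensity observed through a linear polarizer at angle \phi_{pol}, where the
% phase \varphi is related to the azimuth of the surface normal:
I(\phi_{pol}) = \frac{I_{\max} + I_{\min}}{2}
              + \frac{I_{\max} - I_{\min}}{2}\,\cos\!\big(2(\phi_{pol} - \varphi)\big),
\qquad
\rho = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}} .
% The fit recovers \varphi only modulo \pi (the azimuthal ambiguity noted in
% the abstract), and the degree of polarization \rho constrains the zenith
% angle of the normal given a refractive index; the paper resolves these
% ambiguities by combining the polarization normals with a coarse depth map.
```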

MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing

[35] P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. Journal ACM Transactions on Graphics (TOG), Volume 34 Issue 4, Article No. 129, 2015. DOI: 10.1145/2766962.[18]

Abstract We have developed a multi-material 3D printing platform that is high-resolution, low-cost, and extensible. The key part of our platform is an integrated machine vision system. This system allows for self-calibration of printheads, 3D scanning, and a closed-feedback loop to enable print corrections. The integration of machine vision with 3D printing simplifies the overall platform design and enables new applications such as 3D printing over auxiliary parts. Furthermore, our platform dramatically expands the range of parts that can be 3D printed by simultaneously supporting up to 10 different materials that can interact optically and mechanically. The platform achieves a resolution of at least 40 μm by utilizing piezoelectric inkjet printheads adapted for 3D printing. The hardware is low cost (less than $7,000) since it is built exclusively from off-the-shelf components. The architecture is extensible and modular -- adding, removing, and exchanging printing modules can be done quickly. We provide a detailed analysis of the system's performance. We also demonstrate a variety of fabricated multi-material objects.

References

  1. Wohlers Report. Annual worldwide progress report in 3D Printing, 2018.
  2. U. Delli, S. Chang. Automated processes monitoring in 3D printing using supervised machine learning. Procedia Manufacturing 26 (2018) 865–870, doi.org/10.1016/j.promfg.2018.07.111
  3. L. Scime, J. Beuth. Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm. Additive Manufacturing 19 (2018) 114–126. doi.org/10.1016/j.addma.2017.11.009
  4. L. Zhong, L. Zhang. A Robust Monocular 3D Object Tracking Method Combining Statistical and Photometric Constraints. International Journal of Computer Vision, 2018. DOI: 10.1007/s11263-018-1119-x.
  5. K. Garanger, T. Khamvilai, E. Feron. 3D Printing of a Leaf Spring: A Demonstration of Closed-Loop Control in Additive Manufacturing. 2018 IEEE Conference on Control Technology and Applications (CCTA), pp. 465-470. DOI: 10.1109/CCTA.2018.8511509.
  6. B. Yuan, G.M. Guss, A.C. Wilson et al. Machine‐Learning‐Based Monitoring of Laser Powder Bed Fusion. United States, 2018. DOI:10.1002/admt.201800136.
  7. B. Wang, F. Zhong, X. Qin. Pose Optimization in Edge Distance Field for Textureless 3D Object Tracking. CGI'17 Proceedings of the Computer Graphics International Conference, Article No. 32. doi:10.1145/3095140.3095172
  8. K. Garanger, E. Feron, P-L. Garoche, J. Rimoli, J. Berrigan, M. Grover, K. Hobbs. Foundations of Intelligent Additive Manufacturing. Published in ArXiv, May 2017.
  9. R.H.M. Condori, L.M. Romualdo, O.M. Bruno, P.H.C. Luz. Comparison Between Traditional Texture Methods and Deep Learning Descriptors for Detection of Nitrogen Deficiency in Maize Crops. 2017 Workshop of Computer Vision (WVC). DOI: 10.1109/WVC.2017.00009.
  10. F.-C. Ghesu, B. Georgescu, Y. Zheng et al. Multi-Scale Deep Reinforcement Learning for Real-Time 3D-Landmark Detection in CT Scans. IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 41, Issue 1, pp. 176 - 189, 2017. DOI: 10.1109/TPAMI.2017.2782687.
  11. R.J. Jevnisek, S. Avidan. Co-Occurrence Filter. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3184-3192, 2017. DOI: 10.1109/CVPR.2017.406.
  12. F. Thewihsen, S. Karevska, A. Czok, C. Pateman-Jones, D. Krauss. EY's Global 3D printing Report, 2016.
  13. J. Fastowicz, K. Okarma. Texture based quality assessment of 3D prints for different lighting conditions. In Proceedings of the International Conference on Computer Vision and Graphics, ICCVG (2016), 17-28. In: Chmielewski L., Datta A., Kozera R., Wojciechowski K. (eds) Computer Vision and Graphics. ICCVG 2016. Lecture Notes in Computer Science, vol 9972. Springer, Cham. doi:10.1007/978-3-319-46418-3_2
  14. C. Caetano, J.A. dos Santos, W.R. Schwartz. Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor. 2016 23rd International Conference on Pattern Recognition (ICPR). DOI: 10.1109/ICPR.2016.7899921.
  15. S. Xie, Z. Tu. Holistically-Nested Edge Detection. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.164.
  16. O. Ronneberger, P. Fischer, T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. MICCAI 2015: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 pp 234-241. In: Navab N., Hornegger J., Wells W., Frangi A. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Springer, Cham. doi:10.1007/978-3-319-24574-4_28
  17. A. Kadambi, V. Taamazyan, B. Shi, R. Raskar. Polarized 3D: High-Quality Depth Sensing with Polarization Cues. 2015 IEEE International Conference on Computer Vision (ICCV). DOI: 10.1109/ICCV.2015.385.
  18. P. Sitthi-Amorn, J.E. Ramos, Y. Wang, et al. MultiFab: A Machine Vision Assisted Platform for Multi-material 3D Printing. Journal ACM Transactions on Graphics (TOG), Volume 34 Issue 4, Article No. 129, 2015. DOI: 10.1145/2766962.