Tachyon (software)
Original author(s) | John E. Stone |
---|---|
Written in | C |
Type | Ray tracing/3D rendering software |
Website | jedi |
Tachyon is a parallel/multiprocessor ray tracing library for use on distributed memory parallel computers, shared memory computers, and clusters of workstations. Tachyon implements rendering features such as ambient occlusion lighting, depth-of-field focal blur, shadows, and reflections. It was originally developed for the Intel iPSC/860 by John Stone for his M.S. thesis at the University of Missouri-Rolla.[1] Tachyon subsequently grew into a more functional and complete ray tracing engine, and it is now incorporated into a number of other open source software packages such as VMD and SageMath. Tachyon is released under a permissive license (included in the tarball).
Evolution and Features
Tachyon was originally developed for the Intel iPSC/860, a distributed memory parallel computer with a hypercube interconnect topology, built around the Intel i860, an early RISC CPU with VLIW features. Tachyon was originally written using Intel's proprietary NX message passing interface for the iPSC series, but it was ported to the earliest versions of MPI shortly thereafter in 1995. Tachyon was then adapted to the Intel Paragon platform using the Paragon XP/S 150 MP at Oak Ridge National Laboratory. The ORNL XP/S 150 MP was the first platform supported by Tachyon that combined large-scale distributed memory message passing among nodes with shared memory multithreading within nodes. Adaptation of Tachyon to a variety of conventional Unix-based workstation platforms and early clusters followed, including a port to the IBM SP2. Tachyon was incorporated into the PARAFLOW CFD code to allow in-situ volume visualization of supersonic combustor flows computed on the Paragon XP/S at NASA Langley Research Center, providing a significant performance gain over the conventional post-processing visualization approaches that had been used previously.[2]

Beginning in 1999, support for Tachyon was incorporated into the molecular graphics program VMD, beginning an ongoing period of co-development in which many new Tachyon features were added specifically for molecular graphics. Tachyon was used to render the winning image in the illustration category of the NSF 2004 Visualization Challenge.[3] In 2007, Tachyon added support for ambient occlusion lighting, one of the features that made it increasingly popular for molecular visualization in conjunction with VMD. VMD and Tachyon were gradually adapted to support routine visualization and analysis tasks on clusters, and later on large petascale supercomputers. Tachyon was used to produce figures, movies, and the Nature cover image of the atomic structure of the HIV-1 capsid solved by Zhao et al. in 2013, using the Blue Waters petascale supercomputer at NCSA, University of Illinois.[4][5]
Use in Parallel Computing Demonstrations, Training, and Benchmarking
Owing in part to its portability across a diverse range of platforms, Tachyon has been used as a test case in a variety of parallel computing and compiler research articles.
In 1999, John Stone assisted Bill Magro with adaptation of Tachyon to support early versions of the OpenMP directive-based parallel computing standard, using Kuck and Associates' KCC compiler. Tachyon was shown as a demo performing interactive ray tracing on DEC Alpha workstations using KCC and OpenMP.
In 2000, Intel acquired Kuck and Associates, Inc.,[6] and Tachyon continued to be used as an OpenMP demonstration. Intel later used Tachyon to develop a variety of programming examples for its Threading Building Blocks (TBB) parallel programming system, and an older version of the program remains included as a TBB example to the present day.[7][8]
In 2006, Tachyon was selected by the SPEC High-Performance Group (HPG) for inclusion in the SPEC MPI 2007 benchmark suite.[9][10]
Beyond its typical use as a tool for rendering high quality images, and likely due to its portability and inclusion in SPEC MPI 2007, Tachyon has also been used as a test case and point of comparison in a variety of research projects related to parallel rendering and visualization,[11][12][13][14][15][16][17][18][19][20] cloud computing,[21][22][23][24][25] parallel computing,[26][27][28] compilers,[29][30][31][32] runtime systems,[33][34] computer architecture,[35][36][37] performance analysis tools,[38][39][40] and the energy efficiency of HPC systems.[41][42][43]
External links
Wikimedia Commons has media related to Tachyon (software).
- Tachyon Parallel/Multiprocessor Ray Tracing System website
- Tachyon ray tracer (built into VMD)
- John Stone's M.S. thesis describing the earliest versions of Tachyon
References
- ↑ Stone, John E. "An Efficient Library for Parallel Ray Tracing and Animation". M.S. thesis, Computer Science Dept., University of Missouri-Rolla, April 1998.
- ↑ Stone, J.; Underwood, M. (1996-07-01). "Rendering of numerical flow simulations using MPI". MPI Developer's Conference, 1996. Proceedings., Second: 138–141. doi:10.1109/MPIDC.1996.534105.
- ↑ "Water Permeation Through Aquaporins.". Emad Tajkhorshid, Klaus Schulten, Theoretical and Computational Biophysics Group, University of Illinois at Urbana-Champaign.
- ↑ Zhao, Gongpu; Perilla, Juan R.; Yufenyuy, Ernest L.; Meng, Xin; Chen, Bo; Ning, Jiying; Ahn, Jinwoo; Gronenborn, Angela M.; Schulten, Klaus (2013). "Mature HIV-1 capsid structure by cryo-electron microscopy and all-atom molecular dynamics". Nature. 497 (7451): 643–646. doi:10.1038/nature12162. PMC 3729984. PMID 23719463.
- ↑ Stone, J.E.; Isralewitz, B.; Schulten, K. (2013-08-01). "Early experiences scaling VMD molecular visualization and analysis jobs on blue waters". Extreme Scaling Workshop (XSW), 2013: 43–50. doi:10.1109/XSW.2013.10.
- ↑ "Intel To Acquire Kuck & Associates. Acquisition Expands Intel's Capabilities in Software Development Tools for Multiprocessor Computing". Retrieved January 30, 2016.
- ↑ "Intel® Threading Building Blocks (Intel® TBB)". Retrieved January 30, 2016.
- ↑ "Parallel for -Tachyon". Intel Corporation. Retrieved January 30, 2016.
- ↑ "122.tachyon SPEC MPI2007 Benchmark Description". Retrieved January 30, 2016.
- ↑ "SPEC MPI2007—an application benchmark suite for parallel systems using MPI". Concurrency Computat.: Pract. Exper., 22: 191–205. doi:10.1002/cpe.1535.
- ↑ Rosenberg, Robert O.; Lanzagorta, Marco O.; Chtchelkanova, Almadena; Khokhlov, Alexei (2000-01-01). "Parallel visualization of large data sets". 3960: 135–143. doi:10.1117/12.378889.
- ↑ Lawlor, Orion Sky. "Impostors for Parallel Interactive Computer Graphics" (PDF). M.S. thesis, University of Illinois at Urbana-Champaign, 2001. Retrieved January 30, 2016.
- ↑ Lawlor, Orion Sky; Page, Matthew; Genetti, Jon. "MPIglut: Powerwall Programming Made Easier" (2008) (PDF). Retrieved January 30, 2016.
- ↑ McGuigan, Michael (2008-01-09). "Toward the Graphics Turing Scale on a Blue Gene Supercomputer". arXiv:0801.1500.
- ↑ Lawlor, Orion Sky; Genetti, Jon. "Interactive Volume Rendering Aurora on the GPU" (2011) (PDF).
- ↑ Grottel, S.; Krone, M.; Scharnowski, K.; Ertl, T. (2012-02-01). "Object-space ambient occlusion for molecular dynamics". Visualization Symposium (PacificVis), 2012 IEEE Pacific: 209–216. doi:10.1109/PacificVis.2012.6183593.
- ↑ Stone, J.E.; Isralewitz, B.; Schulten, K. (2013-08-01). "Early experiences scaling VMD molecular visualization and analysis jobs on blue waters". Extreme Scaling Workshop (XSW), 2013: 43–50. doi:10.1109/XSW.2013.10.
- ↑ Stone, John E.; Vandivort, Kirby L.; Schulten, Klaus (2013-01-01). "GPU-accelerated Molecular Visualization on Petascale Supercomputing Platforms". Proceedings of the 8th International Workshop on Ultrascale Visualization. UltraVis '13. New York, NY, USA: ACM: 6:1–6:8. doi:10.1145/2535571.2535595. ISBN 9781450325004.
- ↑ Sener, Melih; et al. "Visualization of Energy Conversion Processes in a Light Harvesting Organelle at Atomic Detail" (PDF). Retrieved January 30, 2016.
- ↑ Khadka, Prashant; Zhuang, Yu; Lourderaj, Upakarasamy; Hase, William L. "A Grid-based Cyber infrastructure for High Performance Chemical Dynamics Simulations" (PDF).
- ↑ Patchin, Philip; Lagar-Cavilla, H. Andrés; de Lara, Eyal; Brudno, Michael (2009-01-01). "Adding the Easy Button to the Cloud with SnowFlock and MPI". Proceedings of the 3rd ACM Workshop on System-level Virtualization for High Performance Computing. HPCVirt '09. New York, NY, USA: ACM: 1–8. doi:10.1145/1519138.1519139. ISBN 9781605584652.
- ↑ Neill, R.; Carloni, L.P.; Shabarshin, A.; Sigaev, V.; Tcherepanov, S. (2011-09-01). "Embedded Processor Virtualization for Broadband Grid Computing". 2011 12th IEEE/ACM International Conference on Grid Computing (GRID): 145–156. doi:10.1109/Grid.2011.27.
- ↑ Franz, Daniel; Tao, Jie; Marten, Holger; Streit, Achim. "A Workflow Engine for Computing Clouds". CLOUD COMPUTING 2011: The Second International Conference on Cloud Computing, GRIDs, and Virtualization. Retrieved January 30, 2016.
- ↑ Tao, Jie; et al. "An Implementation Approach for Inter-Cloud Service Combination". International Journal on Advances in Software. 5 (1&2) (2012): 65–75 (PDF).
- ↑ "Heterogeneous Cloud Systems Based on Broadband Embedded Computing - Academic Commons". doi:10.7916/d8hh6jg1.
- ↑ Manjikian, Naraig. "Exploring Multiprocessor Design and Implementation Issues with In-Class Demonstrations". Proceedings of the Canadian Engineering Education Association (2010). Retrieved January 30, 2016.
- ↑ Kim, Wooyoung; Voss, M. (2011-01-01). "Multicore Desktop Programming with Intel Threading Building Blocks". IEEE Software. 28 (1): 23–31. doi:10.1109/MS.2011.12. ISSN 0740-7459.
- ↑ Tchiboukdjian, M.; Carribault, P.; Perache, M. (2012-05-01). "Hierarchical Local Storage: Exploiting Flexible User-Data Sharing Between MPI Tasks". Parallel Distributed Processing Symposium (IPDPS), 2012 IEEE 26th International: 366–377. doi:10.1109/IPDPS.2012.42.
- ↑ Ghodrat, Mohammad Ali; Givargis, Tony; Nicolau, Alex (2008-01-01). "Control Flow Optimization in Loops Using Interval Analysis". Proceedings of the 2008 International Conference on Compilers, Architectures and Synthesis for Embedded Systems. CASES '08. New York, NY, USA: ACM: 157–166. doi:10.1145/1450095.1450120. ISBN 9781605584690.
- ↑ Guerin, Xavier. An Efficient Embedded Software Development Approach for Multiprocessor System-on-Chips. Diss., Institut National Polytechnique de Grenoble (INPG), 2010. Retrieved January 30, 2016.
- ↑ Milanez, Teo; Collange, Sylvain; Quintão Pereira, Fernando Magno; Meira Jr., Wagner; Ferreira, Renato (2014-10-01). "Thread scheduling and memory coalescing for dynamic vectorization of SPMD workloads". Parallel Computing. 40 (9): 548–558. doi:10.1016/j.parco.2014.03.006.
- ↑ Ojha, Davendar Kumar; Sikka, Geeta (2014-01-01). Satapathy, Suresh Chandra; Avadhani, P. S.; Udgata, Siba K.; Lakshminarayana, Sadasivuni, eds. A Study on Vectorization Methods for Multicore SIMD Architecture Provided by Compilers. Advances in Intelligent Systems and Computing. Springer International Publishing. pp. 723–728. doi:10.1007/978-3-319-03107-1_79. ISBN 9783319031064.
- ↑ Kang, Mikyung; Kang, Dong-In; Lee, Seungwon; Lee, Jaedon (2013-01-01). "A System Framework and API for Run-time Adaptable Parallel Software". Proceedings of the 2013 Research in Adaptive and Convergent Systems. RACS '13. New York, NY, USA: ACM: 51–56. doi:10.1145/2513228.2513239. ISBN 9781450323482.
- ↑ Biswas, S.; de Supinski, B.R.; Schulz, M.; Franklin, D.; Sherwood, T.; Chong, F.T. (2011-05-01). "Exploiting Data Similarity to Reduce Memory Footprints". Parallel Distributed Processing Symposium (IPDPS), 2011 IEEE International: 152–163. doi:10.1109/IPDPS.2011.24.
- ↑ Li, Man-Lap; Sasanka, R.; Adve, S.V.; Chen, Yen-Kuang; Debes, E. (2005-10-01). "The ALPBench benchmark suite for complex multimedia applications". Workload Characterization Symposium, 2005. Proceedings of the IEEE International: 34–45. doi:10.1109/IISWC.2005.1525999.
- ↑ Zhang, Jiaqi; Chen, Wenguang; Tian, X.; Zheng, Weimin (2008-12-01). "Exploring the Emerging Applications for Transactional Memory". Ninth International Conference on Parallel and Distributed Computing, Applications and Technologies, 2008. PDCAT 2008: 474–480. doi:10.1109/PDCAT.2008.77.
- ↑ Almaless, Ghassan; Wajsburt, Franck. "On the scalability of image and signal processing parallel applications on emerging cc-NUMA many-cores". Design and Architectures for Signal and Image Processing (DASIP), 2012 Conference on. IEEE, 2012 (PDF).
- ↑ Szebenyi, Z.; Wolf, F.; Wylie, B.J.N. (2011-05-01). "Performance Analysis of Long-Running Applications". 2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and Phd Forum (IPDPSW): 2105–2108. doi:10.1109/IPDPS.2011.388.
- ↑ Szebenyi, Zoltán; Wylie, Brian J. N.; Wolf, Felix (2008-06-27). Kounev, Samuel; Gorton, Ian; Sachs, Kai, eds. SCALASCA Parallel Performance Analyses of SPEC MPI2007 Applications. Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 99–123. doi:10.1007/978-3-540-69814-2_8. ISBN 9783540698135.
- ↑ Wagner, M.; Knüpfer, A.; Nagel, W.E. (2013-10-01). "Hierarchical Memory Buffering Techniques for an In-Memory Event Tracing Extension to the Open Trace Format 2". 2013 42nd International Conference on Parallel Processing (ICPP): 970–976. doi:10.1109/ICPP.2013.115.
- ↑ Kim, Wonyoung; Gupta, M.S.; Wei, Gu-Yeon; Brooks, D. (2008-02-01). "System level analysis of fast, per-core DVFS using on-chip switching regulators". IEEE 14th International Symposium on High Performance Computer Architecture, 2008. HPCA 2008: 123–134. doi:10.1109/HPCA.2008.4658633.
- ↑ Hackenberg, Daniel; Schöne, Robert; Molka, Daniel; Müller, Matthias S.; Knüpfer, Andreas (2010-07-27). "Quantifying power consumption variations of HPC systems using SPEC MPI benchmarks". Computer Science - Research and Development. 25 (3-4): 155–163. doi:10.1007/s00450-010-0118-0. ISSN 1865-2034.
- ↑ Ioannou, N.; Kauschke, M.; Gries, M.; Cintra, M. (2011-10-01). "Phase-Based Application-Driven Hierarchical Power Management on the Single-chip Cloud Computer". 2011 International Conference on Parallel Architectures and Compilation Techniques (PACT): 131–142. doi:10.1109/PACT.2011.19.