Multimedia

4K Motion Detection Using Optical Flow on TySOM Kit   
Image data resolution is constantly growing across applications, and 4K UltraHD is now a standard. We demonstrate that the Aldec TySOM board, which is based on a Xilinx Zynq FPGA, can be used successfully in such applications, with the FPGA fabric accelerating functions and algorithms to achieve high performance.
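As a simplified software illustration of the underlying idea (the demo itself runs optical flow in the FPGA fabric; the frame data, block size, and search range below are hypothetical), motion between two frames can be estimated by block matching, a close cousin of the optical flow computation being accelerated:

```python
# Minimal block-matching motion estimation, a simplified stand-in for
# the dense optical flow the demo accelerates in the FPGA fabric.
# Frames are plain 2-D lists of grayscale values (hypothetical data).

def sad(prev, curr, by, bx, dy, dx, bs):
    """Sum of absolute differences between a block in `prev` at (by, bx)
    and the same-sized block in `curr` displaced by (dy, dx)."""
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(prev[by + y][bx + x] - curr[by + y + dy][bx + x + dx])
    return total

def motion_vector(prev, curr, by, bx, bs=4, search=2):
    """Exhaustively search a +/-`search` window for the displacement
    that best matches the block; returns (dy, dx)."""
    best, best_cost = None, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + bs <= len(curr)
                    and 0 <= bx + dx and bx + dx + bs <= len(curr[0])):
                continue
            cost = sad(prev, curr, by, bx, dy, dx, bs)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

# Synthetic example: a bright 4x4 patch moves one pixel right and down.
W, H = 12, 12
prev = [[0] * W for _ in range(H)]
curr = [[0] * W for _ in range(H)]
for y in range(4):
    for x in range(4):
        prev[4 + y][4 + x] = 255
        curr[5 + y][5 + x] = 255

print(motion_vector(prev, curr, 4, 4))  # the patch moved by (1, 1)
```

The exhaustive per-block search is exactly the kind of regular, data-parallel workload that maps well onto programmable logic, which is why the demo offloads it to the FPGA.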
TySOM™ EDK Demonstration Videos
4K Video Processing Using TySOM Board   
In this video, we showcase Aldec's 4K video transfer reference design using a Xilinx Zynq-based SoC FPGA device on the TySOM-3-ZU7EV board. In this demo, two TySOM-3-ZU7EV boards are connected using a high-bandwidth interface (QSFP+).
TySOM™ EDK Demonstration Videos
Addressing the Challenges of SoC Verification in Practice Using Co-Simulation   
Heterogeneous System on Chip (SoC) devices like the Xilinx Zynq 7000 and Zynq UltraScale+ MPSoC combine a high-performance processing system (PS) with state-of-the-art programmable logic (PL). This combination allows systems to be architected to provide an optimal solution, but verifying the interaction between the PS and PL presents a challenge to the design team. While each can be verified in isolation, using QEMU for the PS and Riviera-PRO for the PL, the integration between the two all too often takes place late in the design cycle, when the impact of addressing the issues raised is larger in both time and cost. There is, however, another way: co-simulation, which can be performed early in the development cycle.

This webinar explores the challenges faced by SoC users, introduces the concept of co-simulation and its constituent parts, and demonstrates advanced debugging techniques. We examine the required environment and the prerequisites needed to perform co-simulation. Detailed examples, based on a Zynq implementing a Pulse Width Modulation IP core operating under software control, then demonstrate basic and advanced debugging concepts: the basic co-simulation flow, such as waveform inspection, along with advanced aspects such as software and hardware breakpoints and single stepping. These techniques enable us to identify and debug issues that reside in both the software and the hardware design.

Co-simulation enables you to develop your application faster and to reduce bring-up time once the application hardware arrives for integration. This webinar demonstrates these and other benefits gained when co-simulation is used, along with the ease with which the environment can be established and a simulation performed.
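To give a rough feel for the device under test in the examples above, the sketch below models a PWM core under software control: the PS writes period and duty registers, and the PL turns them into a waveform. The register names and layout here are hypothetical illustrations, not the webinar's actual IP.

```python
# Tiny behavioral model of a PWM IP core under software control.
# Register names and widths are hypothetical, not the actual IP
# used in the webinar.

class PwmModel:
    def __init__(self):
        self.period = 0   # counts per PWM cycle (software-written register)
        self.duty = 0     # counts the output stays high (register)
        self._count = 0   # free-running counter inside the "PL"

    def write(self, reg, value):
        """Software (the 'PS') writes a control register."""
        setattr(self, reg, value)

    def tick(self):
        """Advance one clock; return the PWM output bit."""
        out = 1 if self._count < self.duty else 0
        self._count = (self._count + 1) % max(self.period, 1)
        return out

pwm = PwmModel()
pwm.write("period", 8)
pwm.write("duty", 2)            # 25% duty cycle
wave = [pwm.tick() for _ in range(16)]
print(wave)  # two high counts out of every eight
```

In co-simulation, the register writes come from real software running on a QEMU-modeled PS, while the counter and output live in RTL simulated by Riviera-PRO; a bug in either half shows up in the combined waveform.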
Riviera-PRO, TySOM™ EDK Recorded Webinars
Advanced RTL Debugging for Zynq SoC Designs   
Presenter: Radek Nawrot, Aldec Software Product Manager

Abstract: Designers of complex embedded applications based on Xilinx® Zynq™ devices require a high-performance RTL simulation and debugging platform. In this webinar, you will learn several advanced RTL debugging methodologies and techniques that you can employ for your block-level and system-level simulation. You will learn how to use Dataflow, Code Coverage, Xtrace and Waveform Contributors for analyzing errors in your AXI-based Zynq designs.

We welcome you to refer to the following Application Notes prior to the webinar:
Xilinx AXI-Based IP Overview
Simulating AXI BFM Examples Available in Xilinx CORE Generator
Simulating AXI-based Designs in Riviera-PRO
Performing Functional Simulation of Xilinx Zynq BFM in Riviera-PRO

Agenda
  • Embedded development flow between Xilinx Vivado™, SDK™, Riviera-PRO™ and TySOM™
  • Quick introduction to AXI
  • Running Riviera-PRO from Vivado
  • Code Coverage in the simulation process
  • Advanced Dataflow: design overview
  • Bug injection: Xtrace in action
  • Waveform Contributors: finding a bug in the code
Riviera-PRO, TySOM™ EDK Recorded Webinars
Designing FPGA-based ADAS Application - Driver Drowsiness Detection   
Advanced Driver Assistance Systems (ADAS) make a significant contribution to increasing automotive safety. They provide the driver with increased situational awareness, helping to reduce collisions and accidents, and can be categorized as providing external or internal awareness. External ADAS systems monitor aspects such as blind spots and lane detection, while internal systems monitor the occupants, and particularly the driver, as in Driver Drowsiness Detection.

Both internal and external ADAS systems rely heavily upon embedded vision. Depending on the task at hand, implementing these embedded vision systems can be computationally intensive. This computational complexity can reduce the performance of the system, introducing latency and reducing the validity of the information provided to the driver. Hardware programmable logic enables the implementation of a low-latency, high-performance system; however, industry-standard development techniques such as the use of OpenCV cannot be applied directly due to high development cost and timescales.

This webinar demonstrates how an ADAS driver drowsiness detection application can be implemented using a Zynq heterogeneous SoC, which combines programmable logic with high-performance ARM cores. The example shows how a system optimizing compiler can be used in conjunction with the Zynq to create the ADAS application using high-level languages and industry-standard frameworks. The system optimizing compiler enables seamless acceleration of C functions into the programmable logic, delivering a significant performance increase.
TySOM™ EDK Recorded Webinars
DNN Based Object Classification on TySOM Kit   
Object detection by a neural network is a hot topic nowadays in many fields, such as automotive and industrial. Accurate and fast detection and classification are required, and these requirements can be fulfilled with a system based on an FPGA-accelerated SoC chip such as the Xilinx Zynq. In this presentation we demonstrate how to utilize the Aldec TySOM board for an object detection and recognition application.
TySOM™ EDK Demonstration Videos
FPGA Accelerator for Genome Aligner - ReneGENE   
Abstract:
Aldec industry partner ReneLife introduces its proprietary core technology, ReneGENE, for fast and accurate alignment of short reads obtained from the Next Generation Sequencing (NGS) pipeline. The technology, which is devoid of heuristics, can precisely align DNA reads against a reference genome at single-nucleotide resolution. As genomics permeates the entire landscape of biology, including biomedicine and therapeutics, ReneGENE creates a genomic highway that significantly reduces the time from sample to information without compromising accuracy, which is critical for lifesaving medical care, biotechnology product development, and forensics.

In this webinar, we present AccuRA, a high-performance reconfigurable FPGA accelerator engine for ReneGENE, offered on Aldec HES-HPC™ for accurate and ultra-fast big-data mapping and alignment of DNA short reads from NGS platforms. AccuRA demonstrates a speedup of over 1500x compared to standard heuristic aligners on the market, such as BFAST running on an 8-core 3.5 GHz AMD FX™ processor with 16 GB of system memory. AccuRA employs a scalable and massively parallel computing and data pipeline, which achieves short-read mapping in minimum deterministic time. It offers full alignment coverage of the genome (millions to billions of bases long), including repeat regions and multi-read alignments. AccuRA offers a need-based, affordable solution, deployable both in the cloud and on local platforms, and scales well on the Aldec platform at multiple levels of design granularity.

Agenda:
  • Introducing the world of genomic big data computing
  • The need for accuracy and precision
  • Introducing ReneGENE/AccuRA
  • Product Demo
  • Impact of ReneGENE-The Genomic Highway
Presenter: Santhi Natarajan, Ph.D. (IISc)
Riviera-PRO, HES-DVM, TySOM™ EDK Recorded Webinars
FPGA-based Implementation of ADAS Bird's Eye View   
The computation-intensive functions of the ADAS Bird's Eye View (Surround View) application are accelerated in the FPGA to achieve 30 fps. The surrounding views of the car are stitched together to simplify object detection for the processing system. The solution includes a TySOM-3-ZU7EV board, an FMC-ADAS daughter card and four blue eagle cameras.
TySOM™ EDK Demonstration Videos
How to develop a real time human detection application on an FPGA edge device using deep learning   
The ability to process at the edge is critical to many modern applications such as smart traffic cameras, assisted/autonomous driving and vision-guided robotics. While these use cases may implement different high-level algorithms, all of them require a common foundation:
  • Interfacing with one or more high-resolution, high-bandwidth image sensors to capture images
  • Processing the captured image to convert from RAW pixel data to RGB, and onwards into a format suitable for further processing by the specific algorithm
  • Implementing convolutional neural network machine-learning inference to classify the objects of interest for the use case
  • Recording data to non-volatile memory to provide an event log/buffer of the application history for diagnosis or later analysis

This webinar demonstrates an example implementation using the Aldec TySOM-3A-ZU19EG board with the FMC-ADAS and FMC-NVMe daughter cards, on which these applications can be easily prototyped with a Xilinx Zynq UltraScale+ MPSoC. In this webinar you'll learn:
  • The benefits of using FPGAs for deep-learning object-detection inference on the edge
  • How to capture video into the FPGA using an automotive HSD camera with a 192-degree wide lens
  • How to implement a deep-learning human detection algorithm inside the FPGA and accelerate it with the Xilinx Deep Learning Processor Unit (DPU)
  • How to record/buffer the processed data at high speed over PCIe to an NVMe SSD
  • How to display the output of the system on an HDMI monitor

The webinar explores each stage of the implementation, explaining in detail the design decisions and engineering trade-offs required to create such a solution, along with the benefits of this approach for edge processing systems.
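The "convert captured pixels into a CNN-ready format" step of the pipeline can be sketched in software. The sketch below (pure Python, with hypothetical sizes; in the real design this runs in programmable logic at line rate) does a nearest-neighbour downscale and scales pixel values into the [0, 1] range an inference engine typically expects:

```python
# Sketch of the image preprocessing step: resize the captured frame to
# the network's input size and normalize pixel values. Sizes and value
# ranges are illustrative only.

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list of pixel values."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]

def normalize(img, max_val=255.0):
    """Map 0..max_val pixel values to 0.0..1.0 floats."""
    return [[p / max_val for p in row] for row in img]

# Hypothetical 4x4 "captured" frame shrunk to the 2x2 input of a toy net.
frame = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [255, 255, 0, 0],
         [255, 255, 0, 0]]
tensor = normalize(resize_nearest(frame, 2, 2))
print(tensor)  # [[0.0, 1.0], [1.0, 0.0]]
```

Per-pixel, branch-free transforms like these are a natural fit for the FPGA datapath, which is why the design keeps them in the PL rather than on the ARM cores.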
TySOM™ EDK Recorded Webinars
How to Develop High-Performance Deep Learning Applications on FPGA-based Edge Devices   
Machine Learning (ML) is becoming a fundamental part of almost any computer vision-based application on the edge, from pedestrian detection in ADAS to cancer diagnosis in medicine and quality assurance in agriculture. However, developing an optimized, high-precision machine learning application on the edge involves challenges such as selecting the right processing system and neural network. FPGAs, as edge computing units, have shown solid potential for improving the performance of ML applications. Aldec has recently developed DNN-based object detection applications on the TySOM-3A-ZU19EG embedded development board (using a Xilinx® Zynq™ MPSoC FPGA) for its customers to kick-start their ML projects. In this webinar, you will learn about the ML application development process and the tools required to simplify the design and implementation of FPGA-based machine learning applications.
TySOM™ EDK Recorded Webinars
SoC Emulation in FPGA with ARM Hardware Model   
Smart IoT devices and AI-driven autonomous machines are set to become commonplace in the near future. Most will incorporate an ARM-based System-on-Chip (SoC) that has both hardware and software components, which should be verified together. Often, software is verified separately from hardware, either due to the lack of an accurate ARM hardware model or due to simulation bottlenecks. An emulation platform combining reconfigurable FPGA logic with ARM processors bridges the gap. However, while FPGA vendors offer mixed-technology platforms, like the Xilinx Zynq, these contain insufficient FPGA logic elements to implement contemporary SoC designs. In this webinar we demonstrate how to connect a Xilinx Zynq MPSoC and its ARM Cortex-A53/R5 processors with the largest Xilinx UltraScale FPGA. During a live demo, Aldec's HES-DVM emulation platform is connected to a TySOM-3 board, and we show how to combine the ARM cores from the Zynq with the rest of the SoC implemented in the UltraScale FPGA, utilizing AMBA AXI and an MGT-based chip-to-chip interconnect. Finally, we demonstrate the hardware and software debugging capabilities of such a hybrid emulation platform.
HES-DVM, HES™ Boards, TySOM™ EDK Recorded Webinars
Solving a Sudoku Game with BinCNN on TySOM Board   
This demo utilizes the BinCNN machine learning algorithm to solve a Sudoku game. A USB camera and a TySOM-3-ZU7EV board are used in this project. BinCNN is a neural network trained to recognize the digits 1 to 9. The ARM processor grabs images from the camera and preprocesses them by resizing, filtering, and distinguishing the empty from the non-empty boxes.
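The empty/non-empty test described above can be illustrated in a few lines: an empty Sudoku cell is nearly uniform, while a cell containing a digit has noticeable pixel variance. The threshold and cell data below are hypothetical, chosen only to show the idea:

```python
# Toy version of the "distinguish empty from non-empty boxes" step the
# ARM core performs before handing digits to the BinCNN. The threshold
# and cell data are hypothetical.

def cell_variance(cell):
    """Variance of the pixel values in one Sudoku cell."""
    pixels = [p for row in cell for p in row]
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def is_empty(cell, threshold=100.0):
    """An almost-uniform cell is treated as empty."""
    return cell_variance(cell) < threshold

empty_cell = [[250, 252], [251, 249]]   # near-uniform background
digit_cell = [[250, 10], [12, 248]]     # dark digit strokes on background
print(is_empty(empty_cell), is_empty(digit_cell))  # True False
```

Filtering out empty cells this cheaply means the neural network is only invoked on cells that actually contain a digit, saving inference time.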
TySOM™ EDK Demonstration Videos
Unboxing the TySOM-1 Embedded Development Kit   
The TySOM-1 board, from the TySOM™ product line, is an embedded development platform from Aldec based on the Zynq XC7Z030. Possible applications include IoT, multimedia, automotive, UAV/UAS and factory automation.
TySOM™ EDK Demonstration Videos
Unboxing the TySOM-2 Embedded Development Kit   
The TySOM-2 board, from the TySOM™ product line, is an embedded development platform based on the Zynq XC7Z045. Possible applications include IoT, multimedia, automotive, UAV/UAS and factory automation.
TySOM™ EDK Demonstration Videos
Unboxing the TySOM-3 Embedded Development Kit   
At the center of the TySOM-3 Embedded Development Kit is the TySOM-3 board, which features the Zynq UltraScale+ EV MPSoC. With one of the largest FPGAs in the UltraScale+ family, a built-in video codec supporting H.264/H.265 encoding/decoding, and a multi-core ARM processing system, this board has the power to provide a prototyping solution for almost any application, and is especially suitable for embedded vision. In this video we'll see everything included in the TySOM-3 EDK, as well as an overview of the board's features.
TySOM™ EDK Demonstration Videos
Unboxing TySOM-3A-ZU19EG Embedded Development Board   
In this video, Farhad Fallah unboxes Aldec's TySOM-3A-ZU19EG embedded development board. The TySOM-3A-ZU19EG is a compact SoC prototyping board featuring a Zynq® UltraScale+™ MPSoC device, which provides 64-bit processor scalability while combining real-time control with soft and hard engines for SoC prototyping, IP verification, graphics, video, packet processing, and early software development. The main advantage of the Xilinx Zynq UltraScale+ ZU19EG-FFVB1517 MPSoC over other Zynq MPSoC devices is that it carries the largest FPGA in the Zynq® UltraScale+™ MPSoC family, with over a million logic cells. It includes a quad-core ARM® Cortex-A53 platform running at up to 1.5 GHz. Combined with dual-core Cortex-R5 real-time processors, a Mali-400 MP2 graphics processing unit, and 16nm FinFET+ programmable logic, EG devices have the specialized processing elements needed to excel in next-generation wired and 5G wireless infrastructure, cloud computing, and aerospace and defense applications.
TySOM™ EDK Demonstration Videos
Xilinx Zynq based Development Platform for ADAS   
Aldec introduces a Xilinx Zynq-based development platform for Advanced Driver Assistance Systems (ADAS), implemented on the TySOM-2 embedded development board and the FMC-ADAS extension card, both of which are available through Aldec. The solution provides a 360° view of the car's driving surroundings while simultaneously providing real-time edge detection. The TySOM-2 utilizes a Zynq 7Z045 SoC FPGA, which allows the image processing to be accelerated on the FPGA.
TySOM™ EDK Demonstration Videos
Xilinx Zynq based Development Platform for IoT Gateway   
In this video Aldec presents the TySOM-1 as a solution for your Internet of Things (IoT) gateway. As the gateway, the TySOM-1 connects three separate sensors over Bluetooth and Z-Wave, providing temperature, humidity, and illuminance data to an easy-to-use webpage hosted by the board. Data is also sent via Wi-Fi to the TySOM Control mobile app, which can be used to control USB lights at the click of a button. Using the board's Zynq 7Z030 SoC, the encryption and decryption of data between the TySOM-1 and the mobile app are hardware-accelerated through an AES module implemented in the Zynq's FPGA.
TySOM™ EDK Demonstration Videos