Mar. 10, 2025
Welcome to the Digital Signal Processing chapter of the Ultimate Guide To High End Immersive Audio. The main table of contents can be viewed here.
Digital signal processing, or DSP, is a proverbial four letter word in many audiophile circles. DSP means many things to many people, and is often an undefinable scapegoat on which questionable sound quality is pinned. Back in the day there were good reasons for this distaste for DSP. The first implementations were technically interesting, but sounded terrible. Today, DSP is used in every digital playback system, and is even used to create the infamous but great sounding Mobile Fidelity Ultradisc One-Step series of vinyl reissues. This chapter of the Ultimate Guide To High End Immersive Audio scratches the surface of the cavernous topic of digital signal processing, focusing on two areas of importance: decoding immersive music and digital room correction.
Definitions
Decoding - Nearly all immersive music is encoded in a proprietary format requiring decoding by the listener's audio system. Discrete Immersive content is the only music that doesn't require decoding because it's delivered as ten or twelve channel WAV files at 24 bit / 352.8 kHz. Most other music is encoded in a Dolby format such as Dolby Digital Plus or TrueHD Dolby Atmos. Auro 3D, Sony 360 Reality Audio, and IAMF (Immersive Audio Model and Formats) are also available, but currently extremely limited in distribution and market acceptance.
The decoding process involves not only unpacking a digital audio stream, but also rendering audio to the correct channels, and the correct number of channels. An encoded Atmos file can be played on systems with anywhere from two to sixteen channels. The decoding system is told how many channels, and in which configuration, it should render the audio for playback.
Proprietary formats are often viewed skeptically by audiophiles who've used FLAC for decades. However, formats such as those from Dolby don't really have an open source or free alternative that can match their market penetration and feature set.
Digital Room Correction - Another sensitive topic in the audiophile world, even though it really shouldn't be. Everyone should at least try state of the art digital room correction in their own systems because it's that good. DRC is a massively confusing topic for all but the most nerdy audiophiles. For this chapter, the most easily digestible DRC concepts are time and frequency correction. Time correction ensures that the direct sound from each loudspeaker arrives at the listening position at the same time, while frequency correction smooths out the peaks and dips caused by one's listening room (too much or too little bass, for example).
Within the world of digital room correction there are countless main topics, sub-topics, and differing opinions. This guide attempts to cover some broad areas and provide listeners actionable information they can use to audition the results of different DRC concepts by listening to different products or working with an expert in DRC.
Why It's Required
As an audiophile I like to think I can 'will' my musical playback into perfection with the straight wire with gain philosophy, but that's a fool's errand. A middle ground approach, involving the manual adjustment of time and frequency parameters, is also more likely to produce dubious results, but at least provides endless hours of DIY fiddling / entertainment for those so inclined. Don't get me wrong, I have the utmost respect for those who roll up their sleeves and white knuckle DSP, and I have no doubt they are satisfied with the results, but the level of accuracy achieved by a human can't match that of a machine. Enabling a machine to handle the tough parts and using human subjective evaluations for the final touches results in a state of the art listening experience of which our audiophile forefathers could have only dreamt.
The focus for a long time in this hobby was bit perfection. Playing an album as perfectly as possible was a laudable goal in the early days of computer audio, when many apps mangled our music before our DACs even had a chance to convert the bits to audio. Now, with playback apps more under control and state of the art DSP, we can focus on audio that's 'bit perfect' at our ears.
Using digital room correction in the time domain is absolutely required unless one's listening position is equidistant from every loudspeaker. It takes a very special room to accommodate such a setup. This is typically only seen in audio laboratory settings or certified ITU/EBU control rooms. Associated with the timing adjustments is the volume level because a loudspeaker that's closer to the listening position may be louder than those further away and may have different sensitivity characteristics than the 'main' front speakers.
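As a rough illustration of the time alignment involved, here is a minimal sketch assuming a speed of sound of 343 m/s and made-up speaker distances; the idea is simply to delay the closer speakers so every arrival lines up with the farthest one.

```python
# Sketch: per-speaker delay needed to time-align arrivals at the listening
# position. The distances are hypothetical examples, not a real room.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

distances_m = {"front_left": 3.2, "center": 2.9, "front_right": 3.2,
               "surround_left": 2.1, "surround_right": 2.2, "subwoofer": 2.5}

farthest = max(distances_m.values())
for speaker, d in distances_m.items():
    delay_ms = (farthest - d) / SPEED_OF_SOUND * 1000.0
    print(f"{speaker:>15}: add {delay_ms:5.2f} ms of delay")
```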
Digitally correcting for frequency issues should be done after one attempts to physically adjust the listening room, using absorption, diffusion, and preferably normal human items such as plants, furniture, etc. No matter how a room is designed, the laws of physics will overpower the will of even the most dedicated anti-DSP listener. Bass issues, with their very long sound waves, can foil all but an anechoic chamber's worth of absorptive material.
Last, the decoding aspect of DSP is required if one wants to hear all the channels of an immersive album. Without a TrueHD Dolby Atmos decoder, one can't hear the entire album as it was designed to be heard. Listeners may get a portion of the channels and a portion of the music via some other means, but not the true immersive experience.
Decoding and Room Correction Options
The reason both decoding and digital room correction are included in the same DSP chapter is that they are linked in most audio systems. Splitting the decoding from DRC will gain more traction as new devices hit the market that enable decoded audio to be output to a number of other devices as pure PCM audio, but currently these devices are few and far between (Arvus offers two, and another manufacturer will offer one soon). An example of this decoding and DSP link can be seen when using a traditional processor (Trinnov, Marantz, Anthem, etc.). If one decodes an immersive audio signal into twelve channels prior to the processor's input (HDMI or other), the processor can't handle a decoded PCM signal with that many channels. Thus, the decoding must take place within the processor, if one wants to use the processor's room correction.
It is possible to decode immersive audio using an Arvus H1-D and output the audio via Ethernet or the Arvus H2-4D and output via AES or Ethernet, unlinking the decoding from room correction, as long as one has a device capable of accepting a high channel count AES or Ethernet signal and running the proper room correction.
In Simple Terms
Here are three ways of decoding and running digital room correction for immersive music playback, in simple terms and in no specific order.
Computer - This is how I do everything because it's the only way to obtain true state of the art playback at the highest of audiophile capabilities. Please understand that the other methods are also great, otherwise I wouldn't mention them, but just like in sports, only one team / method can be the best with respect to sound quality. However, there are also drawbacks to using a computer. For one, it's a computer. It'll have issues. There's no way around that fact. Fortunately I'm capable of handling any of the issues that come up, but I understand not everyone cares to deal with them, even if they are tech savvy.
Using a computer to decode immersive audio can be done using the macOS operating system when playing from Apple Music, as the Dolby Digital Plus decoder is built-in. Decoding Dolby TrueHD Atmos and Auro 3D is more difficult. Auro offers a VST plugin that I've used to decode Auro 3D music through JRiver Media Center, but this plugin hasn't been updated to work on Macs with Apple Silicon. In other words, the Auro plugin doesn't work on any Mac sold in stores today. It does work on Windows and Intel based Macs for roughly $20 per month.
Decoding TrueHD Dolby Atmos on a computer, for content that's sold as MKV files or ripped from Blu-ray, is done either in real time or offline mode using a combination of apps. This approach requires a bit of extra work, but results in decoded WAV files capable of being played with any app that supports the requisite number of channels (JRiver, Audirvana, etc.).
The easiest way to decode TrueHD Dolby Atmos is to do it offline. Using the application named Music Media Helper and the Dolby Reference Player, MKV files downloaded or ripped from Blu-ray can be converted into any supported Atmos channel configuration (5.1.2, 5.1.4, 7.1.4, 9.1.6, etc.) as WAV files. FLAC will never support more than eight channels without embedded / encoded data, and WAV works well enough anyway. Once the files are decoded, the listener is free to use state of the art digital room correction.
There are countless ways to do this, but I will explain what I believe is the absolute best. At a high level, using a good mic preamp with an Earthworks M30 microphone or better, and Audiolense on a Windows PC (only runs on Windows currently) to measure and create the room correction filters, is the best. Period. I recommend hiring Mitch Barnett to walk you through the measurement process and create filters for you, unless you're a glutton for punishment.
The current state of the art in room correction begins with the Audiolense application. To my knowledge, and I will happily include corrections if notified, no other application that runs on a computer or in a traditional processor, is as powerful and capable as Audiolense. As a real world example of this superiority, Audiolense features digital crossovers with bass offloading that's totally configurable for each loudspeaker. This means speakers with limited frequency ranges can have the bass offloaded to a subwoofer, while full range speakers in the same system can reproduce audio to the limits of their capabilities as well. In practice, a listener playing Tsuyoshi Yamamoto's album A Shade of Blue, with Hiroshi Kagawa's double bass emanating from the center channel, can have the very bottom end of the frequency range of that bass offloaded to a subwoofer, if the center channel can't reproduce the aforementioned frequencies. Without this capability, the bass is sent to the center channel and not reproduced in the audio system. Another less than optimal way would have all the bass for all channels sent to the subwoofer, but then the front left and right channels wouldn't reproduce Hiroshi Kagawa's bass as they should because they can often reach down to 20 Hz.
Using a computer for room correction also enables one to use incredibly powerful FIR filters created by Audiolense. I use Accurate Sound's Hang Loose Convolver to host these filters as it works better than any native in-app convolution engine. A real world example of these powerful filters can be seen using simple math.
It starts with 65,536 tap FIR filters. This alone is well beyond the capabilities of traditional processors. As one listens to higher sample rates, the filter can be upsampled to several hundred thousand or over one million taps automatically. This ensures the frequency resolution of the FIR filter stays the same when the sample rate increases and is a distinction with a major difference.
Frequency resolution = fs / N where fs is the sample rate and N is the number of filter taps.
A 65,536 tap FIR filter at 48 kHz (Atmos is currently all released at 48 kHz) has a frequency resolution of 48,000 / 65,536 = 0.732 Hz.
The frequency range spans 0 Hz to 24 kHz. Thinking of an FIR filter as a graphic equalizer: 24,000 / 0.732 = 32,768 sliders for an FIR equalizer. This real world FIR example has over 1,000 times the frequency resolution of a 31-band 1/3 octave equalizer. In addition, a rough rule of thumb is that the effective low frequency limit of the filter is three times the frequency resolution, which is 3 x 0.732 Hz = 2.2 Hz. A 65,536 tap FIR filter running on a computer can control frequencies down to 2.2 Hz.
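For anyone who wants to check that arithmetic, here is the same math as a small Python sketch, using the tap counts and sample rates quoted above:

```python
# Frequency resolution of an FIR filter: fs / N (Hz per "slider").
def fir_resolution(fs_hz: float, taps: int) -> float:
    return fs_hz / taps

fs = 48_000.0
taps = 65_536

res = fir_resolution(fs, taps)        # ~0.732 Hz
sliders = int((fs / 2) / res)         # ~32,768 adjustable bins from 0 Hz to 24 kHz
low_limit = 3 * res                   # rule-of-thumb low frequency control limit, ~2.2 Hz
print(f"resolution {res:.3f} Hz, sliders {sliders:,}, low limit {low_limit:.1f} Hz")

# To keep the same resolution at higher sample rates, the tap count scales with fs.
for fs_hi in (96_000.0, 192_000.0, 352_800.0):
    print(f"{fs_hi / 1000:.1f} kHz needs ~{int(fs_hi / res):,} taps for {res:.3f} Hz resolution")
```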
Notice I've been talking about FIR (finite impulse response) filters. These are phase linear and processor/memory intensive. Traditional hardware processors can't use FIR filters for the lowest frequencies, because they lack hardware DSP processing power, and often use less precise IIR (infinite impulse response) filters in combination with FIR filters to cover the full range. IIR filters are frequently less stable and suffer from unequal delays at different frequencies. More information about the difference between IIR and FIR filters can be seen here (link).
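To make the point about unequal delays concrete, here is a small sketch using generic scipy filters (not any product's actual implementation) that compares the group delay of a linear phase FIR low-pass with a fourth order Butterworth IIR at the same cutoff:

```python
import numpy as np
from scipy import signal

fs = 48_000.0
cutoff_hz = 200.0

# Evaluate group delay only in the passband (20-150 Hz) to keep the numbers clean.
w = np.linspace(20.0, 150.0, 256) * 2 * np.pi / fs   # frequencies in rad/sample

# Linear phase FIR low-pass (a small tap count, purely for illustration)
fir_taps = signal.firwin(1001, cutoff_hz / (fs / 2))
_, gd_fir = signal.group_delay((fir_taps, [1.0]), w=w)

# Fourth order Butterworth IIR low-pass at the same cutoff
b, a = signal.butter(4, cutoff_hz / (fs / 2))
_, gd_iir = signal.group_delay((b, a), w=w)

print(f"FIR group delay: {gd_fir.min():.0f} to {gd_fir.max():.0f} samples (constant)")
print(f"IIR group delay: {gd_iir.min():.0f} to {gd_iir.max():.0f} samples (varies with frequency)")
```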
There is no free lunch with 65k tap FIR filters or such powerful DSP in general. Using a computer can either be a pro or a con depending on the user and situation. In addition, high tap count filters increase latency. This is a non-issue for music only listeners, but can be an issue for those watching movies. Sophisticated applications such as JRiver Media Center offer latency compensation that works in conjunction with Hang Loose Convolver's VST plugin. HLC reports the latency to JRMC, and JRMC compensates for this during video playback, removing lip-sync issues. For my music only system this isn't an issue at all. Alternatively, one can use minimum phase FIR filters, which still have the power to control the bass frequencies at the expense of giving up the time domain correction. But a minimum phase FIR filter has zero latency, so it will work with Apple TV, YouTube, or Netflix through standalone convolution (example).
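The latency penalty itself is easy to estimate: a linear phase FIR delays the signal by roughly half its length. A quick sketch using the figures above:

```python
# Approximate latency of a linear phase FIR filter: (taps - 1) / 2 samples.
def fir_latency_ms(taps: int, fs_hz: float) -> float:
    return (taps - 1) / 2 / fs_hz * 1000.0

print(fir_latency_ms(65_536, 48_000))    # ~683 ms at 48 kHz
print(fir_latency_ms(131_072, 96_000))   # still ~683 ms once the taps scale with fs
```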
Another potential issue with these state of the art filters is called insertion loss. This means the volume level is cut, based on the amount of correction used. An audio system with enough headroom can easily make up for this volume reduction, but it should be understood while designing an audio system.
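As a sketch of where that level cut comes from: a correction filter that boosts a dip has to be scaled down so its largest boost never exceeds full scale, and that scaling is the insertion loss. A toy example:

```python
import numpy as np

def insertion_loss_db(fir_taps) -> float:
    # Largest frequency-domain gain of the filter, in dB; the output must be
    # cut by this amount to keep a full-scale signal from clipping.
    magnitude = np.abs(np.fft.rfft(np.asarray(fir_taps, dtype=float)))
    return max(20 * np.log10(magnitude.max()), 0.0)

print(insertion_loss_db([2.0]))             # a flat +6 dB boost -> ~6 dB of insertion loss
print(insertion_loss_db([0.5, 0.25, 0.1]))  # a filter that only cuts -> 0 dB needed
```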
One last benefit of using a computer for digital room correction is the ability to play discrete immersive albums, and even on rare occasions Atmos ADM files. I've purchased ADM files through Bandcamp, but these are certainly not the norm. Playing ten or twelve channel discrete immersive DXD content with 500,000+ tap filters is the height of living, with respect to high fidelity music playback. It takes a computer to make it in the studio, and to play it at home.
Note: One method I've been experimenting with is using an Aurender music server to play immersive music, up through twelve channel DXD, and routing the audio through a computer for DSP, then on to my Merging Technologies hardware for playback. This method will continue to evolve and improve ease of use in the long run for music lovers.
Hybrid Approach - There is a hybrid approach between using a computer for everything and nothing. As we move away from using a computer, the solutions usually get easier to use, but performance does decrease. Whether or not that performance decrease matters is up to each listener. This guide is about presenting facts, not making friends.
One example of this hybrid approach to DSP is decoding and measuring on a computer while running the room correction filters on an audio hardware device. In my system I have this setup for testing as well as the previously mentioned computer only approach. I use the Sonarworks SoundID Reference application with a Sonarworks microphone to measure my system in my room. The process takes about an hour, but is fairly idiot proof. The app walks one through each microphone placement and tells the user what to do at each step along the way. This is different from Audiolense which requires either serious knowledge or a professional such as Mitch Barnett working with the user.
After running the measurements, SoundID Reference displays a few options and shows a frequency response curve. It's possible to make manual adjustments or select from built-in options such as the Dolby curve. I've done both, but usually wound up using the Dolby option. After a curve is selected, the filter is exported to work with a number of hardware devices. In my case I uploaded the filter to my Merging Technologies Anubis and enabled it.
Once the filter is enabled on the Anubis, thinking about filters is over. It operates on all audio signals routed through the device, no matter the sample rate or channel count, without user intervention. This is convenient. Changing channel counts while using Hang Loose Convolver can involve manually switching filters, to ensure the channels are routed to the correct loudspeakers.
In the real world playback looks fairly similar to the computer only approach, with the exception of not running convolution software on the computer. This means no VST plugin in an app like JRiver or no Hang Loose Convolver accepting audio from Apple Music or Audirvana before outputting to the same Merging Anubis.
The downsides of this Hybrid stem from a limited measurement and filter creation application and hardware horsepower. Using the double bass in the center channel example from above, when I play this track and use SoundID Reference in my system, none of the center channel bass is offloaded to the subwoofer. Because my center channel, like most center channel speakers, is low frequency limited, I just don't hear the full capabilities of the double bass in the center channel.
Other negatives are the mixed filter mode using FIR and IIR, which introduces phase changes, and the lack of filter taps for low frequency control compared to a full computer solution, which equals less resolution.
One really nice feature of this hybrid approach is a zero latency mode. It's possible to have zero latency, but the amount of correction is limited. I've used SoundID Reference for testing video playback, and everything is perfectly in sync. However, minimum phase FIR filters, as mentioned above, also offer zero latency and control bass frequencies very well.
This hybrid approach is most commonly used in professional studios rather than audiophile listening rooms. However, it is a nice option to have. I'm glad it is an add-on to the Merging Anubis because I can use it if I need it. But, I wouldn't go out of my way to get it, if I already had Audiolense and a convolution engine running on a computer. The audio output just doesn't sound as good to me, most likely because it's objectively less precise due to hardware and software limitations.
Traditional Processor - This approach is the most popular and by far the easiest. Traditional processors such as those from Trinnov, Marantz, and Anthem have built-in immersive audio decoders and digital room correction. Dirac, RoomPerfect, and Audyssey are some of the bigger names embedded into traditional processors.
The typical workflow for decoding and room correction couldn't be easier. Connecting an Apple TV to a processor and streaming Apple Music or Tidal will get Dolby Atmos music flowing into the system with a couple clicks. Playing TrueHD Dolby Atmos can be done by putting the MKV downloads or Blu-ray rips onto an NVIDIA Shield connected to the processor and tapping play. Fully decoded and processed with the tap of a finger.
The quality of digital room correction in traditional processors is all over the board. It ranges from those that make the sound worse to those that do a really great job. It all comes down to the sophistication of the software and the horsepower of the hardware.
Taking measurements involves zero computers and often a microphone made specifically for the processor or brand of processors. Just add to cart, connect it when it arrives, and run through the setup wizard. Sophisticated products like the Trinnov Altitude 16 or 32 enable one to VNC into the processor to make adjustments and see an approximation of the end results of the DSP. The beauty of this is a good Trinnov dealer will handle all of the configuration, and use the brilliant team at Trinnov for backup in tough situations.
Similar to the hybrid approach, running DSP on A/V hardware hits its limits due to lack of horsepower. A limited number of filter taps, and IIR filters or mixed mode IIR and FIR filters, equate to sound quality that isn't as good as the computer only approach. However, and this is a big however, because these processors are designed to work with video simultaneously, they must keep latency minimal, which lowers the amount of DSP processing they can do. The products are working as designed.
On the other hand, a processor like a Trinnov Altitude 32 uses a computer internally and technically could adjust for latency like JRiver does, but I don't believe it has enough processing power to run 65,536 tap FIR filters to keep everything in phase and control bass down to 2.2 Hz.
The ease of use of these processors can't be overstated. In fact, I've been working with a very high end dealer for the last several months on an immersive system design, and I recommended a Trinnov Altitude processor for the specific installation. It's the right horse for many courses. In this case, the listener forwent playback of discrete DXD and state of the art room correction, in favor of great room correction and ease of playing Atmos content from an Apple TV and NVIDIA Shield. The fact that the Trinnov is a Roon Ready endpoint for up through eight channel PCM is also a bonus that factored into the final decision.
Given that the traditional processors are all proprietary, it's hard to say what's going on inside. The user manuals give some clues and show users how to use different filter modes, such as IIR, FIR, and mixed FIR+IIR, but that's a very high level look into what's going on. I wish some of them would reveal more details because they really have a lot to offer as opposed to some of the mass market processors using the cheapest and weakest chips to get audio decoded and processed.
When I began my immersive audio journey I planned on using a Trinnov Altitude processor as my method of playback. I would still like to make this happen, more because I want to experience it first hand in my own room and I want to know how it works as well as I can. This would enable me to educate readers about the product much more.
Last, as a music only audiophile I don't want a screen in my listening room. Call me old school, but that's just the way I like it. A traditional processor necessitates the use of a screen of some type. There are possible ways to use some processors without a screen, but as of right now, I wouldn't wish it upon anyone who likes a fiddle-free listening experience. I'm looking for and testing solutions that enable me to use a device like the Arvus to decode and output Atmos from an Apple TV and Shield, without a display. The Shield can be operated without a display, but the Apple TV is another story. I have some ideas.
Digital Signal Processing Wrap Up
DSP, both decoding and digital room correction, not to mention all the other items for which DSP is used, is a cavernous hole with many unknowns to all but the most geeky audiophiles. I don't consider myself an expert by any stretch of the imagination, but I have used many of the products and talked to several true experts in the field.
Like many technologies, digital signal processing is limited by the sophistication of the software and the horsepower of the hardware. In addition, in the hands of a professional, DSP can be magical, while in the hands of a novice it can enable sound quality that reaches new lows. Tread lightly and call in the pros when needed.
Immersive audio playback involves decoding and room correction, which are commonly linked to the same device. They don't have to be, but they usually are. Understanding oneself is key to making a decision about which route will best work in any given system. A computer only route will provide the best objective audio performance, while the traditional processor route will provide the most convenience and ease of use. For many the key will be bringing these two ends of the continuum as close together as possible, and as of today this is done with a Trinnov Altitude processor.
I am sold on the state of the art room correction offered by Audiolense and filters created by Mitch Barnett of Accurate Sound. The sound quality is second to none, both subjectively and objectively. This is the only way to create a high end immersive experience on the same level as many two channel audiophile systems.
Further Reading
To jump right in, these 5 metrics are essential when choosing the correct data acquisition hardware:
Signal/sensor type compatibility
What measurement resolution do I need?
Maximum sampling rate required
What level of accuracy do I need?
Does my DAQ system need isolation?
Scrolling further down we will break down all the important questions you need to ask yourself in order to choose the best data acquisition system for you. Questions are grouped into three main sections:
Technical considerations
Ease of Use, Features, and Support
Cost considerations
The essential function of a DAQ system is to make measurements from various sensors and electrical signals, e.g. to measure some kind of physical phenomenon. Therefore, making a comprehensive list of the types and quantities of signals and sensors that you need to measure is the first thing that you should do. This information will drive everything that follows.
There are numerous sensors and transducers for measuring temperature, vibration, pressure, strain, voltages, currents, resistance, and many more.
Today's DAQ systems can measure much more than analog signals like voltages and temperatures. There are digital signals from encoders, RPM sensors, gear tooth, and tacho sensors, for example, that require a different kind of input. These 'counter' inputs must be sampled much faster than the analog inputs.
There is also digital bus data that can be displayed and captured. A few examples:
CAN bus data found in every car and truck
Data transmitted across RS232 and Ethernet interfaces
Position data from GPS sensors, and 3D orientation data from IMU (inertial measuring units)
EtherCAT data found in countless industrial and process control environments
ARINC 429 and MIL-STD-1553 bus data found in commercial and military aircraft
Do you need to capture data from any of these?
Furthermore, there are industrial video cameras that can capture video in sync with your other data. Is this a requirement for you? Will it be in the future?
The most common measurement resolution today is 16-bit, but there are higher-end DAQ systems that provide 24-bit resolution, which is considered essential, especially for noise and vibration applications, and 12-bit resolution systems for low-end data logger applications.
Each bit doubles the possible resolution of the DAQ system. That means that 24-bit is not just 50% more resolution than 16-bit resolution. In fact, 24-bit ADCs provide 256 times greater possible resolution than 16-bit ADCs.
Theoretical maximum values based on bit resolution:
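These are simple arithmetic, so here is a small sketch that computes them: 2^bits quantization steps, and roughly 6.02 dB of theoretical dynamic range per bit.

```python
import math

# Theoretical ADC resolution: 2**bits quantization steps, ~6.02 dB per bit.
for bits in (12, 16, 24):
    steps = 2 ** bits
    dyn_range_db = 20 * math.log10(steps)
    print(f"{bits}-bit: {steps:>12,} steps, ~{dyn_range_db:.0f} dB theoretical dynamic range")

print(2 ** 24 // 2 ** 16)   # -> 256: 24-bit offers 256x the steps of 16-bit
```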
Dynamic range is the difference between the smallest observable part of the signal and the largest. Systems that have higher dynamic ranges provide more vertical axis resolution. Another way of looking at this is to ask the next question.
This goes hand-in-hand with dynamic range. You should have an idea of the smallest change in a signal that you want to see.
Systems with the highest dynamic range provide the best possible vertical axis measuring resolution. But you also have to consider the time axis, because if changes in amplitude fall between samples, they cannot be measured.
Systems with greater than 100 dB dynamic range and at least 200 kS/s sample rate can handle most applications. For shock, noise, and vibration applications, a dynamic range of 120 to even 160 dB is highly desirable.
In an ideal world, measuring systems would be 100% accurate, but perfection is not really possible. There will always be some inaccuracies introduced into the measurement chain, starting right at the sensor itself.
The most common factors contributing to the inaccuracy of any measuring system are gain errors, offset errors, and temperature drift errors in the analog domain and the time base accuracy of the ADC (analog-to-digital converter) and digital counters.
A good DAQ system will specify its accuracy in a clear and consistent way. For example, time base accuracy is normally expressed as 'Typical 5 ppm, Max: 12 ppm,' meaning that the oscillator might typically deviate from its nominal value by up to 5 parts per million, but the maximum deviation is 12 ppm, in this example.
Gain accuracy is normally written as a percentage, like '±0.05 % of reading.' So if the system is reading +1.000 V, the gain error could be as much as ±0.0005 V (±0.5 mV). It can also be written as a percentage of full-scale.
Offset accuracy is written as an absolute value, like '±0.02 mV,' which is easy to understand.
Temperature drift can affect both the gain and offset accuracy. It is written like '±50 ppm/K of reading ±200 μV/K,' for example. This ties the error directly to the change in ambient temperature, in Kelvin.
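Putting those example figures together, a rough worst-case error budget for a single reading might look like this. The numbers below are simply the illustrative specs quoted above, plus an assumed 10 K excursion from the calibration temperature.

```python
# Worst-case error estimate for a +1.000 V reading, using the example specs above.
reading_v = 1.000
gain_error_pct = 0.05           # ±0.05 % of reading
offset_error_v = 0.02e-3        # ±0.02 mV
gain_drift_ppm_per_k = 50       # ±50 ppm/K of reading
offset_drift_v_per_k = 200e-6   # ±200 uV/K
delta_temp_k = 10               # assumed 10 K away from the calibration temperature

error_v = (reading_v * gain_error_pct / 100
           + offset_error_v
           + reading_v * gain_drift_ppm_per_k * 1e-6 * delta_temp_k
           + offset_drift_v_per_k * delta_temp_k)

print(f"worst case: +/- {error_v * 1000:.2f} mV on a {reading_v:.3f} V reading")
```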
You're probably thinking 'I don't want any noise!' Of course not, but in the real world, electrical and magnetic 'noise' is bombarding DAQ systems constantly. Even when they are shielded properly, sensor cables are like antennas. In addition, there are nearby sources of EMI and RFI in most DAQ measuring environments.
The key is to find the system with the best isolation and the best grounding practices. Please see the next question for more details about isolation.
Electrical galvanic isolation is important for safety, for example, when you're dealing with high voltages. But it also preserves the integrity of all measured data regardless of voltage level, by eliminating or reducing noise, crosstalk, and ground loops, which can either obscure or completely destroy your measurements. In general, we know two types of isolation:
Channel-to-channel isolation means that the noise or crosstalk between and among all channels is prevented.
Input-to-output isolation means that whatever the sensor or measuring wire is connected to in the outside world is isolated from the DAQ system.
Systems with both channel-to-channel and input-to-output isolation provide the most robust isolation.
Learn more:
The Importance of Isolation in Data Acquisition Systems - Electrical isolation is a separation of a circuit from other sources of electrical potential. Learn about the importance of galvanic isolation in DAQ systems.
When and Why to Use Isolated Amplifiers? - Learn why the usage of isolated amplifiers is highly recommended, in order to ensure reliable measurements and protect your instrument from damage.
Filtering is one of the most important elements of analog signal conditioning. Filtering allows you to reduce or completely block frequencies above or below a selectable frequency component from being recorded. There are also band-pass and band-stop filters, allowing you to select a certain range of frequencies for inclusion or exclusion.
Filtering is sometimes needed to block 50 or 60 Hz noise created by nearby electrical power systems, or higher frequencies from electromotors, generators, power supplies, fluorescent lights, etc. There are many sources of electrical noise that can interfere with your measurements.
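As a small illustration of that kind of mains-hum filtering, here is a generic scipy sketch (not any particular DAQ product's implementation) that removes a 50 Hz component from a slower measurement signal with a notch filter:

```python
import numpy as np
from scipy import signal

fs = 10_000.0                     # sample rate in S/s
t = np.arange(0, 5.0, 1 / fs)

# A slow 10 Hz "measurement" contaminated with 50 Hz mains hum
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)

# Narrow band-stop (notch) filter centered on the mains frequency
b, a = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
filtered = signal.filtfilt(b, a, noisy)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"hum before: {rms(noisy - clean):.3f} RMS, after: {rms(filtered - clean):.3f} RMS")
```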
Ask yourself if you need non-destructive filtering. Would it be helpful if the filters applied before recording could be removed afterward, or changed completely? All Dewesoft data acquisition systems have filtering capabilities.
An 'alias' is a false signal that is created when the signal is moving too fast compared to the sample rate. Aliasing must be prevented before the measurement is made, otherwise, there is no way to recreate the real signal later.
There are two ways to prevent aliasing:
Always set the sample rate to be at least twice as fast as the highest frequency observed. (Sometimes this is not possible because the highest frequency cannot be predicted.)
Choose a DAQ system with a built-in anti-aliasing filter (AAF) system. Essentially this is a very steep low-pass filter that is set automatically to a percentage of the selected sample rate. This AAF blocks frequencies that are too high for the selected sample rate to reproduce, and thus prevents alias signals from being created.
Any DAQ system that is used for noise, shock, and vibration, or applications that involve AC waveforms should have robust, completely automatic anti-aliasing built-in, to ensure the integrity of your measurements.
The job of a DAQ system is to reproduce the character of the signal with as much fidelity as possible. A good rule of thumb is to pick a system that is capable of sampling at least 10 times faster than the fastest signal that you need to capture.
For example, if you need to capture signals that have 20 kHz content, your system should be capable of sampling at 200 kHz. You might read that you only need 2x over-sampling, i.e., the Nyquist frequency. Just note that only 2 samples per period cannot reproduce the appearance of the waveform.
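A tiny numerical illustration of both points, using nothing more than basic sampling math:

```python
import numpy as np

fs = 10_000.0       # 10 kS/s sample rate -> Nyquist limit of 5 kHz
n = np.arange(32)   # sample indices

# Aliasing: a 9 kHz tone sampled at 10 kS/s is indistinguishable from a
# (phase-inverted) 1 kHz tone.
tone_9k = np.sin(2 * np.pi * 9_000 * n / fs)
tone_1k = np.sin(2 * np.pi * 1_000 * n / fs)
print(np.allclose(tone_9k, -tone_1k))    # True

# Exactly 2 samples per period: a 5 kHz sine sampled at its zero crossings
# disappears entirely, which is why ~10x oversampling is needed to preserve shape.
tone_5k = np.sin(2 * np.pi * 5_000 * n / fs)
print(np.allclose(tone_5k, 0.0))         # True
```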
Most engineers have a limited budget every year, and they tend to favor systems that are easy to upgrade or add on to next year, and in the years that follow.
Product lines that are built around a common 'ecosystem' are easily interconnected and share modular elements. They can be synchronized together easily. They share a common software platform that integrates them all and eliminates having to learn to use completely different systems.
When looking at a manufacturer's offerings, find out if they are built around such an ecosystem: can you interconnect them, and do they share common hardware and software?
This should be easy to analyze - it's all about the environment in which the DAQ system is expected to work.
Certainly, any mobile environment, like inside a car, truck, train, etc., requires a portable system. Besides being generally smaller and lighter, portable systems typically run from either AC or DC power, so that they can be easily powered inside vehicles. Many of them also run from internal batteries, making them ideal for fieldwork. Battery-powered data acquisition systems have the added advantage of having a built-in UPS, and they don't draw on the power system if that's what you're trying to measure.
If your environment is a nice clean laboratory or manufacturing floor then a benchtop system is generally fine. Benchtop systems are typically AC-powered and are larger than portable ones. If you intend to permanently mount your system in a 19-inch rack enclosure, then a rack-mounting system makes the most sense.
This is a matter of your preference. PC-connected DAQ systems have the advantage of being smaller because they don't have a computer, display, or hard disks inside of them. They are generally less expensive for the same reason. And, you can use your own computer.
But if you're going to have to leave the system in a non-secure environment, a stand-alone system might be a better choice.
If you're bouncing around in cars and trucks, for example on test tracks, then there's going to be a lot of shock and vibration. You know your environment better than anyone, so you probably already know the answer to this question.
It's something worth thinking about. Most DAQ systems are not designed for high shock and vibration environments. Always check the shock and vibration ratings.
Systems rated in the 100 g range (EN 60068-2-27) are considered rugged. Vibration is also a factor, particularly random vibration. Systems tested to MIL-STD-202G Method 214A, test conditions II-D and above are considered rugged.
Another question worth thinking about is operating temperature, dust, and water protection. Environmental testing, either in nature or in chambers, always requires systems that can tolerate a much wider range of temperatures than those found in factories and laboratories.
Field testing of any type when you're taking the instruments outside can lead to temperature extremes. It can also lead to salty air, heavy humidity, fog, dust, and even rain.
If this is your working environment, then a system rated IP65 or IP67 is a very good idea, since it will be sealed against the ingress of particles, including water. Systems rated to IP67 and with operating temperatures from -40° to 85°C (-40° to 185°F) provide excellent water and particle ingress protection and temperature performance.
DAQ manufacturers vary in terms of how much and which information is provided within their specifications. In addition to performance characteristics like the number of channels, selectable gain ranges, top sample rate, et al, it's essential to look at the accuracy and isolation specifications.
It's easy to be impressed by specification numbers with lots of precision, but that's not the same thing as accuracy! For example, if you specify that the time is 2:00:00.000 PM, but it's really 3:15 PM, that's a lot of precision, but very poor accuracy.
It's a good idea to look at how consistently specifications are given across different signal conditioners, for example. Is an isolation voltage shown on several modules, but not mentioned at all with other signal conditioners? That could be a red flag. Always make sure that you know what the most important specifications to you really are, and how they apply to all important aspects of the product.
It's great to have an operating manual, but only if you can understand it. It should be well-written in your language, and clear. Ideally, the manual will also be available as a searchable PDF file or even available online as a website with extensive search features.
And a great convenience is HELP built right into the software itself. When running the DAQ software, does pressing F1 bring up context-sensitive help based on what you're doing in the software? Is it searchable? Does it contain the entire manual?
Also check if there are any other training resources available like webinars, online training courses, and how-to guides, that will help you achieve your goals.
It's always a good idea to ask about what kind of support you can expect after receiving your new DAQ system:
Is there someone to call?
Do they speak my language?
Is the support staff technically proficient?
What if there is a really deep question, will they go to engineering to get an answer for me?
Are there training classes or seminars available online or in person?
Can I get one-on-one training at my facility if I need that?
Are there any technical discussion forums available online?
You get the idea. No one expects all of these things to be free, but they should be available and performed at the most professional level, focusing on you, the customer.
This might be the most important thing to investigate before buying any DAQ system. Ideally, you will have had a product demonstration, but it should also be possible for you to download the software and run it in a demonstration mode on your own computer.
Always ask for this, so that you can get the feeling of the interface, and gauge how long it will take you to achieve basic proficiency.
There is a great probability that you will need your DAQ system for different measurement tasks and applications. Reconfiguring the system to fit a certain task may involve both hardware and software.
For example, if your DAQ system has plug-in signal conditioning modules, you may need to exchange some of them for different measurements. Is this easy to do in the hardware, and does the software automatically recognize the new modules?
Or, you may need to connect an external module or system to your main one - is this also easy to do? Does the software see the new channels and configure them in the software? Is synchronization handled?
The ability to freely name and save an unlimited number of setups should be a basic function of any good DAQ system. However, it's always a good idea to check that this is supported, and how it is implemented.
This will save you a lot of testing preparation time and will enable you to easily repeat frequent measurements.
Most DAQ systems provide some level of built-in data analysis that can be performed after the data has been acquired. It's a good idea to find out exactly what can be done.
For example, if filtering was applied during measurement, can it be changed or even removed afterward? Can you create math functions and then run them on the recorded data? Can you make new displays and compare data easily?
Does the DAQ system allow you to put this software on your own computer and do all of these same analytical functions? Is it free of charge or are you required to have a paid license or a hardware dongle? Can the software be installed on other computers as well, either for free or with a paid license?
Obviously, you'd probably like to be able to do all of these things, and without any fees. But some DAQ makers do charge for this, so you should find out ahead of time.
Unless your data will never require analysis outside of the DAQ system itself, it should be possible to export it in one or more common data formats. It should also be possible to select a portion of the data for exporting, and which channels are to be included.
Common data analysis formats include:
Flexpro®
Excel®
Diadem®
MatLab®
Famos®
UNV / UFF (Universal File Format)
Text/CSV
Nsoft®
RPC III.
But there are many more, so you should verify that your DAQ system supports the specific format that you need. Also look at the import capabilities of your analysis software, because many of them can accept a number of file types.
As mentioned above, if there is a particular data analysis program that your company is using, you should verify that your DAQ system is capable of exporting to it. The most popular analysis programs include Excel® for limited data sets and Flexpro® and Matlab® for virtually unlimited data sets and a wide array of built-in analytical functions.
But there are other programs on the market. Some companies have even developed their own analysis tools in-house, and these are written to accept various file formats, almost always including a basic text format for universal compatibility.
DewesoftX data acquisition software includes in-depth math for digital signal processing in a single software package.
Some companies offer attractive pricing on their data acquisition hardware. But you need to be careful and check if the software that enables you to use that DAQ hardware is included in the offered price.
Also, make sure you know your software requirements! Check if the features you require are included in the bundled software package or require any upgrade options. Are those free, or do they require extra investment?
Software is never 'done' - it's always being improved and developed. There will be new releases that not only correct bugs but which add important new features and capabilities.
The majority of data acquisition companies require you to subscribe to an annual support contract, either for the software itself or for the entire system. This cost is often calculated as a percentage of the overall system cost. Or, they charge for new releases.
Dewesoft considers the software to be an essential living part of the data acquisition system and does not charge for its updates and new releases. All releases are free forever for all existing customers. Dewesoft puts out 4 major software releases each year, adding new features, performance optimizations, new device support, and bug fixes.
One of the 'hidden' costs of owning a DAQ system may be an annual maintenance fee. You should ask right up front if there is a mandatory maintenance fee, and if so, what does it include and how is it priced? Ideally, there will not be one, but you should know upfront before you order the system, because it is an expense that will have to be budgeted ahead of time.
All measuring systems require calibration. You might assume that a brand-new system would arrive fully calibrated from the factory, and that 'traceable' calibration certificates would be included. Don't take that for granted and verify that with the manufacturer.
If they include them, that's great. If not, how much do they cost? Now, what about next year? And the year after that?
Calibration is normally only valid for one year, so you should find out what your options are for future calibration. If your company has a calibration lab, they may be able to purchase a calibration kit from the DAQ manufacturer that will allow your company to do calibration in-house.
If not, these are things to consider:
Is there a certified calibration lab in your area that can be hired to perform the service?
Does the DAQ manufacturer themselves offer an annual calibration service?
How quick is their turn-around, and where is it done?
If you need to send the unit halfway across the globe back to the manufacturer headquarters for recalibration, there will be costs and time associated with that. Ask these questions upfront.
The warranty period is another important thing to know about. One year is a good length for a warranty from the manufacturer, and Dewesoft includes this as standard. When comparing DAQ systems you should always ask about the warranty and what it provides.
As already discussed above under What Technical Support Is Available to Me?, good technical support is essential when it comes to data acquisition and similar systems. Ask yourself those questions. If technical support is available, it is usually not free. Double-check the associated costs to see what could arise during the lifetime of your data acquisition system.
Dewesoft offers FREE worldwide and local technical support to our customers. Our global software and hardware technical support is available in the English language, and our support agents are able to connect to your measurement unit with your permission and help you troubleshoot your measurements. Our subsidiaries in all major countries also offer free technical software and hardware support in your local language.