I am using TDMS direct integration in NI-DAQmx logging together with the Logging.SampsPerFile property, and I am seeing a problem where the timestamp is randomly missing from the automatically incremented log files. Logging.SampsPerFile is set to 10,000,000, which with 32 channels gives me a little over a gigabyte per file. Looking at the resulting files, about a quarter of them have no t0 or dt information.
In short: TDMS direct integration in NI-DAQmx logging works, but sometimes the timestamp is missing.
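For reference, the configuration is roughly the following, shown here as a nidaqmx-python sketch rather than my actual code; the module name, sample rate and file path are placeholders:

```python
import time

import nidaqmx
from nidaqmx.constants import AcquisitionType, LoggingMode, LoggingOperation

# "cDAQ1Mod1" and the path below are placeholders; adjust to your own hardware.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:31")
    task.timing.cfg_samp_clk_timing(100_000.0, sample_mode=AcquisitionType.CONTINUOUS)

    # Stream straight to TDMS (no reads in the application) and roll over to a
    # new, automatically numbered file every 10,000,000 samples per channel.
    task.in_stream.configure_logging(
        "C:\\data\\run.tdms",
        logging_mode=LoggingMode.LOG,
        operation=LoggingOperation.CREATE_OR_REPLACE,
    )
    task.in_stream.logging_samps_per_file = 10_000_000

    task.start()
    time.sleep(60)  # logging runs inside the driver; wait as long as needed
    task.stop()
```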
Selection of DAQ card
I want to measure the temperature of a system. I am using 8 thermistors (4 of 5 kΩ and 4 of 10 kΩ) at 8 different points of the system. Could you please let me know which DAQ card would be best for this application?
NI USB-6001 for sale
Hi
I have an NI USB-6001 for sale, used only a couple of times. Does anyone have suggestions on where I could sell it, or how much it's worth?
Thanks
Synchronising AC Voltages and currents in CompactDAQ (AI input triggering AO output)
Dear Community:
This is more of a hardware question than a LabVIEW one. I am planning a setup and I am not certain I can pull it off with my planned hardware.
The setup:
- I need to measure 400 VAC three-phase voltage at 50 Hz, which I plan to do with e.g. an NI-9242 mounted on a CompactDAQ chassis.
- I also need to generate 50 Hz AC low-current output signals on 8 channels, which I was hoping to do with an NI-9264 voltage module plus signal isolators, mounted on a separate CompactDAQ chassis (this is currently part of my project requirements, but…).
The application:
I need the AO signals to be in phase with their respective AI voltage line measurements so that they simulate resistive loads or generation on those voltage lines. In the application I plan to modulate the amplitude of the signals, which can be positive or negative (i.e. they can be in anti-phase with their voltage, simulating generation). I was hoping to deploy the application on a host computer.
The plan:
I was planning to use:
a) The Time Sensitive Networking versions of the CompactDAQ chassis (i.e. NI-9185 and NI-9189), daisy-chained so the modules can be synchronised. There are only these two chassis, and they can be put on a dedicated network confined to short distances (<10 m per node).
b) An additional voltage module, an NI-9205, that would also measure one of the voltage lines (through a signal isolator), because this module can generate an analog "comparison event", which I would set to the positive-going zero crossing of one of the line voltages.
c) The aforementioned comparison event as the DAQmx trigger for the AO signals in the NI-9264 module. All channels can go out in a single task, and I would take care of writing them in phase with their corresponding voltage in software. They would run in continuous regeneration mode.
It is not critical for the AO signals to be generated at the same time as the trigger; they can be a couple of cycles behind and refreshed only every few cycles. That means that whenever the host decides to change the amplitude of a signal, it can write it to the output buffer, and the AO channels can repeat those samples until the host provides new ones. This is acceptable for my application.
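To make the intended wiring concrete, here is a rough nidaqmx-python sketch of items (b) and (c): an AI task whose analog start trigger produces the comparison event, and an AO task that starts on that event and regenerates its buffer. The module names, rates and the exact terminal string are placeholders, and whether this route works across two TSN chassis is exactly what I am asking.

```python
import math

import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode, Slope

# Placeholder names: "cDAQ1Mod5" = NI-9205 (trigger source), "cDAQ2Mod1" = NI-9264 (AO).

# AI task on the NI-9205 channel that watches one line voltage; configuring an
# analog-edge start trigger is what makes the "analog comparison event" available.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod5/ai0")
ai.timing.cfg_samp_clk_timing(10_000.0, sample_mode=AcquisitionType.CONTINUOUS)
ai.triggers.start_trigger.cfg_anlg_edge_start_trig(
    trigger_source="cDAQ1Mod5/ai0", trigger_slope=Slope.RISING, trigger_level=0.0)

# One 50 Hz cycle at 10 kS/s as a placeholder waveform for all 8 AO channels.
n = 200
one_cycle = [math.sin(2 * math.pi * i / n) for i in range(n)]

# AO task on the NI-9264: continuous regeneration, started by the comparison event.
# The terminal name follows the usual "/<device>/AnalogComparisonEvent" form; valid
# routes can be checked on the Device Routes tab in MAX.
ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("cDAQ2Mod1/ao0:7")
ao.timing.cfg_samp_clk_timing(10_000.0, sample_mode=AcquisitionType.CONTINUOUS,
                              samps_per_chan=n)  # buffer holds exactly one cycle
ao.out_stream.regen_mode = RegenerationMode.ALLOW_REGENERATION
ao.triggers.start_trigger.cfg_dig_edge_start_trig("/cDAQ1/AnalogComparisonEvent")
ao.write([one_cycle] * 8, auto_start=False)

ao.start()  # armed, waiting for the comparison event
ai.start()
```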
The question:
Can this level of synchronisation be achieved with CompactDAQ and a host computer?
My budget allows for the extra AI voltage module that generates the trigger and the Ethernet TSN versions of the chassis, but not for a controller or another RT-target device.
Many Thanks
Sincerely
José Zapata
cDAQ chassis error -88705 after running for a while
Hello,
On a test bench I am currently using a cDAQ-9178 chassis with several modules in it, replacing a FieldPoint system, moving from a PC running Windows 2000 to a PC running Windows 7.
I can establish communication with the chassis and run the bench with my LabVIEW application, but after a certain time an error -88705 appears in NI MAX, along with a LabVIEW error.
(images 1, 2)
After unplugging and replugging the USB cable (a 10 m extension is needed for physical reasons related to the bench layout), the chassis becomes operational again. The problem is that the test bench has to run for several days, so this behaviour is unacceptable.
The Active LED is OK while the bench is running, and off when I get the error.
Moreover, when running the PC with only NI MAX open, without LabVIEW, the error also occurs after a while!
Letting it run for several days, I got an error in NI MAX three times. (image erreur 8ada.png and the crash report)
At the moment I cannot see where the problem could come from, despite the various actions I have already taken. Does anyone here have leads on how to stop the communication with the chassis and modules from being unstable?
Actions already taken:
=> Reinstalled the NI drivers.
=> Reinstalled/installed different versions of NI-DAQ.
=> NI Device Loader service always OK (and performed the steps from https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P8wnSAC&l=fr-FR ).
=> Reset the MAX configuration database.
=> Disabled Windows' permission to turn off the device to save power.
System:
=> Windows 7 - LabVIEW 2017 - NI MAX 18.5
=> PC without internet access.
Best regards,
Nicolas.
Setting AO voltages in a watchdog task
I'm using two PXIe-6738 cards on Linux and version 18.1.0.49155-0+f3 of the drivers.
I want some analogue outputs to be set to zero if there is a problem with the software. So I have created a watchdog task on PXI1Slot2. I can add digital lines to the watchdog task using DAQmxCfgWatchdogDOExpirStates() and that is working. But if I try to add e.g. PXI1Slot2/ao0 to the task with DAQmxCfgWatchdogAOExpirStates() then I get the following error:
DAQmxCfgWatchdogAOExpirStates(): -200170, Physical channel specified does not exist on this device.
Refer to the documentation for channels available on this device.
This is the first context where it says that PXI1Slot2/ao0 (etc.) does not exist. I can use this and the other AO channels just fine in DAQmxCreateAOVoltageChan(), so clearly the channel exists.
Where could the problem be?
NI-DAQ 7.5
Good afternoon. Would anyone have the installer executable for NI-DAQ 7.5, to install on Windows 7?
DAQ device help
I'm using a BNC-2090A with a USB-6366 multifunction I/O device and I'm having trouble with some of the analog inputs. Certain analog inputs have what looks like an offset voltage. I checked the ground reference on my BNC cable, and the center pin reads zero volts relative to the shield. If I switch the BNC signal to AI0, for example, it reads zero volts as it should, but when I switch it to AI1 it reads 0.14 V. Some channels have an offset of more like 10 V. If it were just a fixed offset I could account for it in my data analysis, but I'm not sure whether there is some long-term drift going on. I will try some tests in the meantime. Perhaps it's some sort of compatibility issue between the devices... Does anyone have any insight on this?
Cannot deploy image to cRIO 9148
Hi. A client had some sort of power issue which caused an NI-9148 chassis to lose all of its configuration. MAX is able to find the unit, but on the System Resources tab both 'Free Physical Memory' and 'Primary Disk Free Space' show 'Error'. Attempts to reload the application image from an identical unit return 'Error -2147220304 at nisyscfg.lvlib:Restart.vi:1680001 <APPEND>' (and Progress = 0%). Is this a recoverable condition?
Acquisition card PCIe-6353 not recognized in NI MAX
Dear all,
I bought a PCIe-6353 acquisition card and have already installed it. Windows 10 recognizes it, but NI MAX does not. When I look under Devices and Interfaces, nothing appears. How can I solve this problem? Thank you in advance.
Alessio
OSError when using nidaqmx-python
I want to use nidaqmx-python to read data from NI cDAQ-9184.
The example code does not execute correctly, and Python raises: OSError: [WinError 193] %1 is not a valid Win32 application.
I use Anaconda 3 and created a virtual environment whose Python (64-bit) version is 3.6.9. I installed nidaqmx version 0.5.7.
I use Windows 10 (64-bit), with LabVIEW 2015 (32-bit) and NI MAX installed.
After some unsuccessful attempts, I found that I can execute the example code correctly in 32-bit Python (the 32-bit Python 3 installed at the operating-system level). But I want to use 64-bit Python in the future.
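For reference, this is how I am checking the bitness of each interpreter (plain standard-library calls, nothing nidaqmx-specific); WinError 193 usually means the process bitness does not match the DLL being loaded:

```python
import platform
import struct
import sys

# 64-bit CPython reports ('64bit', ...) and 8-byte pointers.
print(platform.architecture()[0])
print(struct.calcsize("P") * 8, "bit pointers")
print(sys.version)
```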
(I am sorry for my poor English expression.)
Reading BPW34 with PCI-6251 and BNC 2110
I am having some troubles reading a BPW34 with a with PCI-6251 and BNC 2110. I connected the diode directly to the BNC board AI0 connector and I am monitoring the signal with LabView. However, the signal is not as smooth as the one I see when I connect the the diode directly to an oscilloscope. How can I improve the reading on the NI board? Should I amplify the photodiode?
How to locate "nicaiu. dll" in my computer ? (nidaqmx)
My computer version is windows10 (64 bits), and I install labview2018 64 bits, NI MAX 18.0.
I want to know the file directory of nicaiu.dll which nidaqmx-python uses. (Because I want to know which dll I used in my code, lib64 version or lib32 version. )
Maybe in C:\Program Files (x86)\National Instruments\Shared\ExternalCompilerSupport\C\lib64\msvc or C:\Program Files (x86)\National Instruments\Shared\ExternalCompilerSupport\C\lib32\msvc or D:\Program Files (x86)\National Instruments\Shared\ExternalCompilerSupport\C\lib64\msvc ?
So how can I get it?
print(sys.version)
returns:
3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 14:00:49) [MSC v.1915 64 bit (AMD64)]
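The closest I have got so far is asking Windows directly which nicaiu.dll it resolves and actually loads by name; this is a generic ctypes sketch, independent of nidaqmx-python's internals, so I am not sure it reflects exactly what the package does:

```python
import ctypes
import ctypes.util
from ctypes import wintypes

# Which nicaiu.dll would Windows resolve from the DLL search path?
print(ctypes.util.find_library("nicaiu"))

# Load the DLL by name (the way a ctypes-based wrapper would) and ask the OS
# for the full path of the module that was actually mapped into this process.
dll = ctypes.WinDLL("nicaiu")
buf = ctypes.create_unicode_buffer(260)
ctypes.windll.kernel32.GetModuleFileNameW(
    wintypes.HMODULE(dll._handle), buf, wintypes.DWORD(len(buf)))
print(buf.value)
```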
Multiple Versions of BNC-2110?
Hi,
I'm trying to understand why several BNC-2110 boxes look different from one another, and what internal differences (if any) exist for those boards.
The attached photo shows the front panels of the two boxes, with their serial and model numbers below. The red rectangles in each photo highlight the differences that are important to me right now. Could you explain the internal differences between these products, and how I can access the CTR0 output signal using the 2110 on the left, which lists PFI12/P2.4 instead of CTR0 on its front panel?
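In case it helps, this is roughly what I intend to run once I know which connector carries the counter output. It is a nidaqmx-python sketch with "Dev1" as a placeholder name, and it assumes that on M/X Series devices CTR0's output can be routed to PFI12 (which, as far as I understand, is its default output terminal and matches the PFI12/P2.4 label):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    # 1 kHz, 50 % duty-cycle pulse train on counter 0 ("Dev1" is a placeholder).
    ch = task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=1000.0, duty_cycle=0.5)
    # Pin the counter output to the terminal the BNC labelled PFI12/P2.4 breaks out.
    ch.co_pulse_term = "/Dev1/PFI12"
    task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    input("Pulse train running; press Enter to stop.")
```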
Thanks, in advance, for your help,
Bruce
How Can I Quickly Test If My DAQ Device is Working Properly in RHEL 6.5?
Hi,
I have tried searching for this online, but I only found a recommended solution for Windows-based operating systems and not for Linux. Does anyone know how I can check the functionality of the NI-DAQmx board newly installed in my Linux PC?
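Would something like the following work? It is a rough sketch assuming the DAQmx C runtime is installed and that a version of the nidaqmx Python package that supports Linux can be used with it; the device names are whatever the driver assigns:

```python
import nidaqmx.system

# Enumerate every device DAQmx knows about and run each one's built-in self-test.
system = nidaqmx.system.System.local()
print("DAQmx driver version:", system.driver_version)
for device in system.devices:
    print(device.name, device.product_type)
    device.self_test_device()  # raises a DaqError if the self-test fails
    print("  self-test passed")
```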
thanks,
Dexter
cDAQ-9191
Why isn't the signal-strength LED on my cDAQ-9191 lit?
cDAQ-9189 Ethernet chassis loses communication with the host PC after the acquisition has been running for some time
The cDAQ-9189 Ethernet chassis is configured with static IP 192.168.1.10, subnet mask 255.255.255.0 and default gateway 192.168.1.1, and the host PC is configured with IP 192.168.1.5, subnet mask 255.255.255.0 and default gateway 192.168.1.1.
The chassis is detected in MAX and is reserved by the host PC. All the modules are detected and the self-test passes. The laptop OS is Windows 10, with DAQmx 17.6, and LabVIEW 2017 is used for development. I have been running the code from the development laptop for the last 3 months and have not faced any communication issues with the chassis. After the application was built, it was also tested from the development laptop.
Now I have a brand-new, updated Windows 10 laptop (8 GB RAM, 1 TB disk) that I want to use at the customer site, and the application is installed on it. Since then I have had trouble with communication losses to the chassis.
I ran a ping test in a command window alongside the running application to look for any losses, and found the error "General failure", which lasts a couple of seconds before the ping test goes back to OK. This causes the acquisition loops to throw errors and crash the application.
I tried the same ping test after installing the full LabVIEW development platform and DAQmx 17.6 on this laptop and running the code itself, but the issue is not solved. If I switch to the other Lenovo laptop, everything works fine and I have never seen this behaviour there. It happens randomly: sometimes the application runs smoothly for an hour or more, and sometimes it does not last more than 10 minutes. I don't see any memory or processor build-up.
We have updated the chassis firmware as well.
The screenshot was taken when the IP was set to 169.254.175.48.
Only one laptop is communicating with this chassis.
Any ideas or suggestions are welcome.
Thank you.
USB-6002 with thermocouple issue
Hi,
For ~2 years I have been using a K-type thermocouple with a USB-6002 to log temperatures from a hotplate in a basic LabVIEW program.
About 2 weeks ago the temperature measurement started showing a few random negative numbers (see attached graph).
The problem has steadily got worse, and I'd say about 1 in 5 measurements now gives a negative number. All the other measurements seem fine, giving a stable, correct temperature.
I have tried a new thermocouple, I have tried a different USB-6002, and I have tried using different AI ports, but I still have the problem.
I have made no changes to my LabVIEW program, but my tests so far seem to point to a software-related problem.
I'm using LabVIEW 17.0 (64-bit).
Thanks for any help,
Graham
USB-6356 has DC offset on one channel
I am working with a USB-6356. All the channels except one seem fine. On channel ai3, if I short AI3+ and AI3- to AI GND I see a large DC offset (~9 V).
If I drive that channel using one of the analog outputs, connecting AO0 to AI3+ and AO GND to AI3-, I can see my signal (a 100 mV sine wave), but it rides on top of 9 V.
If I connect any other channel in the same way, it works fine (a sine wave centred around 0 V).
M Series cards. AIGND/AOGND/DGND
I am using a PCI-6229 card. It is my default multifunction card for many projects.
I was just reading the user manual for clarity on a specific topic and was wondering what the difference is between the three grounds presented by the card: AI GND / AO GND / D GND.
While which one to use for what is obvious, the user manual also mentions that all three are connected together on the board.
I always have signal conditioners connected to all my I/O, so the loading on the pins is well below even the normal rating... for example, the DO pins are never loaded beyond 2 mA. That being the case, what will happen if I reference an analog channel to D GND instead of AI GND?
Just curious.