Channel: Multifunction DAQ topics
Viewing all 6882 articles

Stepper motor VI using DAQmx to output signals


Hello,

 

I am new to LabVIEW and am working on configuring a VI to program a stepper motor, targeting the NI myDAQ I will be using. The two outputs I am using are CW/CCW direction on the motor (digital) and the step pulse of the motor (high/low). This is what I have so far, but I am having trouble understanding how to use the DAQmx features. Any suggestions would be greatly appreciated. My VI is attached below. Also, there is a sink/source inconsistency error between my DAQmx Write function and the Clear Task function (which is why they are not wired together in my VI). The error says: "The type of source is 1D array of DAQmx Event. The type of sink is DAQmx Event." How should I fix this? Like I said, I am very new to LabVIEW, so please bear with me. Thanks in advance!


Synchronizing analog output using external trigger signal on usb-6001


Hello,

I'm working on a system that uses the analog output of my USB-6001 DAQ to drive scanning mirrors very quickly. This system has to be synchronized with my NI-SCOPE digitizer (PXIe-5122), which I'm using to acquire data. The external trigger signal that I'm trying to use to time both devices is 4 kHz (period of 250 us). I'd like to be able to update the analog output of my DAQ every 1, 2, or 4 trigger edges to a new but predetermined value. I need 2 analog outputs, doing x and y scanning respectively.

Can anyone shed light on how to do this?

1. I need to be able to read the trigger signal

2. I need to use the trigger to update the analog outputs on two channels

3. I need flexibility on when the change happens relative to the edge (maybe a trigger delay or holdoff).

Anything helps. If you can think of a different scheme to synchronize these devices, that'd also be appreciated.

PCI-6289 Output offset


Has anyone else here encountered the problem where their AO output sits at -10 V DC? The board has 2 outputs, and the one outputting -10 V changes from time to time. The board converts digital signals from my xPC target to analog signals for my E-625 amplifier, and takes in analog data from the E-625 amplifier and converts it back to digital for my xPC target. Any idea what could be causing this -10 V offset?

cDAQ-9178 LEDs not lighting, possibly broken?


I left my 9178 chassis with a couple of 9211s on it to monitor some temperatures. When I came in this morning, my VI had stopped running as it was programmed to do, and the data looked fine. When I went to begin another run, I received an error code suggesting there might be something wrong with my 9211s. When I went to check, the 9178's lights were off. I took off all of the 9211s and unplugged and replugged the power. The only response I get from the 9178 is an audible buzzing, but nothing else (no lights, no recognition in NI MAX, etc.).

 

Do you guys have any suggestions for troubleshooting? At this point I'm pretty sure I have to send it in for repair, but I want to be sure it is actually broken.

New Calibrated PCI-6224 doesn't work, but old one does


A new calibrated PCI-6224, bought to replace an expired PCI-6224, doesn't work. The old board works fine. The old board rev/lot is 191329D; the new one is 191329E. I couldn't find any notes about the changes in the new rev.

The SW is Windows C++ code written in 2009 and has been in use ever since.

The DAQ Diagnostic Utility Version 2.1 says all is good.

Using NI-DAQmx Version 17.0

Tracing steps with a debugger, I found:
No errors on the write setups.
error = DAQmxReadDigitalU32(taskHandle,-1,10.0,DAQmx_Val_GroupByChannel,(uInt32 *)&AcqData[AcqWriteIdx],bufLen,&sampsRead,NULL);

error: 0.
sampsRead: 0

On another process:
error = DAQmxReadCounterScalarF64(FreqtaskHandle,10.0,&SampleActual,0);

error: -200474

Any ideas as to what changed in the newer rev that affects SW?

Difficulty configuring Array microphone using NI-9215 and MAX


Hello All, 

             Apologies in advance for the stupid question.

As stated above, I'm having some difficulty configuring a G.R.A.S. Array Microphone Type 40PH through my NI-9215 DAQ. When I attempt to assign a global virtual channel, it's telling me "no supported devices found" when I try to configure it as a sound pressure sensor.

 

I can set it up as an analog voltage device, but the readings I get from it are nonsense; it's just white noise that doesn't react to sound differences at all.

 

I fear I have missed something basic!

 

Apologies if this is on the wrong discussion board.

 

Simultaneous AO and AI data acquisition with DAQ 6343 using Visual C# code


Hi, I have a question about using the DAQ and the code to control it.

I'm using 1 analog output channel, AO0, to generate a sine wave from 10 Hz to 3 kHz (10, 30, 50, 100, 500, 750, 1000, 3000 Hz). This signal enters an electronic circuit; from the circuit I have two outputs, which are connected to AI2 and AI3.

 

From the examples I took some code to generate and measure all 3 channels simultaneously; I'm using finite mode for the AI sampling.

Code:

// Create the master and slave tasks
inputTask = new Task("inputTask");
outputTask = new Task("outputTask");

// Configure both tasks with the values selected on the UI.
// SAMPLERATE = 5000 * 2; RATE = 5000 (samples per channel per second)
inputTask.AIChannels.CreateVoltageChannel("Dev2/ai2:3", "", AITerminalConfiguration.Rse,
    inputMinValue, inputMaxValue, AIVoltageUnits.Volts);
inputTask.Timing.ConfigureSampleClock("", SAMPLERATE, SampleClockActiveEdge.Rising,
    SampleQuantityMode.FiniteSamples, RATE);
outputTask.AOChannels.CreateVoltageChannel(Generator_AO, "", inputMinValue,
    inputMaxValue, AOVoltageUnits.Volts);

// Verify the tasks
inputTask.Control(TaskAction.Verify);
outputTask.Control(TaskAction.Verify);

FunctionGenerator fGen = new FunctionGenerator(outputTask.Timing, FREQ, SamplesBuffer.ToString(),
    CyclesBuffer.ToString(), "Sine", Voltage_GenSinus.ToString());
outputTask.Timing.ConfigureSampleClock("", fGen.ResultingSampleClockRate, SampleClockActiveEdge.Rising,
    SampleQuantityMode.FiniteSamples, SamplesBuffer);
output = fGen.Data;
writer = new AnalogSingleChannelWriter(outputTask.Stream);
writer.WriteMultiSample(false, output);

// Officially start the tasks
outputTask.Start();
inputTask.Start();

reader = new AnalogMultiChannelReader(inputTask.Stream);
data = reader.ReadWaveform(output.Length); // reader.ReadWaveform(5000);
dataToDataTable(output, data, ref inputDataTable, group_index);
inputTask.Stop();
outputTask.Stop();

The variables are RATE = 5000 (samples per channel) and SAMPLERATE = RATE * 2.

I know the sample rate must be at least twice the measured frequency (Nyquist). Everything works fine up to 100 Hz.

At frequencies above 100 Hz I get ten periods of sine in the results.

SamplesBuffer for the sine is 5000 from 10 Hz to 100 Hz and 1000 from 500 Hz to 3000 Hz.

Why does this happen? This cannot be changed while the task is running.

 

The graph of the measurement at high frequency is the last one.

The tech spec for this DAQ: input 500 kS/s, output 900 kS/s.

Why, at the first line, is it not measuring the signal as it should, like at the low frequencies?
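The number of sine periods that land in one finite acquisition follows directly from the window length: periods = frequency × (samples per channel ÷ sample rate). A small Python sketch of that arithmetic (variable names mirror the post; this is only an illustration of the math, not DAQmx code):

```python
def periods_captured(signal_freq_hz, samples_per_channel, sample_rate_hz):
    """Full sine periods inside one finite acquisition window."""
    acquisition_time_s = samples_per_channel / sample_rate_hz
    return signal_freq_hz * acquisition_time_s

RATE = 5000          # samples per channel, as in the post
SAMPLERATE = RATE * 2

for f in (10, 100, 500, 1000, 3000):
    print(f, "Hz ->", periods_captured(f, RATE, SAMPLERATE), "periods")
```

So with a fixed window, higher frequencies pack many more periods into the same plot, and changing SamplesBuffer between frequency ranges changes the window length as well, which is consistent with the plots looking different above 100 Hz.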

USB 6211 Isn't Recognised After Computer Restarts


Hi,

I'm using three USB-6211s connected via a USB hub, which has its own power supply. These work fine until I restart the computer. When I log on I get messages saying the devices are not recognised and only one of the DAQs appears in MAX. If I physically unplug each DAQ from the hub and then plug it back in again, they show up in MAX and I can run my LabVIEW program.

 

Eventually the DAQ devices will be kept inside a closed control box so the user will not be able to unplug them. Can anyone help me figure out what's going wrong? I thought it might be the USB hub but I've made sure it is using its own power rather than drawing it from the computer's USB connection.

 

Thanks for your help,

Dan


Analog Output with 2 channels writing only one value to both channels


Hello, I'm running LabVIEW 16 with DAQmx 16.1.0.

I saw this behavior first in my application and also in the DAQmx analog output example vi (attached).

In my application, I was sending a 2D DBL array to write to the outputs a0 and a1. Even though the two channels in the input array had different values, only 1 value would be written to both outputs.

For example, if the input array was {[1,2,3,4],[40,80,120,160]}, the output on both channels would be {[40,80,120,160],[40,80,120,160]}. When I switched the order of the channels in the input array, the other value set would be written to both outputs. I've seen the same issue when inputting a 1D waveform array. The issue is there both with the DAQ Assistant and with the DAQmx blocks.

Why is this happening??

Thanks
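One common cause of this symptom (a guess here, not confirmed by the post) is array orientation: DAQmx Write expects one row per channel (channels × samples), so a samples × channels array makes both outputs take values from the same row. A minimal Python sketch of the transpose idea (Python only for illustration; in LabVIEW the equivalent would be Transpose 2D Array in front of DAQmx Write):

```python
def transpose(matrix):
    """Swap rows and columns of a rectangular 2D list."""
    return [list(col) for col in zip(*matrix)]

samples_by_channel = [[1, 2, 3, 4], [40, 80, 120, 160]]  # 2 channels x 4 samples
wrong_orientation = transpose(samples_by_channel)        # 4 x 2: samples x channels
print(transpose(wrong_orientation) == samples_by_channel)  # transpose is its own inverse
```

If the application builds the array as samples × channels, a single transpose before writing restores the per-channel rows.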

Read Spreadsheet for DAQ Voltage Range Manipulation


Hello, 

 

I have an analogue output NI-9264 module and I'm trying to manipulate DAQ voltage ranges using the "Read Delimited Spreadsheet" VI. 

 

The data from the file path is being saved from Excel as a CSV (comma-delimited) file.


The data that is being passed from CSV is:

 

Min Voltage,Max Voltage

10,3

5,1

3,2

2,3

1,3

 

In order to get rid of the heading, I have set the "start of read offset" input to the value 13.

 

However, when running the VI, zeros appear on the top row, which is an undesired result.

 

How do you remove the first row from the indicator?
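Rather than guessing a character offset for "start of read offset", a more robust approach is to read the header row and discard it. Here is a Python sketch of that idea (the csv_text literal just mirrors the data from the post; in LabVIEW the equivalent would be reading the header line first, or using Delete From Array to drop row 0 of the result):

```python
import csv
import io

# Hypothetical in-memory copy of the CSV from the post.
csv_text = """Min Voltage,Max Voltage
10,3
5,1
3,2
2,3
1,3
"""

reader = csv.reader(io.StringIO(csv_text))
header = next(reader)  # consume the header row instead of guessing a byte offset
rows = [[float(v) for v in row] for row in reader]
print(header)   # ['Min Voltage', 'Max Voltage']
print(rows[0])  # [10.0, 3.0]
```

Skipping by row rather than by character count keeps working even if the heading text changes length.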

 

Many Thanks 

Accessing DAQmx resources/variables in DSM stresses the CPU load for the shared variable engine


I have got a mix of fieldpoint/compactFieldpoint/compactDAQ chassis that are used for surveillance and logging of remote stations infrastructure status (voltage/temperature/fuses).

My problem is that when accessing the shared variables in Distributed System Manager, the CPU load for the Shared Variable Engine increases from 10% up to 98%. Then the objects/resources lose connection to the MAX tasks (global virtual channels) before any values are assigned to the variables. After a minute or two the CPU load returns to normal again. Then an error message shows in DSM where the variables' values should be:

IAK_SHARED: Hex: 0x8ABC5003 The connection to the server has been lost.

I have googled the explanation of the error, but I can still see the variables updating and logging in MAX/Historical Data while the situation lasts.

When launching the DSM the CPU also jumps to close to 100% and settles after a short while.

 

Anyone having the same issue or a solution to this problem?

 

Best regards

Kenneth H

How to Control DAQ Pinouts using an Array which connects to Physical Channel Node on DAQmx


Hello, 

 

I was trying to work out a way to manipulate several DAQ module pinouts using an index array or a script (.txt) file mechanism which would enable different pinouts to be switched on and off during simulation. I was hoping this could be achieved by passing it through a text file, but I haven't been able to find any examples on the forum or online of how to achieve this. Instead, I have tried to use a Physical Channel Array, but again this puts constraints on which DAQ module pinouts I actually want on and off. Does anyone have an example or an idea of how to achieve what I'm looking for that will be compatible with DAQmx functions?

 

The idea I started with is below: 

 

Building an Array to Virtual Channel.PNG 

I soon realised that my current design is really only useful for selecting one pinout using the Enum control from the front panel, so that's not really what I'm looking to achieve. I wish to pass several pinouts to one DAQmx Create Virtual Channel and have the capability of switching those pinouts on and off to a specific binary configuration.

 

I'm already aware of the NI-DAQmx syntax for specifying physical channel strings (i.e. Dev0/ai0:4) but this would provide limited capability in terms of changing the strings during the simulation run.

 

In NI MAX, the test panel option provides Boolean step-ups for the digital output configuration:

00000000

00000001

00000010

00000011

00000100

...... etc etc

 

So surely this could also be achieved with analogue outputs in LabVIEW too, and it would save a considerable amount of block diagram space, so this type of formatting would be a win-win really. I just want to put the hardware through as many simulated configurations as possible using the AO pins and then record the configurations into an Excel spreadsheet (.xls) file. I'm not sure if my idea is something that is possible in LabVIEW, with the goal of conserving block diagram space and manipulating several pinouts going into the node of one DAQmx Create Virtual Channel.
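The MAX-style binary stepping shown above can be generated programmatically rather than wired by hand. A Python sketch of the counting idea (names here are illustrative; in LabVIEW the equivalent would be a loop counter fed into Number To Boolean Array):

```python
def pin_configurations(n_pins):
    """Yield all 2**n_pins on/off patterns as lists of booleans (index 0 = pin 0)."""
    for value in range(2 ** n_pins):
        yield [(value >> pin) & 1 == 1 for pin in range(n_pins)]

# For 8 pins this steps through 00000000, 00000001, 00000010, ... as in the MAX test panel.
configs = list(pin_configurations(3))
print(len(configs))  # 8
print(configs[5])    # [True, False, True] -> binary 101
```

Driving the channel states from a counter like this means the set of pinouts never has to be spelled out on the block diagram.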


Would appreciate your thoughts and insights with this. 

 

Happy to answer any further questions if I wasn't clear on anything. 

 

Many Thanks

 

Virtualization PCI-6025E: not responding to the selected base address error


Hi all,

 

I know that there are already a few topics about running DAQ cards in virtual machines (e.g. Virtual machines and NI hardware - http://forums.ni.com/t5/LabVIEW/Virtual-machines-and-NI-hardware/td-p/1609208 or NI PCI-6024E and Virtualization - http://forums.ni.com/ni/board/crawl_message?board.id=250&message.id=90353 ) but I still want to create a new thread with a bit more specific question.

 

I managed to make PCI passthrough for PCI-6025E card from Linux environment host to Windows 2000 guest. I tried this with both KVM and Xen (I can provide more details if needed, but virtualization in the both cases was done more or less accordingly to documentation). The problem described below refers to the both virtualization schemes.

 

The guest OS initially detects the card as an unknown PCI device. Installing the NI 7.4.4 Legacy driver allows the device to be recognized as Data Acquisition Device / PCI-6028E in Windows Device Manager. The card can also be seen from Measurement and Automation Explorer as PCI-6025E (Device 1). But here the first strange thing happens: the serial number value is 0xFFFFFFFFFF. Maybe this is OK, but in my opinion it already suggests that something is wrong. (I don't remember how it was on the real system, and unfortunately I don't have the possibility to check it now; all the virtualization is being done because my old control computer died and I cannot replace it with another one that would allow installing Windows 2000.)

 

Running the Test Panels gives an error: "The device is not responding to the selected base address". After ignoring it, the Test Panel opens; however, Strip Chart data mode shows a fatal error (-10845), "overFlowError". The description of the error says "Because of system and/or bus-bandwidth limitations, the driver could not read data from the device fast enough to keep up with the device throughput; the onboard device memory reported an overflow error", but that does not help to debug or solve it. One-shot and continuous data modes work without any errors; however, the software that should use this card (a proprietary binary that came with some equipment) also cannot use it and complains that the card or driver does not respond.

 

My question is: what can cause the device not to respond to the selected base address? How can this be debugged/solved?

 

I'll really appreciate any ideas or suggestions. Thank you in advance

Wiring transducer to 9203


I need to wire two transducers for temperature and humidity to a unit that will transmit these measurements to a server. I was looking at the NI-9203 and it appears to be the right card for the job. I am planning to use this card with a cRIO-9063.

 

Looking at the NI-9203 datasheet, I am slightly confused by the wiring diagram. What is this dotted line for? Does it represent a connection? If so, I don't see why a transducer would be directly connected to both positive and negative on the voltage source with no measurement point between the two. It would make more sense if the line were omitted. This is coming from someone who doesn't have an EE background, so I may be misunderstanding the dotted line completely.

dottedline.PNG

 

My second question is, should I power the two transducers and the cRIO unit from the same power supply? Or should I have a PSU for the cRIO and one for the two transducers?

 

Compact DAQ: why does the NI-9923 not show as a configuration option for my NI 9207 in NI MAX?


Using NI MAX, DAQ 16.0.1.

Network device cDAQ-9184: NI 9207 (DSUB) with terminal block NI-9923.

 

MAX does not recognize the terminal block as an option for this module.

Module is shown in Max.

 

Have not found any similar problems posted.

Anyone else having this problem, and a solution?


Waiting before analog input


Hello, I have an NI 9263 that outputs an analog signal (long story short) to a sample, and an NI 9401 that inputs the response of the sample. I need to wait 10 ms (with precision) after the write completes before acquiring the data. I don't know how to do this.

 

I present the two unsuccessful trials:

 

1- In "Wait_before_reading_instantaneous" I simply wire the error of the stop task of the write channel to the read DAQmx, which forces the analog input to wait for the analog output. But I don't know how to control the time.

 

2- In "Wait_before_reading_10milisecond" I wait 10 ms in a flat sequence frame. However, the real time that passes from frame to frame in LabVIEW is much larger than 10 ms: I have an uncertainty of ±20 ms and a minimum threshold of approx. 150 ms (checked with an oscilloscope).

 

I think the solution is close to approach 1. Thanks in advance for the help.

 

 

How to remove ghosting totally? NI USB-6363

$
0
0

Thank you in advance. Right now I am using an NI USB-6363 (BNC) to measure three-phase voltage signals, and I am running into the ghosting effect. I already read your page about the ghosting effect. I wonder how to decide whether I have removed the ghosting effect or not. Even a voltage error of 1 mV will lead to a wrong result. When I use AI0 alone (with a 5 V signal connected), it gives me 5.005 V; when I use AI0 (5 V signal) and AI1 (5 V signal) at the same time, it gives me 5.001 V. If 5.005 V is correct, does that mean the ghosting effect still exists?

Data Acquisition Using DAQ PCI 6221 and LabVIEW 8.5


Hello everyone,

 

I have a very specific question regarding the data acquisition with a PCI 6221 multifunction card. In order to understand the question, I will first give a short outline of what I did so far:

 

I am trying to perform a continuous data acquisition (analog input) task. After initializing the virtual channels, the task is started and the DAQmx Read (Analog 1D Wfm, NChan, NSamp) VI is continuously called in a while loop. To my understanding, this means that the A/D converter is continuously converting the data, while the PC collects the data from the buffer only from time to time, when DAQmx Read is called. The frequency at which DAQmx Read is called can be defined by the "number of samples per channel", which I usually set to 1/10 of the total data acquisition rate in order to avoid overwrite errors. Thus, DAQmx Read is called at a frequency of 10 Hz.

 

The next steps are to display and save the data. I created a producer-consumer architecture using queues (producer: loop with DAQmx Read; consumer 1: putting data into an array and displaying it; consumer 2: saving data to the HDD). The array of waveforms from DAQmx Read.vi is enqueued in the producer loop after executing. This array is then sent to the consumers. As each waveform consists of many data points, I use Mean.vi [NI_AALBase.lvlib] to reduce the noise and the number of data points. For consumer 1, the display, the Mean function just takes the mean of all data arriving in the queue, meaning that the data shown on the display updates at the same rate as DAQmx Read.vi produces data. This also implies that the time separation between the data points on the display is 100 ms for the above-mentioned readout rate.

 

However, as I sometimes need higher time resolution than 100 ms, the second consumer works differently: the data arrays arriving through the queue are divided into chunks in order to obtain a higher time resolution than the aforementioned 100 ms. For example, let's say that each waveform contains 2000 data points (i.e. voltages), which were recorded using a sampling rate of 20 kHz. This means that the waveforms are separated by 100 ms, whereas the single data points within a waveform are separated by 50 µs. If a time resolution of 1 ms is desired, consumer 2 takes the waveform and divides the Y-component into 100 packages of 20 y-values, which are then averaged (also see the attached image). This surely decreases the signal-to-noise ratio, as only 20 instead of 2000 data points are used for averaging.
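The chunk-and-average scheme just described can be sketched in a few lines. This Python version only illustrates the arithmetic (2000 points reduced to 100 averages of 20 each), not the LabVIEW implementation:

```python
import statistics

def chunk_means(samples, chunk_size):
    """Average consecutive chunks: 2000 points / 20 per chunk -> 100 averaged points."""
    return [statistics.fmean(samples[i:i + chunk_size])
            for i in range(0, len(samples), chunk_size)]

wave = list(range(2000))          # stand-in for one 2000-point waveform
reduced = chunk_means(wave, 20)   # 100 packages of 20 samples each
print(len(reduced))  # 100
print(reduced[0])    # 9.5 (mean of 0..19)
```

Each averaged point stands 1 ms apart when the 2000 input points span 100 ms, matching the time-resolution target in the example.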

 

But here comes the problem: when I need a data acquisition rate of 20 Hz for the evaluation (that is, the rate of the data written to the HDD), I have two options in my current VI: either I decrease the number of samples for DAQmx Read.vi [e.g. sampling rate 10 kHz, number of samples 500] and tell consumer 2 to average over the whole package of 500 points arriving in the queue, OR I still use a number of samples of 1000, but tell consumer 2 to divide the package into two parts and average over each part respectively. I checked both, and the signal-to-noise ratio is significantly better when I use a higher readout rate (calling DAQmx Read more often) instead of subdividing the data packages from DAQmx Read. Note that this is at the same sampling rate, implying that in the end the number of data points used for averaging is the same.

Is this expected behaviour? Is there something wrong with the data averaging I described? I can surely provide a more complete example if necessary.

 

Thank you in advance and best wishes,

Phage

 

 

 

 

Attempting to Recompile Kernel with DAQ Software


Sorry if this is the wrong place to ask this, but I don't see anywhere that would be more appropriate. 

 

I'm running NI-DAQmx Base 15.0.0 on Scientific Linux 6.X with kernel version 3.0.80-rt108. However, I discovered that we left out a couple of kernel features when compiling the kernel the first time, so I want to reconfigure and rebuild my kernel.

 

However, when I attempt to rebuild and install my kernel, the last step (make install) breaks with the following lines:

 

[root@<machine name> linux-3.0.80]# make install

sh /root/linux_source/linux-3.0.80/arch/x86/boot/install.sh 3.0.80-rt108-7-28-27 arch/x86/boot/bzImage \

System.map "/boot"

ERROR: modinfo: could not find module nimxdfk

ERROR: modinfo: could not find module nipxirmk

ERROR: modinfo: could not find module nidimk

ERROR: modinfo: could not find module nimdbgk

ERROR: modinfo: could not find module niorbk

ERROR: modinfo: could not find module nipalk

ERROR: modinfo: could not find module nikal

 

It appears that when I rebuilt the kernel and modules, the NI modules were not included. Does anyone know how to add them to the path so they get installed by "make modules_install"? Or would uninstalling the drivers and reinstalling after I upgrade the kernel be best?

 

Thanks,

James

Can I use analog output pins from my DAQ (PXI-6251) that is controlling the SCXI chassis


Hello all,

 

As the title states, I am wondering if I can use the analog output pins from the DAQ card (PXI-6251) when it is controlling the SCXI chassis.

 

From my understanding, the DAQ will multiplex all of the SCXI cards through AI0, leaving the rest of the channels free (AI1-7 and AO0-1). Therefore, I theoretically should be able to access those AO pins on the DAQ card.

 

As some background, the current test setup doesn't require me to use any of the SCXI cards, but it's going to be a pain to rewire my current system. I've inherited a tower that doesn't have the best cable management, and everything looks like a spaghetti nightmare.

 

Thanks,

Matt
