Search Results

Search found 7827 results on 314 pages for 'cuda dev'.

Page 3/314 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • [Not in Vermont] IT Jobs: Sharepoint/ASP.NET Dev + Winforms C# Dev in Western Mass

    Two .NET jobs in Western Mass from a recruiter; contact info below. Requirement #1: Our client is looking for the best engineers in the world, and then we give them the opportunity to excel. Our light, Scrum-based process keeps you focused on delivering functionality that our customers need. We try to do things right (unit tests, continuous builds, bug tracking, etc.) and we're looking for others who work this way too. Primary Responsibilities: Develop SharePoint applications in ASP.NET with a heavy...

    Read the article

  • How to upgrade boost lib using apt-get?

    - by sam
    I use Ubuntu 11.04. My Boost version:

        sam@sam:~/code/ros/pcl$ apt-cache showpkg libboost-all-dev Package: libboost-all-dev Versions: 1.42.0.1ubuntu1 (/var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/tw.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages MD5: 72efad05a3c79394c125b79e1d4eb3a7 Reverse Depends: libvtk5-dev,libboost-all-dev libfeel++-dev,libboost-all-dev Dependencies: 1.42.0.1ubuntu1 - libboost-dev (0 (null)) libboost-date-time-dev (0 (null)) libboost-filesystem-dev (0 (null)) libboost-graph-dev (0 (null)) libboost-iostreams-dev (0 (null)) libboost-math-dev (0 (null)) libboost-program-options-dev (0 (null)) libboost-python-dev (0 (null)) libboost-regex-dev (0 (null)) libboost-serialization-dev (0 (null)) libboost-signals-dev (0 (null)) libboost-system-dev (0 (null)) libboost-test-dev (0 (null)) libboost-thread-dev (0 (null)) libboost-wave-dev (0 (null)) Provides: 1.42.0.1ubuntu1 - Reverse Provides: sam@sam:~/code/ros/pcl$

    How can I upgrade Boost to 1.44+ using the apt tools? Thank you~

    When I run apt-add-repository, it shows:

        sam@sam:~/code/ros/pcl$ sudo apt-add-repository ppa:timklingt/ppa Error reading https://launchpad.net/api/1.0/~timklingt/+archive/ppa: GnuTLS recv error (-9): A TLS packet with unexpected length was received. sam@sam:~/code/ros/pcl$

    How can I fix it? Thank you~

    I tried to install libboost1.46-all-dev:

        sam@sam:~/code/ros/pcl$ sudo apt-get install libboost1.46-all-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libboost1.46-all-dev : Depends: libboost1.46-dev but it is not going to be installed Depends: libboost-date-time1.46-dev but it is not going to be installed Depends: libboost-filesystem1.46-dev but it is not going to be installed Depends: libboost-graph1.46-dev but it is not going to be installed Depends: libboost-iostreams1.46-dev but it is not going to be installed Depends: libboost-math1.46-dev but it is not going to be installed Depends: libboost-program-options1.46-dev but it is not going to be installed Depends: libboost-python1.46-dev but it is not going to be installed Depends: libboost-regex1.46-dev but it is not going to be installed Depends: libboost-serialization1.46-dev but it is not going to be installed Depends: libboost-signals1.46-dev but it is not going to be installed Depends: libboost-system1.46-dev but it is not going to be installed Depends: libboost-test1.46-dev but it is not going to be installed Depends: libboost-thread1.46-dev but it is not going to be installed Depends: libboost-wave1.46-dev but it is not going to be installed E: Broken packages sam@sam:~/code/ros/pcl$

    What do these errors mean, and how can I solve them? Thank you~

    Read the article

  • Ubuntu 11.10: installing gnuplot 4.4.3-0ubuntu2 package dependencies

    - by HuangheWoo
    Before running sudo apt-get install gnuplot, I ran sudo apt-get build-dep gnuplot to resolve the package dependencies:

        ~$ sudo apt-get build-dep gnuplot Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'liblua5.1-0-dev' instead of 'liblua5.1-dev' The following packages will be REMOVED: libgd2-xpm ubuntu-desktop The following NEW packages will be installed: debhelper diffstat html2text intltool-debian libbsd-dev libcairo-script-interpreter2 libcairo2-dev libedit-dev libexpat1-dev libfontconfig1-dev libfreetype6-dev libgd2-noxpm libgd2-noxpm-dev libglib2.0-dev libjpeg62-dev liblua5.1-0-dev libncurses5-dev libpango1.0-dev libpixman-1-dev libpng12-dev libreadline-dev libreadline6-dev libtinfo-dev libwxbase2.8-dev libwxgtk2.8-dev libxcb-render0-dev libxcb-shm0-dev libxft-dev libxrender-dev po-debconf quilt texinfo wx2.8-headers x11proto-render-dev 0 upgraded, 34 newly installed, 2 to remove and 0 not upgraded. Need to get 9,100 kB of archives. After this operation, 37.8 MB of additional disk space will be used.

    It says that "ubuntu-desktop" will be removed, but "ubuntu-desktop" is important. What should I do?

    Read the article

  • Creating .lib files in CUDA Toolkit 5

    - by user1683586
    I am taking my first faltering steps with CUDA Toolkit 5.0 RC using VS2010. Separate compilation has me confused. I tried to set up a project as a static library (.lib), but when I try to build it, it does not create a device-link.obj and I don't understand why. For instance, there are two files: a caller that uses a function f,

        #include "thrust\host_vector.h"
        #include "thrust\device_vector.h"
        using namespace thrust::placeholders;

        extern __device__ double f(double x);

        struct f_func {
            __device__ double operator()(const double& x) const { return f(x); }
        };

        void test(const int len, double * data, double * res)
        {
            thrust::device_vector<double> d_data(data, data + len);
            thrust::transform(d_data.begin(), d_data.end(), d_data.begin(), f_func());
            thrust::copy(d_data.begin(), d_data.end(), res);
        }

    and a library file that defines f:

        __device__ double f(double x) { return x+2.0; }

    If I set the option "Generate Relocatable Device Code" to No, the first file will not compile due to an unresolved extern function f. If I set it to -rdc, it will compile, but it does not produce a device-link.obj file and so the linker fails. If I put the definition of f into the first file and delete the second, it builds successfully, but then it isn't separate compilation anymore. How can I build a static library like this with separate source files?

    Update: I called the first (caller) file "caller.cu" and the second "libfn.cu". The compiler lines that VS2010 outputs (which I don't fully understand) are, for caller:

        nvcc.exe -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" -clean

    and the same for libfn, then:

        nvcc.exe -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2010 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -rdc=true -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu"

    and again for libfn.
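
    For reference, a command-line sketch of the same separate-compilation workflow with the CUDA 5.0 toolchain (file names follow the question; the flags are nvcc's documented -dc/-lib/-dlink options, and the exact Visual Studio property mapping may differ): relocatable device code is compiled with -dc, archived with -lib, and the project that produces the final executable must perform an explicit device-link step with -dlink before the host link, since a static library on its own never gets device-linked.

        nvcc -arch=sm_20 -dc caller.cu -o caller.obj
        nvcc -arch=sm_20 -dc libfn.cu -o libfn.obj
        nvcc -arch=sm_20 -lib caller.obj libfn.obj -o mylib.lib
        nvcc -arch=sm_20 -dlink main.obj mylib.lib -o device_link.obj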

    Read the article

  • How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics?

    - by PeterDC
    Background: I'm a 3D artist (as a hobby) and have recently started using Ubuntu 12.04 LTS as a dual-boot with Windows 7. It's running on a fairly new 64-bit Toshiba laptop with an nVidia GeForce GT 540M GPU (graphics card). It also, however, has Intel integrated graphics (which I suspect Ubuntu has been using). So, when I render my 3D scenes to images on Windows, I am able to choose between using my CPU or my nVidia GPU (faster). From the 3D application, I can set the GPU to use either CUDA or OpenCL. In Ubuntu, there's no GPU option. After doing (too much?) research on the issues with Linux and the nVidia Optimus technology, I am slightly more enlightened, but a lot more confused. I don't care one bit about the Optimus technology, as battery life is not by any means an issue for me. Here's my question: What can I do to be able to use CUDA-utilizing programs (such as Blender) on my nVidia GPU in Ubuntu? Will I need nVidia drivers? (I have heard they don't play nicely with Optimus setups on Linux.) Is there at least a way to use OpenCL on my GPU in Ubuntu?

    Read the article

  • Unavailable repository

    - by katrina
    I am new to Ubuntu and keep butting up against errors, such as this: Package libpng12-dev is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source However the following packages replace it: libpng12-0 E: Unable to locate package subversion E: Package 'git-core' has no installation candidate E: Package 'build-essential' has no installation candidate E: Package 'autoconf' has no installation candidate E: Package 'libtool' has no installation candidate E: Unable to locate package libxml2-dev E: Unable to locate package libgeos-dev E: Unable to locate package libpq-dev E: Unable to locate package libbz2-dev E: Package 'proj' has no installation candidate E: Unable to locate package munin-node E: Unable to locate package munin E: Unable to locate package libprotobuf-c0-dev E: Unable to locate package protobuf-c-compiler E: Unable to locate package libfreetype6-dev E: Package 'libpng12-dev' has no installation candidate E: Unable to locate package libtiff4-dev E: Unable to locate package libicu-dev E: Unable to locate package libboost-all-dev E: Unable to locate package libgdal-dev E: Unable to locate package libcairo-dev E: Unable to locate package libcairomm-1.0-dev E: Couldn't find any package by regex 'libcairomm-1.0-dev' E: Unable to locate package apache2 E: Unable to locate package apache2-dev E: Unable to locate package libagg-dev when I want to do this: sudo apt-get install subversion git-core tar unzip wget bzip2 build-essential autoconf libtool libxml2-dev libgeos-dev libpq-dev libbz2-dev proj munin-node munin libprotobuf-c0-dev protobuf-c-compiler libfreetype6-dev libpng12-dev libtiff4-dev libicu-dev libboost-all-dev libgdal-dev libcairo-dev libcairomm-1.0-dev apache2 apache2-dev libagg-dev. Any help or advice would be greatly appreciated. Or referrals to other questions...

    Read the article

  • How to install Grub2 under several common scenarios

    - by Huckle
    I feel the community has long needed a clean guide on how to install Grub2 under a few extremely common scenarios. I will accept the answer as solved when it has one section per scenario and assumes nothing other than what is specified. Please add to the existing answer, wiki style, keeping to the original assumptions.

    Rules: 1. You cannot, at any point in the answer, invoke Ubiquity (the Ubuntu installer). 2. I strongly recommend not using any automatic boot-repair tools, as they're not very educational.

    Scenario 1: Non-booting Linux OS, no boot partition, fix from Live CD. Setup: /dev/sda1 is formatted ext*; /dev/sda2 is formatted linux_swap; /dev/sda1 doesn't boot because the MBR is scrambled and /boot/* was erased. Explain: how to boot to a Live CD / USB and restore Grub2 to the MBR and /boot of /dev/sda1.

    Scenario 2: Non-booting Linux OS, boot partition, fix from Live CD. Setup: /dev/sda1 is formatted fat; /dev/sda2 is formatted ext*; /dev/sda3 is formatted linux_swap; /dev/sda2 doesn't boot because the MBR is scrambled and /dev/sda1 was formatted. Explain: how to boot to a Live CD / USB, restore Grub2 to the MBR and /dev/sda1, and then update the fstab on /dev/sda2.

    Scenario 3: Install on to thumb drive, booting various OSes, from a Linux OS. Setup: /dev/sdb is removable media; /dev/sdb1 is formatted fat; /dev/sdb2 is formatted ext*; /dev/sdb3 is formatted fat; the MBR of /dev/sdb is otherwise not initialized; you are executing from a Linux-based OS installed on /dev/sda. Explain: how to install Grub2 on to /dev/sdb1, mark /dev/sdb1 active, and be able to choose between /dev/sdb2 and /dev/sdb3 on boot.

    Scenario 4 (bonus): Install on to thumb drive, booting an ISO, from a Linux OS. Setup: /dev/sdb is removable media; /dev/sdb1 is formatted fat; /dev/sdb1 contains /iso/live.iso; /dev/sdb2 is formatted ext*; /dev/sdb3 is formatted fat; the MBR of /dev/sdb is otherwise not initialized; you are executing from a Linux-based OS installed on /dev/sda. Explain: how to install Grub2 on to /dev/sdb1, mark /dev/sdb1 active, and be able to choose between /dev/sdb2, /dev/sdb3, and /iso/live.iso on boot.

    Read the article

  • cmake, gcc, cuda and -m32 wtf

    - by Nils
    Hi all, I figured out that CUDA does not work in 64-bit mode on my Mac (or I couldn't get it running so far). Therefore I decided to compile everything for 32-bit. I use CMake 2.8 and added the following options:

        add_definitions(-Wall -m32)
        set(CUDA_64_BIT_DEVICE_CODE OFF)
        set(CMAKE_MODULE_LINKER_FLAGS -m32)

    However, when it tries to link, it does something like this:

        /usr/bin/c++ -mmacosx-version-min=10.6 -Wl,-search_paths_first -headerpad_max_install_names CMakeFiles/SimpleTestsCUDA.dir/BlockMatrix.cpp.o CMakeFiles/SimpleTestsCUDA.dir/Matrix.cpp.o ./SimpleTestsCUDA_generated_SimpleTests.cu.o ./SimpleTestsCUDA_generated_BlockMatrix.cu.o -o SimpleTestsCUDA /usr/local/cuda/lib/libcudart.dylib /usr/local/cuda/lib/libcuda.dylib

    which fails with a lot of "file is not of required architecture" warnings from ld. If I manually add -m32 to the command above, it works. However, I have no idea how to teach CMake to add -m32 to every gcc (or ld) invocation. So far it does it for nvcc and gcc, but not for linking.

    Read the article

  • CUDA program on VMware

    - by scatman
    I wrote a CUDA program and I am testing it on Ubuntu running as a virtual machine. The reason for this is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need a Linux operating system for testing. My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code be faster if I run it under my primary operating system than if I run it in a virtual machine?

    Read the article

  • Is /dev in Linux virtual?

    - by user973917
    Today at work a client ran rm -rf /dev and ended up deleting two files in /dev/shm, which stopped his site from working. From what I had learned previously, /dev is not virtual, but a fellow technician suggested rebooting the server because /dev is virtual like /proc. Sure enough, I rebooted the server and the files that the client had removed were back. So, my question is: is /dev virtual? Is it virtual in the same way /proc is? Is there more documentation on this? How can I restore the /dev files without a server reboot?

    Read the article

  • CUDA linking error - Visual Express 2008 - nvcc fatal due to (null) configuration file

    - by Josh
    Hi, I've been searching extensively for a possible solution to my error for the past two weeks. I have successfully installed the CUDA 64-bit compiler (tools) and SDK, as well as the 64-bit version of Visual Studio Express 2008 and the Windows 7 SDK with Framework 3.5. I'm using Windows XP 64-bit. I have confirmed that VSE is able to compile in 64-bit (since Visual Express does not inherently include the 64-bit packages), as I have all of the 64-bit options available to me after using the steps on the following website: http://jenshuebel.wordpress.com/2009/02/12/visual-c-2008-express-edition-and-64-bit-targets/ I have confirmed the 64-bit compile ability since "x64" is available from the pull-down menu under "Tools-Options-VC++ Directories" and compiling in 64-bit does not result in the entire project being "skipped". I have included all the needed directories for the 64-bit CUDA tools, the 64-bit SDK and Visual Express (\VC\bin\amd64). Here's the error message I receive when trying to compile in 64-bit:

        1>------ Build started: Project: New, Configuration: Release x64 ------
        1>Compiling with CUDA Build Rule...
        1>"C:\CUDA\bin64\nvcc.exe" -arch sm_10 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -maxrregcount=32 --compile -o "x64\Release\template.cu.obj" "c:\Documents and Settings\All Users\Application Data\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\src\CUDA_Walkthrough_DeviceKernels\template.cu"
        1>nvcc fatal : Visual Studio configuration file '(null)' could not be found for installation at 'C:/Program Files (x86)/Microsoft Visual Studio 9.0/VC/bin/../..'
        1>Linking...
        1>LINK : fatal error LNK1181: cannot open input file '.\x64\Release\template.cu.obj'
        1>Build log was saved at "file://c:\Documents and Settings\Administrator\My Documents\Visual Studio 2008\Projects\New\New\x64\Release\BuildLog.htm"
        1>New - 1 error(s), 0 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Here's the simple code I'm trying to compile/run in 64-bit:

        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <math.h>
        #include <cuda.h>

        void mypause ()
        {
            printf ( "Press [Enter] to continue . . ." );
            fflush ( stdout );
            getchar();
        }

        __global__ void VecAdd1_Kernel(float* A, float* B, float* C, int N)
        {
            int i = blockDim.x*blockIdx.x+threadIdx.x;
            if (i<N) C[i] = A[i] + B[i]; //result should be a 16x1 array of 250s
        }

        __global__ void VecAdd2_Kernel(float* B, float* C, int N)
        {
            int i = blockDim.x*blockIdx.x+threadIdx.x;
            if (i<N) C[i] = C[i] + B[i]; //result should be a 16x1 array of 400s
        }

        int main()
        {
            int N = 16;
            float A[16]; float B[16];
            size_t size = N*sizeof(float);
            for(int i=0; i<N; i++) { A[i] = 100.0; B[i] = 150.0; }

            // Allocate input vectors h_A and h_B in host memory
            float* h_A = (float*)malloc(size);
            float* h_B = (float*)malloc(size);
            float* h_C = (float*)malloc(size);

            //Initialize Input Vectors
            memset(h_A,0,size); memset(h_B,0,size);
            h_A = A; h_B = B;
            printf("SUM = %f\n",A[1]+B[1]); //simple check for initialization

            //Allocate vectors in device memory
            float* d_A; cudaMalloc((void**)&d_A,size);
            float* d_B; cudaMalloc((void**)&d_B,size);
            float* d_C; cudaMalloc((void**)&d_C,size);

            //Copy vectors from host memory to device memory
            cudaMemcpy(d_A,h_A,size,cudaMemcpyHostToDevice);
            cudaMemcpy(d_B,h_B,size,cudaMemcpyHostToDevice);

            //Invoke kernel
            int threadsPerBlock = 256;
            int blocksPerGrid = (N+threadsPerBlock-1)/threadsPerBlock;
            VecAdd1(blocksPerGrid, threadsPerBlock,d_A,d_B,d_C,N);
            VecAdd2(blocksPerGrid, threadsPerBlock,d_B,d_C,N);

            //Copy results from device memory to host memory
            //h_C contains the result in host memory
            cudaMemcpy(h_C,d_C,size,cudaMemcpyDeviceToHost);
            for(int i=0; i<N; i++) //output result from the kernel "VecAdd"
            {
                printf("%f ", h_C[i] );
                printf("\n");
            }
            printf("\n");

            cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
            free(h_A); free(h_B); free(h_C);
            mypause();
            return 0;
        }

    Read the article

  • Optimize CUDA with Thrust in a loop

    - by macs
    Given the following piece of code, which generates a kind of code dictionary with CUDA using Thrust (the C++ template library for CUDA):

        thrust::device_vector<float> dCodes(codes->begin(), codes->end());
        thrust::device_vector<int> dCounts(counts->begin(), counts->end());
        thrust::device_vector<int> newCounts(counts->size());

        for (int i = 0; i < dCodes.size(); i++) {
            float code = dCodes[i];
            int count = thrust::count(dCodes.begin(), dCodes.end(), code);
            newCounts[i] = dCounts[i] + count;

            //Had we already a count in one of the last runs?
            if (dCounts[i] > 0) {
                newCounts[i]--;
            }

            //Remove
            thrust::detail::normal_iterator<thrust::device_ptr<float> > newEnd =
                thrust::remove(dCodes.begin()+i+1, dCodes.end(), code);
            int dist = thrust::distance(dCodes.begin(), newEnd);
            dCodes.resize(dist);
            newCounts.resize(dist);
        }

        codes->resize(dCodes.size());
        counts->resize(newCounts.size());
        thrust::copy(dCodes.begin(), dCodes.end(), codes->begin());
        thrust::copy(newCounts.begin(), newCounts.end(), counts->begin());

    The problem is that I've noticed multiple copies of 4 bytes in the CUDA Visual Profiler. In my opinion these are generated by the loop counter i; by float code, int count and dist; and by every access to i and the variables noted above. This seems to slow everything down (sequential copying of 4 bytes is no fun...). So how do I tell Thrust that these variables should be handled on the device? Or are they already? Using thrust::device_ptr does not seem sufficient to me, because I'm not sure whether the surrounding for loop runs on the host or on the device (which could also be another reason for the slowness).
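
    One device-side alternative worth noting here (a sketch only, simplified to the counting part, with illustrative names; the existing dCounts would still have to be folded in afterwards, e.g. with a follow-up thrust::transform): sort the codes, then call thrust::reduce_by_key, which produces one (code, count) pair per unique code in a single pass with no per-element host/device round trips.

        #include <thrust/device_vector.h>
        #include <thrust/sort.h>
        #include <thrust/reduce.h>
        #include <thrust/pair.h>
        #include <thrust/iterator/constant_iterator.h>

        // Collapse equal codes into (uniqueCode, count) pairs entirely on the device.
        void buildDictionary(thrust::device_vector<float>& dCodes,
                             thrust::device_vector<float>& uniqueCodes,
                             thrust::device_vector<int>& counts)
        {
            thrust::sort(dCodes.begin(), dCodes.end());
            uniqueCodes.resize(dCodes.size());
            counts.resize(dCodes.size());

            // reduce_by_key sums a constant 1 over every run of equal keys
            thrust::pair<thrust::device_vector<float>::iterator,
                         thrust::device_vector<int>::iterator> newEnd =
                thrust::reduce_by_key(dCodes.begin(), dCodes.end(),
                                      thrust::constant_iterator<int>(1),
                                      uniqueCodes.begin(), counts.begin());

            uniqueCodes.resize(newEnd.first - uniqueCodes.begin());
            counts.resize(newEnd.second - counts.begin());
        }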

    Read the article

  • GNU/Linux: SAS-disk detected as /dev/sg7 - not as /dev/sdb

    - by Ole Tange
    I have just installed a SAS disk into a Debian server. It was detected correctly and everything was fine. Then I moved the SAS disk to a different Debian server, the same hardware model running the same version of Debian, but here the SAS disk is detected as /dev/sg7 and not /dev/sdb. smartctl -a /dev/sg7 works fine, but fdisk and cat hang. I tried putting the SAS disk in another slot: same problem. How can I force the SAS disk to be detected as /dev/sdb?

        # uname -a
        Linux maxwell 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux

    Read the article

  • Best approach for GPGPU/CUDA/OpenCL in Java?

    - by Frederik
    General-purpose computing on graphics processing units (GPGPU) is a very attractive concept for harnessing the power of the GPU for any kind of computing. I'd love to use GPGPU for image processing, particles, and fast geometric operations. Right now, it seems the two contenders in this space are CUDA and OpenCL. I'd like to know: Is OpenCL usable yet from Java on Windows/Mac? What libraries or ways are there to interface with OpenCL/CUDA? Is using JNA directly an option? Am I forgetting something? Any real-world experience/examples/war stories are appreciated.

    Read the article

  • Simultaneous launch of Multiple Kernels using CUDA for a GPU

    - by cudadev
    Is it possible to launch two kernels that do independent tasks simultaneously? For example, if I have this CUDA code:

        // host and device initialization
        .......
        .......

        // launch kernel1
        myMethod1 <<<....>>> (params);

        // launch kernel2
        myMethod2 <<<.....>>> (params);

    Assuming that these kernels are independent, is there a facility to launch them at the same time, allocating a few grids/blocks for each? Does CUDA/OpenCL have this provision?
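
    For context, concurrent kernel execution is exposed through streams: on devices of compute capability 2.0 or later, kernels launched into different non-default streams may overlap if resources allow (on older hardware they serialize). A minimal sketch, with placeholder kernels rather than the myMethod1/myMethod2 from the question:

        #include <cuda_runtime.h>

        __global__ void kernelA(float* x, int n) { int i = blockIdx.x*blockDim.x + threadIdx.x; if (i < n) x[i] += 1.0f; }
        __global__ void kernelB(float* y, int n) { int i = blockIdx.x*blockDim.x + threadIdx.x; if (i < n) y[i] *= 2.0f; }

        int main()
        {
            const int n = 1 << 20;
            float *dx, *dy;
            cudaMalloc(&dx, n * sizeof(float));
            cudaMalloc(&dy, n * sizeof(float));

            cudaStream_t s1, s2;
            cudaStreamCreate(&s1);
            cudaStreamCreate(&s2);

            // Each launch goes to its own stream (4th launch parameter),
            // so the two independent kernels are allowed to run concurrently.
            kernelA<<<n/256, 256, 0, s1>>>(dx, n);
            kernelB<<<n/256, 256, 0, s2>>>(dy, n);

            cudaDeviceSynchronize();

            cudaStreamDestroy(s1); cudaStreamDestroy(s2);
            cudaFree(dx); cudaFree(dy);
            return 0;
        }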

    Read the article

  • CUDA: accumulate data into a large histogram of floats

    - by shoosh
    I'm trying to think of a way to implement the following algorithm using CUDA: working on a large volume of voxels, for each voxel I calculate an index i and a value c. After the calculation I need to perform histogram[i] += c. Here c is a float value and the histogram can have up to 15,000 bins. I'm looking for a way to implement this efficiently using CUDA. The first obvious problem is that with compute capability 1.3, which is what I'm using, I can't even do an atomicAdd() of floats, so how can I accumulate anything reliably? This example by nVidia does something somewhat simpler: the histograms are kept in shared memory (which I can't do due to my histogram's size) and it only accumulates integers. Can this approach be generalized to my case?
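
    One commonly used workaround on compute capability 1.3 (a sketch, not tested against the setup above; kernel and variable names are illustrative) is to emulate a floating-point atomic add with atomicCAS on the value's bit pattern, which that hardware supports for 32-bit words in global memory; from compute capability 2.0 onwards the built-in atomicAdd(float*, float) can be used instead.

        // Emulated float atomic add via 32-bit compare-and-swap (usable on sm_13,
        // where the native float atomicAdd is not available).
        __device__ float atomicAddFloat(float* address, float val)
        {
            int* address_as_int = (int*)address;
            int old = *address_as_int, assumed;
            do {
                assumed = old;
                old = atomicCAS(address_as_int, assumed,
                                __float_as_int(val + __int_as_float(assumed)));
            } while (assumed != old);
            return __int_as_float(old);
        }

        // One thread per voxel: bin[] holds the index i and value[] the contribution c.
        __global__ void accumulateHistogram(const int* bin, const float* value,
                                            int nVoxels, float* histogram)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < nVoxels)
                atomicAddFloat(&histogram[bin[i]], value[i]);
        }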

    Read the article

  • How can I recover a zip password using CUDA (GPU)?

    - by marc
    How can I recover a zip password on Linux using CUDA (GPU)? For the past two days I have tried using "fcrackzip", but it's too slow. A few months back I saw an application that can use the GPU / CUDA and get a large performance boost in comparison to the CPU. If brute-forcing using CUDA is not possible, please tell me what the best application for performing a dictionary attack is, and where I can find the best (largest) dictionary. Regards

    Read the article

  • How to convert a C++ program that uses CUDA into MEX

    - by Harold Wellington Graves
    For work, I am converting the Image Denoising program that comes with the CUDA SDK into a MATLAB program. As far as I know, I have made all the necessary changes required by MATLAB, but when I try to call mex on it, MATLAB returns a bunch of linkage errors that I have no idea how to fix. If anyone has any suggestions on what I might be doing wrong, I would greatly appreciate it. The command I am giving MATLAB is: mex imageDenoisingGL.cpp -I..\..\common\inc -IC:\CUDA\include -L..\..\common\lib -lglut32 And the output from MATLAB is a bunch of these: imageDenoisingGL.obj : error LNK2019: unresolved external symbol __imp__cutCheckCmdLineFlag@12 referenced in function "void __cdecl __cutilExit(int,char * *)" (?__cutilExit@@YAXHPAPAD@Z) I am running: Windows XP x32 Visual Studio 2005 MATLAB 2007a

    Read the article

  • CUDA & VS2010 problem

    - by Kristian D'Amato
    I have scoured the internets looking for an answer to this one, but couldn't find any. I've installed the CUDA 3.2 SDK (and, just now, CUDA 4.0 RC) and everything seems to work fine after long hours of fooling around with include directories, NSight, and all the rest. Well, except this one thing: it keeps highlighting the <<< >>> operator as a mistake. Only on VS2010--not on VS2008. On VS2010 I also get several warnings of the following sort: C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\xdebug(109): warning C4251: 'std::_String_val<_Ty,_Alloc>::_Alval' : class 'std::_DebugHeapAllocator<_Ty>' needs to have dll-interface to be used by clients of class 'std::_String_val<_Ty,_Alloc>' Anyone know how this can be fixed?

    Read the article

  • /dev/sda1 not a subset of /dev/sda?

    - by Guillaume Brunerie
    Hi, the first entry of my partition table is: $ sudo hexdump -Cv -n 16 -s 446 /dev/sda 000001be 80 01 01 00 83 fe ff ff 3f 00 00 00 81 1c 20 03 |........?..... .| (-Cv describe the output format, -n 16 asks for 16 bytes and -s 446 skips the first 446 bytes) You can see that my first partition is a primary Linux partition and that this partition begin at sector 63 (see for example here for the structure of the partition table). I would then expect that except for the first 63 sectors and the other partitions, /dev/sda1 and /dev/sda are exactly the same. But this is not the case, the sector #2 of /dev/sda1 is not exactly the same as the sector #65 of /dev/sda (but they are very similar, only 16 bytes are different): $ sudo hexdump -Cv -n 512 -s 65b /dev/sda 00008200 00 20 19 00 90 03 64 00 2d 00 05 00 5a 2f 56 00 |. ....d.-...Z/V.| 00008210 b6 b1 16 00 00 00 00 00 02 00 00 00 02 00 00 00 |................| 00008220 00 80 00 00 00 80 00 00 00 20 00 00 d8 38 ee 4c |......... ...8.L| 00008230 9a 01 ef 4c 05 00 24 00 53 ef 01 00 01 00 00 00 |...L..$.S.......| 00008240 59 23 e9 4c 00 4e ed 00 00 00 00 00 01 00 00 00 |Y#.L.N..........| 00008250 00 00 00 00 0b 00 00 00 00 01 00 00 3c 00 00 00 |............<...| 00008260 42 02 00 00 7b 00 00 00 85 23 eb f2 71 67 44 f5 |B...{....#..qgD.| 00008270 bb 8f 6f f2 3a 59 ff 4d 55 62 75 6e 74 75 00 00 |..o.:Y.MUbuntu..| 00008280 00 00 00 00 00 00 00 00 2f 75 62 75 6e 74 75 00 |......../ubuntu.| 00008290 d8 3c df 5d 00 88 ff ff 52 d0 ef 1d 00 00 00 00 |.<.]....R.......| 000082a0 c0 40 51 b6 00 88 ff ff 00 4e c8 bb 00 88 ff ff |[email protected]......| 000082b0 c0 f6 86 b8 00 88 ff ff 30 2e 0d a0 ff ff ff ff |........0.......| 000082c0 38 3d df 5d 00 88 ff ff 00 00 00 00 00 00 fe 03 |8=.]............| 000082d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000082e0 08 00 00 00 00 00 00 00 00 00 00 00 8a 53 d3 0e |.............S..| 000082f0 7c 7a 43 e4 8b fb ca e0 72 b7 fa c8 01 01 00 00 ||zC.....r.......| 00008300 00 00 00 00 00 00 00 00 16 4c 47 4b 0a f3 03 00 |.........LGK....| 00008310 04 00 00 00 00 00 00 00 00 00 00 00 fe 7f 00 00 |................| 00008320 24 b7 0c 00 fe 7f 00 00 01 00 00 00 22 37 0d 00 |$..........."7..| 00008330 ff 7f 00 00 01 00 00 00 23 37 0d 00 00 00 00 00 |........#7......| 00008340 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 08 |................| 00008350 00 00 00 00 00 00 00 00 00 00 00 00 1c 00 1c 00 |................| 00008360 01 00 00 00 e9 7f 00 00 00 00 00 00 00 00 00 00 |................| 00008370 00 00 00 00 04 00 00 00 9f 7d bb 00 00 00 00 00 |.........}......| 00008380 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00008390 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000083a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000083b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000083c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000083d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000083e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000083f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| versus $ sudo hexdump -Cv -n 512 -s 2b /dev/sda1 00000400 00 20 19 00 90 03 64 00 2d 00 05 00 5a 2f 56 00 |. ....d.-...Z/V.| 00000410 b6 b1 16 00 00 00 00 00 02 00 00 00 02 00 00 00 |................| 00000420 00 80 00 00 00 80 00 00 00 20 00 00 df 76 ef 4c |......... 
...v.L| 00000430 df 76 ef 4c 06 00 24 00 53 ef 01 00 01 00 00 00 |.v.L..$.S.......| 00000440 59 23 e9 4c 00 4e ed 00 00 00 00 00 01 00 00 00 |Y#.L.N..........| 00000450 00 00 00 00 0b 00 00 00 00 01 00 00 3c 00 00 00 |............<...| 00000460 46 02 00 00 7b 00 00 00 85 23 eb f2 71 67 44 f5 |F...{....#..qgD.| 00000470 bb 8f 6f f2 3a 59 ff 4d 55 62 75 6e 74 75 00 00 |..o.:Y.MUbuntu..| 00000480 00 00 00 00 00 00 00 00 2f 75 62 75 6e 74 75 00 |......../ubuntu.| 00000490 d8 3c df 5d 00 88 ff ff 52 d0 ef 1d 00 00 00 00 |.<.]....R.......| 000004a0 c0 40 51 b6 00 88 ff ff 00 4e c8 bb 00 88 ff ff |[email protected]......| 000004b0 c0 f6 86 b8 00 88 ff ff 30 2e 0d a0 ff ff ff ff |........0.......| 000004c0 38 3d df 5d 00 88 ff ff 00 00 00 00 00 00 fe 03 |8=.]............| 000004d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000004e0 08 00 00 00 00 00 00 00 00 00 00 00 8a 53 d3 0e |.............S..| 000004f0 7c 7a 43 e4 8b fb ca e0 72 b7 fa c8 01 01 00 00 ||zC.....r.......| 00000500 00 00 00 00 00 00 00 00 16 4c 47 4b 0a f3 03 00 |.........LGK....| 00000510 04 00 00 00 00 00 00 00 00 00 00 00 fe 7f 00 00 |................| 00000520 24 b7 0c 00 fe 7f 00 00 01 00 00 00 22 37 0d 00 |$..........."7..| 00000530 ff 7f 00 00 01 00 00 00 23 37 0d 00 00 00 00 00 |........#7......| 00000540 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 08 |................| 00000550 00 00 00 00 00 00 00 00 00 00 00 00 1c 00 1c 00 |................| 00000560 01 00 00 00 e9 7f 00 00 00 00 00 00 00 00 00 00 |................| 00000570 00 00 00 00 04 00 00 00 a3 7d bb 00 00 00 00 00 |.........}......| 00000580 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000590 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000005a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000005b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000005c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000005d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000005e0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000005f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| For example in the third line, there is a 8.L in the first hexdump and v.L in the second. Why are there differences?

    Read the article

  • Compile latest Blender on Ubuntu 12.04 64-bit?

    - by gabriel
    What I want is to compile the latest Blender from SVN. I am using this guide. My issues are: how can I install it using the final .deb file that is created, and how can I give this package to a PPA? So, when I execute

        sudo apt-get update; sudo apt-get install subversion build-essential gettext \
          libxi-dev libsndfile1-dev \
          libpng12-dev libfftw3-dev \
          libopenexr-dev libopenjpeg-dev \
          libopenal-dev libalut-dev libvorbis-dev \
          libglu1-mesa-dev libsdl1.2-dev libfreetype6-dev \
          libtiff4-dev libavdevice-dev \
          libavformat-dev libavutil-dev libavcodec-dev libjack-dev \
          libswscale-dev libx264-dev libmp3lame-dev python3.2-dev \
          libspnav-dev

    it gives me this:

        The following packages have unmet dependencies:
        libjack-dev : Depends: libjack0 (= 1:0.121.0+svn4538-3ubuntu1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I know that Skype does not allow the installation of those libraries. Thanks

    Read the article

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (therefore shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. The trick in the implementation seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem the calculation for each body-body interaction is exactly the same (page 682): bodyBodyInteraction(float4 bi, float4 bj, float3 ai). It takes two bodies and an acceleration vector. The body vector has four components: its position and its weight. When reading the paper, the calculation is easy to understand. But what if we have a more complex object, with a dynamic data structure? For now just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of attached objects is different in each thread. How could I implement that without having the execution paths of the threads diverge? I'm also looking for literature which explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
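
    One pattern that often comes up for this (a sketch under the assumption that the attached lists can be flattened before the kernel runs; all names are illustrative and the arithmetic only stands in for the paper's bodyBodyInteraction): store the per-object lists in CSR-style form, i.e. one concatenated array of neighbour indices plus an offsets array, so each thread walks a contiguous slice and divergence is limited to the differing list lengths rather than per-element pointer chasing.

        // CSR-style layout: body i's attached objects are
        // nbrList[nbrOffset[i]] .. nbrList[nbrOffset[i+1]-1].
        struct Bodies {
            float4* pos;       // xyz position + weight per body
            int*    nbrOffset; // nBodies+1 offsets into nbrList
            int*    nbrList;   // concatenated neighbour indices
        };

        __global__ void interact(Bodies b, float3* accel, int nBodies)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nBodies) return;

            float4 bi = b.pos[i];
            float3 ai = make_float3(0.0f, 0.0f, 0.0f);

            // Each thread loops over its own (variable-length) list; threads in a
            // warp only diverge at the tail of the longest list in that warp.
            for (int k = b.nbrOffset[i]; k < b.nbrOffset[i + 1]; ++k) {
                float4 bj = b.pos[b.nbrList[k]];
                // simple gravitational-style interaction, standing in for
                // bodyBodyInteraction(bi, bj, ai) from the paper
                float3 r = make_float3(bj.x - bi.x, bj.y - bi.y, bj.z - bi.z);
                float distSqr = r.x*r.x + r.y*r.y + r.z*r.z + 1e-6f;
                float invDist3 = rsqrtf(distSqr * distSqr * distSqr);
                float s = bj.w * invDist3;
                ai.x += r.x * s; ai.y += r.y * s; ai.z += r.z * s;
            }
            accel[i] = ai;
        }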

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >