Search Results

Search found 20388 results on 816 pages for 'nvidia current'.


  • Ubuntu boots into tty1 after driver downgrade

    - by Zach
    Earlier today I downgraded the driver for my Nvidia GT 240 from version 310 to version 304 using Ubuntu's "Additional Drivers" utility. After the install I rebooted, but instead of starting Unity and letting me log in, it booted into tty1 instead. After running the "startx" command I got the message: "nvidia: API mismatch: the nvidia kernel module has version 310.14, but this nvidia driver component has version 304.43." What can I do to solve this? Edit: Solved my own problem by purging all nvidia packages and reinstalling my drivers.
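
    A minimal sketch of the purge-and-reinstall route the poster describes (the exact driver package name is an assumption and depends on the release):

        sudo apt-get purge 'nvidia*'       # remove every installed nvidia package, including the mismatched kernel module
        sudo apt-get install nvidia-304    # reinstall the intended legacy driver
        sudo reboot                        # reboot so the matching kernel module is loaded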

    Read the article

  • Ubuntu 14.04 doesn't detect my discrete GPU

    - by user258887
    I recently purchased a laptop with an Nvidia GeForce 860M and have installed Ubuntu 14.04. On my old laptop I had 12.04, which automatically filled Additional Drivers with Nvidia drivers. But on this computer, the only thing in Additional Drivers is Qualcomm. So I manually installed the Nvidia driver, but X Server Settings doesn't seem to detect any GPU. lspci | grep VGA reports only my integrated Intel GPU, but lspci -v reports many things, including the Nvidia GPU:

        01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 860M] (rev a2)
            Subsystem: ASUSTeK Computer Inc. Device 157d
            Flags: fast devsel, IRQ 16
            Memory at ec000000 (32-bit, non-prefetchable) [size=16M]
            Memory at c0000000 (64-bit, prefetchable) [size=256M]
            Memory at d0000000 (64-bit, prefetchable) [size=32M]
            I/O ports at e000 [size=128]
            Expansion ROM at ed000000 [disabled] [size=512K]
            Capabilities: access denied

    I don't know what any of that means, and I'm not sure if it's supposed to say "access denied". I need my GPU to do CUDA and OpenGL programming. What else can I do to figure out why this isn't working?
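
    This is an Optimus (Intel + NVIDIA) machine, so the discrete GPU shows up as a "3D controller" rather than a VGA controller, and the "access denied" line only means lspci was not run as root. On 14.04 the usual route is the proprietary driver plus nvidia-prime; a hedged sketch (package versions are assumptions):

        sudo apt-get install nvidia-331 nvidia-prime   # proprietary driver plus the PRIME GPU switcher
        sudo prime-select nvidia                       # make the NVIDIA GPU the active renderer
        prime-select query                             # confirm the selection, then log out and back in
        glxinfo | grep "OpenGL renderer"               # needs mesa-utils; should now report the GeForce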

    Read the article

  • Why don't 12.04 Nvidia drivers work on my GeForce 6150 LE?

    - by Chris
    I've got a Slimline s7600n and of course I put Ubuntu 12.04 on it. I'm using the recommended proprietary drivers and they are just not working too well. My monitor is 1366x768, but it will only do 1360x768, and Minecraft barely runs at all. Besides that, everything is fine. But I've got to play Minecraft! Help! Thanks. :) Other specs: the CPU is an AMD Athlon 64 dual-core at 2 GHz, there is 1 GB of RAM, and the video card is 64-bit with 256 MB of memory. Not sure what other info you need.
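
    If the native mode simply isn't being offered, it can be added by hand; a sketch using cvt and xrandr, assuming the output is called VGA-0 (check xrandr for the real name; cvt rounds 1366 up to 1368, and with the legacy NVIDIA driver the ModeLine may have to go into xorg.conf instead):

        cvt 1366 768 60                          # prints a Modeline for the panel's native resolution
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode VGA-0 "1368x768_60.00"
        xrandr --output VGA-0 --mode "1368x768_60.00"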

    Read the article

  • Why is Windows 7's overall performance better than Ubuntu 11.10's?

    - by user37805
    I have an i7 2600 processor, 8 GB of DDR3 RAM and an Nvidia GTX 570, and Ubuntu takes 45-50 seconds to boot and 32-35 seconds to power off, while Windows 7 boots in 20-25 seconds and shuts down in 10 seconds. Both OSes have autologin enabled, and they are in dual boot. Ubuntu is slow even with preload, doesn't show any boot splash after installing the drivers, and didn't recognize my Nvidia graphics card in Jockey GTK; I had to add the x-swat repository and that didn't work either. I installed the proprietary drivers through the terminal (nvidia-common, nvidia-settings) in order to have 3D acceleration, but it doesn't make any difference to the speed. I also have a Pentium 4 PC, and there Ubuntu 11.10 is way faster than Windows 7 or XP, also with an Nvidia graphics card and preload. http://paste.ubuntu.com/924890/ is my boot script; sorry, but some words are in Spanish because my Ubuntu is in Spanish. I am not using Wubi, Ubuntu has its own partition, the install is 64-bit, and Matlab 2011 has very low performance compared to the Windows version.
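
    To see where those 45-50 seconds actually go on 11.10, bootchart draws a per-process timeline of the boot; a sketch:

        sudo apt-get install bootchart pybootchartgui   # boot profiler plus the chart renderer
        sudo reboot                                     # a chart is recorded automatically during the next boot
        ls -t /var/log/bootchart/ | head -1             # open the newest .png and look for the longest bars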

    Read the article

  • HDMI video output not working for external monitor

    - by user291852
    I have installed Ubuntu GNOME 14.04 with gnome-flashback from scratch on my old HP HDX 16 laptop (Core 2 Duo P8600 + Nvidia 9600M GT + 4 GB of RAM) and I have problems with the HDMI output (I use it to extend my desktop onto a Dell U2412M 1920x1200 monitor). Here is a summary of the configurations I have tried. With the open-source nouveau drivers, only the laptop monitor works; there is no signal on the external monitor connected to the HDMI output. However, the output of the xrandr command shows that the HDMI output is connected with the correct resolution of 1920x1200 (I find this really weird). The nouveau drivers with a VGA connection work without problems on the external monitor, but the image is blurry compared to the HDMI connection. With the nvidia drivers (I have tried different versions: nvidia-331-updates and the xorg-edgers versions nvidia-334 and nvidia-337) the HDMI output works, but I have system instabilities, random crashes and display freezes. I can't even get to a terminal with Ctrl+Alt+F1, so I have to shut the laptop down manually with the power button. I would really like to use the HDMI output with the nouveau drivers to avoid the instabilities I experienced with the nvidia drivers, but I can't figure out how to make it work. Alessandro
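
    Since xrandr under nouveau already reports the HDMI output as connected at 1920x1200, it may simply not be enabled; a hedged attempt at switching it on (the output names LVDS-1 and HDMI-1 are assumptions, check xrandr -q for the real ones):

        xrandr -q                                                    # note the exact output names
        xrandr --output HDMI-1 --auto                                # enable the external monitor at its preferred mode
        xrandr --output HDMI-1 --mode 1920x1200 --right-of LVDS-1   # or place it explicitly to the right of the panel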

    Read the article

  • Overscan issues when using an HDTV through VGA

    - by RPG Master
    Right now all we can do is set the TV to 1280x768 instead of its native resolution of 1360x768. Setting it to its native resolution gives you a screen with a large portion of the left side of the screen cut off. We've tried everything with the TV, so now we're turning to the innards of Ubuntu in hopes of fixing this. The computer is using an NVIDIA GeForce GT240. This is its current xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0 (buildd@palmer) Fri Apr 9 10:35:18 UTC 2010

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            # HorizSync 28.0 - 55.0
            # VertRefresh 43.0 - 72.0
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "CRT-0"
            HorizSync 28.0 - 55.0
            VertRefresh 43.0 - 72.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce 6600"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "1360x768 +0+0; 800x600 +0+0"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "TwinViewXineramaInfoOrder" "CRT-0"
            Option "metamodes" "1360x768 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection
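
    One thing worth trying before anything more drastic is pinning the exact 1360x768 timing so the TV doesn't rescale: add a ModeLine to the Monitor section and reference it from the metamodes line. A sketch, with timings taken from cvt 1360 768 60 (an assumption for this particular TV); the NVIDIA driver may also need mode validation relaxed for a non-EDID mode to be accepted:

        Section "Monitor"
            Identifier "Monitor0"
            ModeLine "1360x768_60.00" 84.75 1360 1432 1568 1776 768 771 781 798 -hsync +vsync
            ...
        EndSection

        Section "Screen"
            ...
            Option "metamodes" "1360x768_60.00 +0+0"
        EndSection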

    Read the article

  • Ubuntu 12.10 won't display properly after kernel upgrade

    - by Daniel
    After updating the system today, Ubuntu doesn't display correctly. The desktop now looks like this. It was working properly before. I had to use the terminal to run the Synaptic package manager so I could view the update history, which is as follows:

        Commit Log for Wed Nov 7 11:50:36 2012
        Upgraded the following packages:
        linux-image-generic (3.5.0.17.19) to 3.5.0.18.21
        Installed the following packages:
        linux-image-3.5.0-18-generic (3.5.0-18.29)
        linux-image-extra-3.5.0-18-generic (3.5.0-18.29)

    Prior to this issue, the last active driver was nvidia-current-updates, version 304.51. I tried using the nvidia-current driver, version 304.51.really.304.43, instead, but the problem persists. I tried running nvidia-settings from a terminal so I could try configuring something, but the application informs me that the Nvidia driver is not being used. As the x-swat repository has nothing for Quantal, I desperately added the unstable xorg-edgers repository and upgraded, but to no avail, so I purged it. The display should normally be full HD, but the only available resolutions now are 1024x768 (4:3) and 800x600 (4:3). The system is a Dell XPS L702X, with an NVIDIA GeForce GT 555M and a 17" screen. How can I fix this problem? Update: I tried using the Nouveau third-party driver and this fixes the issue. However, if you have any idea how to get the Nvidia drivers working properly with the latest kernel, please share, as I've noticed some videos playing very slowly on the system, though I'm not sure exactly why.
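
    The usual cause after a kernel update is that the nvidia DKMS module was never rebuilt for the new kernel, often because the matching headers were missing; a hedged sketch of forcing the rebuild:

        sudo apt-get install linux-headers-generic dkms   # headers for the running kernel series
        sudo dpkg-reconfigure nvidia-current-updates      # re-runs the driver's setup, triggering the DKMS build
        sudo reboot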

    Read the article

  • Slight stuttering when moving windows in fresh 12.04 install

    - by Konsolkongen
    Installed Ubuntu 12.04 today, and my problem is that when I'm moving windows around the screen it doesn't feel smooth at all. Usually I can fix this by changing the refresh rate to 60 Hz, but this time it doesn't help. My graphics card is an Nvidia GTX 560 Ti and I've tried the 295.40, 295.45 and 304.43 drivers (I'm currently on the last one), but none of them has resolved the problem. I searched around a bit and tried changing the refresh rate using compizconfig-settings-manager and xrandr. No change using CCSM, but when I tried xrandr I got this reply:

        konsolkongen@konsolkongen-desktop:~$ xrandr -r 60
        Rate 60.0 Hz not available for this size

    which is nonsense of course. This is what my xorg.conf file looks like:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "Samsung SyncMaster"
            HorizSync 30.0 - 81.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce GTX 560 Ti"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "0"
            Option "TwinViewXineramaInfoOrder" "DFP-0"
            Option "metamodes" "DFP-0: 1680x1050_60 +0+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Any help would be greatly appreciated; my obsession with video quality can't stand stuttering like this. For what it's worth, I don't have any screen tearing, so at least V-sync is on. Thanks.
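
    xrandr -r 60 fails here because the NVIDIA driver with TwinView exposes only a single virtual RandR screen; refresh-rate changes normally go through metamodes instead. A hedged example with nvidia-settings (DFP-0 is taken from the xorg.conf above, and on-the-fly CurrentMetaMode assignment assumes a reasonably recent 3xx driver):

        nvidia-settings -q CurrentMetaMode                               # show the metamode currently in use
        nvidia-settings -a CurrentMetaMode="DFP-0: 1680x1050_60 +0+0"   # re-apply the 60 Hz metamode on the fly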

    Read the article

  • 3 or 4 monitors with Nvidia and Ubuntu

    - by Jason
    I saw that you are (were?) running 4 monitors with Ubuntu 8.10 and two Nvidia cards (http://stackoverflow.com/questions/27113/how-to-use-3-monitors). I was curious whether you were doing this with Xinerama, a hacked-up TwinView config, multiple X screens, or some other method. Does it work with Compiz? I intend to run my Dell 30" in the middle with two 1280x1024 monitors on the sides, continue to use one X screen, and run Compiz, on Ubuntu 9.04. Currently I am using two monitors with TwinView and Compiz, which runs fantastically. I just can't get the third monitor running (unless I enable it in its own X screen and then enable Xinerama so windows can be dragged as if it were all one X screen, but that breaks Compiz, and I don't care much for having separate X screens). I am very interested in knowing how you set up 4 monitors with 2 GPUs. Thanks!
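
    For what it's worth, spanning one desktop across two cards generally means Xinerama, which is exactly what breaks Compiz (there is no GL context that spans both GPUs), while TwinView only pairs monitors on a single card. A bare-bones sketch of the two-card layout side of such an xorg.conf, with assumed BusIDs (read the real ones from lspci):

        Section "ServerLayout"
            Identifier "Multihead"
            Screen 0 "Screen0" 0 0
            Screen 1 "Screen1" RightOf "Screen0"
            Option "Xinerama" "1"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            BusID "PCI:1:0:0"    # first card (assumed)
        EndSection

        Section "Device"
            Identifier "Device1"
            Driver "nvidia"
            BusID "PCI:2:0:0"    # second card (assumed)
        EndSection

    Screen0 and Screen1 each bind one Device; TwinView can still drive two of the monitors from one of those cards.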

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image-processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects and on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define the filter as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object.

        npp::ImageCPU_32f_C1 hostKernel(3,3); // allocate space for a 3x3 convolution kernel
        // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
        hostKernel[0][0] = -1;    // this doesn't compile
        hostKernel(0,0) = -1;     // this doesn't compile
        hostKernel.at(0,0) = -1;  // this doesn't compile

    How can I manually put values into an ImageCPU object? Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in averaging (box) filter (something like [1 1 1; 1 1 1; 1 1 1]).

    Read the article

  • Strange ATI vs Nvidia TRIANGLE_STRIP issue

    - by chriscisco
    I have this code in a test for the engine I am working on. On my NVIDIA NVS 4200M it displays the GL_TRIANGLE_STRIP as expected; on my ATI Radeon 5800 it appears to draw a single triangle.

        shader.begin();
        Matrix4<float> temp = getActiveCamera()->getProjectionMatrix() * getActiveCamera()->getObjectToWorld().fastInverse();
        glUniformMatrix4fv(shader["mvp"], 1, GL_TRUE, temp.getArray());

        glBegin(GL_TRIANGLE_STRIP);
        glVertexAttrib3f(shader["colour"],0,1,0);
        glVertexAttrib3f(shader["coord3d"],-.5,-.5,0);
        glVertexAttrib3f(shader["colour"],1,1,0);
        glVertexAttrib3f(shader["coord3d"],0.5,-.5,0);
        glVertexAttrib3f(shader["colour"],1,0,1);
        glVertexAttrib3f(shader["coord3d"],-.5,.5,0);
        glVertexAttrib3f(shader["colour"],0,1,1);
        glVertexAttrib3f(shader["coord3d"],.5,.5,0);
        glEnd();

        shader.end();

    Here is what it actually looks like on my two computers:
    https://www.dropbox.com/s/sgm2j978tx2ipnp/not%20working.png
    https://www.dropbox.com/s/27idv0b8k0p4pcx/working.png
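
    One vendor-dependent pitfall with immediate-mode generic attributes is that a vertex is only emitted when attribute 0 is written, so the position attribute has to live at location 0 and be set last for each vertex; NVIDIA tends to be lenient about this, AMD is not. A hedged sketch (the raw program handle name is an assumption about the shader class):

        // Before linking, pin the position attribute to location 0 so that
        // writing it is what actually provokes each vertex in immediate mode.
        glBindAttribLocation(program, 0, "coord3d");
        glLinkProgram(program);

        glBegin(GL_TRIANGLE_STRIP);
        glVertexAttrib3f(shader["colour"], 0.0f, 1.0f, 0.0f);     // non-zero attributes first
        glVertexAttrib3f(shader["coord3d"], -0.5f, -0.5f, 0.0f);  // attribute 0 last: this emits the vertex
        // ... remaining three vertices in the same order ...
        glEnd();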

    Read the article

  • Can the NVIDIA ION chipset handle streaming and gaming reasonably well?

    - by true_gritt
    I'm considering getting a small-footprint "nettop" computer to use as a home theater PC with my Samsung LN40A550 HDTV. I've been looking at systems like the ASRock ION HT330, the Acer AspireRevo 3610, and the Asus EeeBox PC EB1501. These are all systems with the NVIDIA ION chipset (Intel Atom N330 dual-core CPU + NVIDIA GeForce 9400 GPU). Is the NVIDIA ION chipset powerful enough to support media streaming at HD resolutions (e.g. via Boxee, Hulu, Netflix) and casual gaming (e.g. World of Warcraft, Madden NFL) reasonably well, without herky-jerky video output?

    Read the article

  • Same NVIDIA chipset but different vendor implementations - what is the difference?

    - by Horst Walter
    I am planning to buy a graphics card. When searching for a particular chipset (e.g. the GTX 460) I find cards from different vendors (Gigabyte, Palit, PNY, ...). I can figure out the differences in frequency, memory, and bundled equipment. When I read test reports, usually a particular NVIDIA card is compared with its ATI/AMD "counterpart"; I have not really found a comparison of all vendors for a particular NVIDIA chipset. So, in order to make a decision: a) Are the drivers the same for all cards with a particular chipset (and are they provided by NVIDIA or by the vendor)? b) How do I figure out which card to actually buy? OK, I choose the chipset and the memory, and check that the card has the required ports, but then ....

    Read the article

  • Kepler, NVIDIA's new GPU architecture: a presentation of the technology and its performance

    Kepler, NVIDIA's new graphics processor architecture: a presentation of the new technologies and their performance. After months of announcements, NVIDIA's new graphics card architecture was officially unveiled last week. This new architecture is intended to compete with AMD's new architecture released last month. The first card in this range is called the GTX 680 and is based on the GK104 chip. For the previous generation (the Fermi architecture), NVIDIA had focused on adding tessellation and improving performance. For Kepler, NVIDIA has worked mainly on power consumption: a 28 nm process, new SMX units, GP...

    Read the article

  • What are the implications of Nvidia's "the way it's meant to be played"?

    - by Mike Pateras
    I have an AMD Radeon 5850 (about to be 2), and today I read that Rift is a member of Nvidia's "the way it's meant to be played" program. It was suggested that as such the developers would not be speaking with or working directly with AMD for optimization, and that it would be unlikely that Crossfire support would be added until the game's release. Are any of these implications likely? Or does it just mean that Nvidia is working closely with the developers for optimization and marketing support?

    Read the article

  • Updating Dell Vostro 3700 (Nvidia GeForce GT330M) display driver?

    - by iRubens
    I bought a Dell Vostro 3700 laptop, which has an integrated Intel graphics card and an Nvidia GeForce GT 330M inside. Depending on the energy-saving mode, it switches between the two video cards. When I try to update the video driver (now version 189.99 on Windows 7 64-bit) with the one found on the Nvidia site, an error message says that it cannot find compatible graphics hardware. Dell doesn't provide a newer driver version. Has anyone solved the same problem?

    Read the article

  • Application.Current.Shutdown() vs. Application.Current.Dispatcher.BeginInvokeShutdown()

    - by Daniel Rose
    First a bit of background: I have a WPF application which is a GUI front-end to a legacy Win32 application. The legacy app runs as a DLL in a separate thread. The commands the user chooses in the UI are invoked on that "legacy thread". If the "legacy thread" finishes, the GUI front-end cannot do anything useful anymore, so I need to shut down the WPF application. Therefore, at the end of the thread's method, I call Application.Current.Shutdown(). Since I am not on the main thread, I need to invoke this command. However, I then noticed that the Dispatcher also has BeginInvokeShutdown() to shut down the dispatcher. So my question is: what is the difference between invoking Application.Current.Shutdown(); and calling Application.Current.Dispatcher.BeginInvokeShutdown();?

    Read the article

  • NVIDIA CUDA SDK Examples Compilation Unsupported Architecture 'compute_20'

    - by Andrew Bolster
    On compilation of the CUDA SDK, I'm getting nvcc fatal : Unsupported gpu architecture 'compute_20'. My toolkit is 2.3 on a shared system (i.e. I can't really upgrade it), the driver version is also 2.3, and it runs on 4 Tesla C1060s. If it helps, the error is raised in radixsort. It appears that a few people online have had this problem, but I haven't found anywhere that actually gives a solution.
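
    The SDK samples are asking nvcc for a Fermi (compute_20) target that the 2.3 toolchain has never heard of, and the Tesla C1060 is compute capability 1.3 anyway, so the usual fix is to strip that target from the sample's Makefile or common.mk. A sketch of what to look for (the variable names are assumptions and vary between SDK releases):

        grep -rn "compute_20\|sm_20" .     # find where the Fermi target is requested
        # In the matching Makefile/common.mk, drop that -gencode/-arch entry and keep a
        # target the 2.3 toolchain and the C1060 both understand, e.g.:
        #     NVCCFLAGS += -arch=sm_13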

    Read the article

  • OpenCL: Strange buffer or image behaviour with Nvidia but not AMD

    - by Alex R.
    I have a big problem (on Linux): I create a buffer with defined data, then an OpenCL kernel takes this data and puts it into an image2d_t. When working on an AMD C50 (Fusion CPU/GPU) the program works as desired, but on my GeForce 9500 GT the given kernel computes the correct result very rarely. Sometimes the result is correct, but very often it is incorrect. Sometimes it depends on very strange changes like removing unused variable declarations or adding a newline. I realized that disabling the optimization will increase the probability to fail. I have the most actual display driver in both systems. Here is my reduced code: #include <CL/cl.h> #include <string> #include <iostream> #include <sstream> #include <cmath> void checkOpenCLErr(cl_int err, std::string name){ const char* errorString[] = { "CL_SUCCESS", "CL_DEVICE_NOT_FOUND", "CL_DEVICE_NOT_AVAILABLE", "CL_COMPILER_NOT_AVAILABLE", "CL_MEM_OBJECT_ALLOCATION_FAILURE", "CL_OUT_OF_RESOURCES", "CL_OUT_OF_HOST_MEMORY", "CL_PROFILING_INFO_NOT_AVAILABLE", "CL_MEM_COPY_OVERLAP", "CL_IMAGE_FORMAT_MISMATCH", "CL_IMAGE_FORMAT_NOT_SUPPORTED", "CL_BUILD_PROGRAM_FAILURE", "CL_MAP_FAILURE", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "CL_INVALID_VALUE", "CL_INVALID_DEVICE_TYPE", "CL_INVALID_PLATFORM", "CL_INVALID_DEVICE", "CL_INVALID_CONTEXT", "CL_INVALID_QUEUE_PROPERTIES", "CL_INVALID_COMMAND_QUEUE", "CL_INVALID_HOST_PTR", "CL_INVALID_MEM_OBJECT", "CL_INVALID_IMAGE_FORMAT_DESCRIPTOR", "CL_INVALID_IMAGE_SIZE", "CL_INVALID_SAMPLER", "CL_INVALID_BINARY", "CL_INVALID_BUILD_OPTIONS", "CL_INVALID_PROGRAM", "CL_INVALID_PROGRAM_EXECUTABLE", "CL_INVALID_KERNEL_NAME", "CL_INVALID_KERNEL_DEFINITION", "CL_INVALID_KERNEL", "CL_INVALID_ARG_INDEX", "CL_INVALID_ARG_VALUE", "CL_INVALID_ARG_SIZE", "CL_INVALID_KERNEL_ARGS", "CL_INVALID_WORK_DIMENSION", "CL_INVALID_WORK_GROUP_SIZE", "CL_INVALID_WORK_ITEM_SIZE", "CL_INVALID_GLOBAL_OFFSET", "CL_INVALID_EVENT_WAIT_LIST", "CL_INVALID_EVENT", "CL_INVALID_OPERATION", "CL_INVALID_GL_OBJECT", "CL_INVALID_BUFFER_SIZE", "CL_INVALID_MIP_LEVEL", "CL_INVALID_GLOBAL_WORK_SIZE", }; if (err != CL_SUCCESS) { std::stringstream str; str << errorString[-err] << " (" << err << ")"; throw std::string(name)+(str.str()); } } int main(){ try{ cl_context m_context; cl_platform_id* m_platforms; unsigned int m_numPlatforms; cl_command_queue m_queue; cl_device_id m_device; cl_int error = 0; // Used to handle error codes clGetPlatformIDs(0,NULL,&m_numPlatforms); m_platforms = new cl_platform_id[m_numPlatforms]; error = clGetPlatformIDs(m_numPlatforms,m_platforms,&m_numPlatforms); checkOpenCLErr(error, "getPlatformIDs"); // Device error = clGetDeviceIDs(m_platforms[0], CL_DEVICE_TYPE_GPU, 1, &m_device, NULL); checkOpenCLErr(error, "getDeviceIDs"); // Context cl_context_properties properties[] = { CL_CONTEXT_PLATFORM, (cl_context_properties)(m_platforms[0]), 0}; m_context = clCreateContextFromType(properties, CL_DEVICE_TYPE_GPU, NULL, NULL, NULL); // m_private->m_context = clCreateContext(properties, 1, &m_private->m_device, NULL, NULL, &error); checkOpenCLErr(error, "Create context"); // Command-queue m_queue = clCreateCommandQueue(m_context, m_device, 0, &error); checkOpenCLErr(error, "Create command queue"); //Build program and kernel const char* source = "#pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable\n" "\n" "__kernel void bufToImage(__global unsigned char* in, __write_only image2d_t out, const unsigned int offset_x, const unsigned int image_width , const unsigned int maxval ){\n" "\tint i = 
get_global_id(0);\n" "\tint j = get_global_id(1);\n" "\tint width = get_global_size(0);\n" "\tint height = get_global_size(1);\n" "\n" "\tint pos = j*image_width*3+(offset_x+i)*3;\n" "\tif( maxval < 256 ){\n" "\t\tfloat4 c = (float4)(in[pos],in[pos+1],in[pos+2],1.0f);\n" "\t\tc.x /= maxval;\n" "\t\tc.y /= maxval;\n" "\t\tc.z /= maxval;\n" "\t\twrite_imagef(out, (int2)(i,j), c);\n" "\t}else{\n" "\t\tfloat4 c = (float4)(255.0f*in[2*pos]+in[2*pos+1],255.0f*in[2*pos+2]+in[2*pos+3],255.0f*in[2*pos+4]+in[2*pos+5],1.0f);\n" "\t\tc.x /= maxval;\n" "\t\tc.y /= maxval;\n" "\t\tc.z /= maxval;\n" "\t\twrite_imagef(out, (int2)(i,j), c);\n" "\t}\n" "}\n" "\n" "__constant sampler_t imageSampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;\n" "\n" "__kernel void imageToBuf(__read_only image2d_t in, __global unsigned char* out, const unsigned int offset_x, const unsigned int image_width ){\n" "\tint i = get_global_id(0);\n" "\tint j = get_global_id(1);\n" "\tint pos = j*image_width*3+(offset_x+i)*3;\n" "\tfloat4 c = read_imagef(in, imageSampler, (int2)(i,j));\n" "\tif( c.x <= 1.0f && c.y <= 1.0f && c.z <= 1.0f ){\n" "\t\tout[pos] = c.x*255.0f;\n" "\t\tout[pos+1] = c.y*255.0f;\n" "\t\tout[pos+2] = c.z*255.0f;\n" "\t}else{\n" "\t\tout[pos] = 200.0f;\n" "\t\tout[pos+1] = 0.0f;\n" "\t\tout[pos+2] = 255.0f;\n" "\t}\n" "}\n"; cl_int err; cl_program prog = clCreateProgramWithSource(m_context,1,&source,NULL,&err); if( -err != CL_SUCCESS ) throw std::string("clCreateProgramWithSources"); err = clBuildProgram(prog,0,NULL,"-cl-opt-disable",NULL,NULL); if( -err != CL_SUCCESS ) throw std::string("clBuildProgram(fromSources)"); cl_kernel kernel = clCreateKernel(prog,"bufToImage",&err); checkOpenCLErr(err,"CreateKernel"); cl_uint imageWidth = 8; cl_uint imageHeight = 9; //Initialize datas cl_uint maxVal = 255; cl_uint offsetX = 0; int size = imageWidth*imageHeight*3; int resSize = imageWidth*imageHeight*4; cl_uchar* data = new cl_uchar[size]; cl_float* expectedData = new cl_float[resSize]; for( int i = 0,j=0; i < size; i++,j++ ){ data[i] = (cl_uchar)i; expectedData[j] = (cl_float)i/255.0f; if ( i%3 == 2 ){ j++; expectedData[j] = 1.0f; } } cl_mem inBuffer = clCreateBuffer(m_context,CL_MEM_READ_ONLY|CL_MEM_COPY_HOST_PTR,size*sizeof(cl_uchar),data,&err); checkOpenCLErr(err, "clCreateBuffer()"); clFinish(m_queue); cl_image_format imgFormat; imgFormat.image_channel_order = CL_RGBA; imgFormat.image_channel_data_type = CL_FLOAT; cl_mem outImg = clCreateImage2D( m_context, CL_MEM_READ_WRITE, &imgFormat, imageWidth, imageHeight, 0, NULL, &err ); checkOpenCLErr(err,"get2DImage()"); clFinish(m_queue); size_t kernelRegion[]={imageWidth,imageHeight}; size_t kernelWorkgroup[]={1,1}; //Fill kernel with data clSetKernelArg(kernel,0,sizeof(cl_mem),&inBuffer); clSetKernelArg(kernel,1,sizeof(cl_mem),&outImg); clSetKernelArg(kernel,2,sizeof(cl_uint),&offsetX); clSetKernelArg(kernel,3,sizeof(cl_uint),&imageWidth); clSetKernelArg(kernel,4,sizeof(cl_uint),&maxVal); //Run kernel err = clEnqueueNDRangeKernel(m_queue,kernel,2,NULL,kernelRegion,kernelWorkgroup,0,NULL,NULL); checkOpenCLErr(err,"RunKernel"); clFinish(m_queue); //Check resulting data for validty cl_float* computedData = new cl_float[resSize];; size_t region[]={imageWidth,imageHeight,1}; const size_t offset[] = {0,0,0}; err = clEnqueueReadImage(m_queue,outImg,CL_TRUE,offset,region,0,0,computedData,0,NULL,NULL); checkOpenCLErr(err, "readDataFromImage()"); clFinish(m_queue); for( int i = 0; i < resSize; i++ ){ if( 
fabs(expectedData[i]-computedData[i])>0.1 ){ std::cout << "Expected: \n"; for( int j = 0; j < resSize; j++ ){ std::cout << expectedData[j] << " "; } std::cout << "\nComputed: \n"; std::cout << "\n"; for( int j = 0; j < resSize; j++ ){ std::cout << computedData[j] << " "; } std::cout << "\n"; throw std::string("Error, computed and expected data are not the same!\n"); } } }catch(std::string& e){ std::cout << "\nCaught an exception: " << e << "\n"; return 1; } std::cout << "Works fine\n"; return 0; } I also uploaded the source code for you to make it easier to test it: http://www.file-upload.net/download-3513797/strangeOpenCLError.cpp.html Please can you tell me if I've done wrong anything? Is there any mistake in the code or is this a bug in my driver? Best reagards, Alex

    Read the article

  • No alternative drivers appearing in Software Sources, and manual install leads to no Unity

    - by Gausie
    I just got a new laptop, installed Ubuntu 12.10 and am trying to install the proprietary nvidia drivers. Once I understood the change from Jockey, I did a fresh install and followed these instructions: http://techhamlet.com/2012/11/install-nvidia-drivers-in-ubuntu-12-10/ But when I do, Unity crashes on startup. My hardware according to lspci | grep VGA is as follows:

        00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 650M] (rev a1)

    I've followed a couple of nvidia 12.10 tutorials found on Google, but none have helped. Can I get any specific advice?
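
    With an Ivy Bridge + GT 650M Optimus laptop on 12.10, loading the NVIDIA driver for the whole desktop is what usually kills Unity; the common route at the time was Bumblebee, which leaves the Intel GPU driving the desktop and runs individual programs on the GeForce. A hedged sketch:

        sudo add-apt-repository ppa:bumblebee/stable
        sudo apt-get update
        sudo apt-get install bumblebee bumblebee-nvidia virtualgl
        sudo reboot
        optirun glxgears       # runs the test program on the discrete GPU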

    Read the article

  • What packages should I install in Ubuntu 12.04 to fulfill the OpenGL requirements for using the nouveau driver?

    - by karolszk
    I am trying to switch from the nvidia driver to nouveau via this script:

        #!/bin/bash
        stop gdm
        rmmod nvidia
        sed -i "s/nouveau/nvidia/" /etc/modprobe.d/blacklist-nvidia-nouveau.conf
        update-alternatives --set gl_conf /usr/lib/mesa/ld.so.conf
        ldconfig
        modprobe nouveau
        cp /etc/X11/xorg.conf{.nouveau,}
        start gdm

    The driver is loaded and X starts, but compiz doesn't. In .xsession-errors I see:

        Compiz (opengl) - Fatal: Root visual is not a GL visual
        compiz (opengl) - Error: initScreen failed
        compiz (core) - Error: Couldn't activate plugin 'opengl'
        Compiz (opengl) - Fatal: Root visual is not a GL visual   (this line repeats several more times)
        gnome-session[19075]: WARNING: App 'compiz.desktop' respawning too quickly
        gnome-session[19075]: WARNING: Application 'compiz.desktop' killed by signal
        gnome-session[19075]: WARNING: App 'compiz.desktop' respawning too quickly

    What am I doing wrong?
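
    "Root visual is not a GL visual" usually means X is still loading NVIDIA's libGL/libglx instead of Mesa's, so Compiz cannot get a GLX visual on the root window. Besides flipping the alternative, make sure the Mesa GL packages are actually installed; a sketch for 12.04 (the multiarch alternative name is an assumption, list the alternatives first):

        sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri libglu1-mesa
        update-alternatives --get-selections | grep gl_conf     # find the exact alternative name on this system
        sudo update-alternatives --set x86_64-linux-gnu_gl_conf /usr/lib/x86_64-linux-gnu/mesa/ld.so.conf
        sudo ldconfig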

    Read the article

  • Sound problems in Unity - input but no output

    - by ana
    I am new to Ubuntu, having just installed it for the first time on my Lenovo ThinkPad. Since I installed it I have had no sound output whatsoever. However, I can see from the graphical interface in Sound Preferences -> Input that sound input appears to be working correctly. I have tried the following:
    https://help.ubuntu.com/community/HdaIntelSoundHowto
    https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules
    ubuntu-bug audio
    I have two sound cards:

        cat card0/codec* | grep Codec
        Codec: Conexant CX20585
        Codec: Conexant ID 2c06

        cat card1/codec* | grep Codec
        Codec: Nvidia GPU 0b HDMI/DP
        Codec: Nvidia GPU 0b HDMI/DP
        Codec: Nvidia GPU 0b HDMI/DP
        Codec: Nvidia GPU 0b HDMI/DP

    I have now pretty much run out of ideas. Can anybody help?
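
    With two cards (the analog Conexant codec and the Nvidia HDMI codec), a common culprit is PulseAudio routing output to the HDMI sink by default; worth checking before touching ALSA module options. A sketch (the sink name is a placeholder):

        pactl list short sinks                       # note which sink is the analog Conexant output
        pactl set-default-sink <analog-sink-name>    # placeholder: use the name printed above
        amixer -c 0 sset Master unmute               # make sure the analog card's Master channel is not muted
        pavucontrol                                  # or pick the output device interactively (package: pavucontrol)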

    Read the article

  • HDMI port not recognized on Sony Vaio

    - by julio
    I am running Ubuntu 11.10 64-bit on a Sony VAIO VPC F11. It has an NVIDIA GeForce 310M video card with the latest Nvidia driver for 64-bit Linux (NVIDIA-Linux-x86_64-280.13), and a Windows partition with Win7 64-bit. The external monitor is a Samsung SyncMaster P2770. If I boot into the Windows partition, HDMI works as expected, with sound and video; under Linux, the HDMI port is apparently not recognized at all and provides no signal to the attached monitor. The nvidia-settings tool does not recognize any monitor connected to the HDMI port. Disper is installed and cannot recognize an attached external monitor either. Can anyone help me diagnose this issue and fix it if possible? The laptop has only the one HDMI port for connecting an external monitor, so if I can't get this working I'm stuck using either the laptop screen or Windows. Thanks

    Read the article

  • Does onboard video affect the X Window System configuration?

    - by Timothy
    Does the onboard video on the motherboard affect the X Window System configuration? My system has both onboard and PCIe video. The onboard video is an NVIDIA GeForce 7025 GPU with up to 512 MB of shared graphics memory (shared from system RAM by TurboCache). The PCIe card is a dual-head GeForce 8400 GS with 512 MB of memory, with two monitors attached. When installing Ubuntu 12.04, only one monitor worked, and pulling up System Settings -> Displays shows a laptop, even though this is a desktop PC. I did get both monitors to work with the nvidia driver using TwinView (a complicated process!). When checking nvidia-settings now, it shows the monitors disabled, although the NVIDIA X Server Settings tool does show the GPU and all its information. I was thinking it's seeing the onboard video on the motherboard. Why else would it show a laptop?
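
    If X is picking up the onboard GeForce 7025 as well as the add-in card, you can pin the server to the 8400 GS with an explicit BusID in xorg.conf (and/or disable the onboard GPU in the BIOS); a sketch, with the BusID being an assumption to be read from lspci:

        Section "Device"
            Identifier "GeForce8400GS"
            Driver "nvidia"
            BusID "PCI:1:0:0"    # assumed; lspci lists the PCIe card as e.g. 01:00.0
        EndSection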

    Read the article
