Search Results

Search found 5564 results on 223 pages for 'multi gpu'.

Page 118/223 | < Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >

  • How to name setter that does data conversion?

    - by IAdapter
    I'm struggling with how to name this method. I don't like the "set" prefix, because I feel it should be reserved for normal "dumb" setters, and some tools might not like it (I did not check it in Checkstyle, PMD, etc., but I have a feeling they won't like it). For example (in Java, but I feel it's language agnostic):

        public void setActionListenerClicked(boolean actionListenerClicked) {
            this.actionListenerClicked = actionListenerClicked ? "1" : "0";
        }

    The only purpose of this method is to set the value; the method is needed and cannot be merged with any other (because of the framework used). P.S. I DO know this question is similar to How to name multi-setter?, but I feel it's not the same question.
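
    One hedged option, sketched below in Java (class and helper names are hypothetical, not from the question): keep the JavaBean-style name so frameworks and tools stay happy, and move the conversion into a small private helper so the setter itself still reads as a plain setter.

        public class ClickState {
            private String actionListenerClicked;

            // JavaBean-style setter the framework can still discover and call.
            public void setActionListenerClicked(boolean clicked) {
                this.actionListenerClicked = toFlag(clicked);
            }

            // The "1"/"0" conversion lives here, out of the setter's body.
            private static String toFlag(boolean value) {
                return value ? "1" : "0";
            }
        }

    An alternative some codebases use is a non-"set" verb such as markActionListenerClicked, at the cost of breaking bean-property conventions.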

    Read the article

  • AMD E-450 APU with HD-6320 graphics produces jerky videos

    - by user80424
    I am trying to get videos to play smoothly on a Lenovo E325 laptop equipped with an AMD E-450 APU. This processor has an ATI HD-6320 GPU integrated. I installed the ATI proprietary driver (Catalyst 12.04) as described here. Everything went fine and I got no errors. However, I cannot play HD videos smoothly: almost every second frame is dropped in VLC with hardware acceleration enabled. vainfo shows:

        libva: VA-API version 0.32.0
        Xlib: extension "XFree86-DRI" missing on display ":0".
        libva: va_getDriverName() returns 0
        libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/fglrx_drv_video.so
        libva: va_openDriver() returns 0
        vainfo: VA-API version: 0.32 (libva 1.0.15)
        vainfo: Driver version: Splitted-Desktop Systems XvBA backend for VA-API - 0.7.8
        vainfo: Supported profile and entrypoints
          VAProfileH264High       : VAEntrypointVLD
          VAProfileVC1Advanced    : VAEntrypointVLD

    fglrxinfo says:

        display: :0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: AMD Radeon HD 6320 Graphics
        OpenGL version string: 4.2.11631 Compatibility Profile Context

    and fgl_glxgears produces ~250 fps. Why are HD video frames dropped? The CPU doesn't go above 50% during playback.

    Read the article

  • Should I use OpenGL or DX11 for my game?

    - by Sundareswaran Senthilvel
    I'm planning to write a game from scratch (a BIG game, for commercial purposes). I'm aware that certain compute libraries such as OpenCL, the AMD APP SDK, C++ AMP and DirectCompute (the last two from MS; I am NOT interested in CUDA) are available on the market. I'm planning to write the game from scratch, which includes the following engines:

    - Physics engine
    - AI engine
    - Main game engine
    (... and anything else I have missed)

    I'm aware that there are some free physics engine libraries on the market; I am not sure about free AI engine libraries. I'm a bit confused about choosing between OpenCL, the AMD APP SDK and C++ AMP (as already mentioned, I'm NOT interested in CUDA). I want my game to be published on Windows/Android/Mac OS X, which means it should be a cross-platform game: I will have one source code base that I compile for Windows/Android/Mac OS X, and any other platforms I have missed. Note: since I'm NOT a Java guy, kindly do NOT suggest the Java language. For graphics, should I use OpenGL or DirectX 11? I have heard that OpenGL runs on a single core, and I am not sure about DirectX 11. Between OpenGL and DirectX, which one should I follow? Or are there other graphics APIs I should start with instead? I want to make use of the parallelism in the GPU as well as the CPU.

    Read the article

  • Restoring Virtualbox machine images from old hard drive

    - by memilanuk
    I recently replaced the HDD in my laptop and re-installed Windows and Ubuntu. Now I want to restore the various virtual machines I had set up on the old HDD, which is mounted in an external USB enclosure. I can read the HDD okay, and the 'bad' spots seemed to be in the Windows partition... but whenever I try to restore the VDI files, the copy errors out. I've tried drag-and-drop in Nautilus, I've tried grsync, etc. It always bombs out on the VDI files. I've copied over multi-GB DVD ISO images with no problem, but the VDI files always fail the checksums. Any ideas? TIA, Monte

    Read the article

  • Looking for PHP/MySQL-based ad manager

    - by user359650
    Could you recommend, based on your experience, a PHP/MySQL-based admin interface for managing website ads? To be really useful, such an application should have:

    - basic CRM functionality to track who is providing the ads
    - multilingual, multi-country support: the ability to specify, for the same ad, different versions for multiple languages/countries
    - predefined ad formats (Google Ads, Flash ads...) and sizes, with corresponding PHP helpers to insert the necessary markup into the HTML so the ad is integrated properly

    Ideally, if that application could be designed for Zend Framework, that would be awesome (but I think I'm dreaming at this point).

    Read the article

  • Derby 10.9.1.0 released

    - by kah
    Earlier today, the release of Apache Derby 10.9.1.0 was announced. In addition to the usual chunk of bug fixes, this release includes the following new features:

    - NATIVE authentication, a new authentication mechanism with better support for managing credentials. See this section of the developer's guide for an introduction.
    - JDBC 4.1 escape syntax, which completes Derby's support for JDBC 4.1.
    - Multi-column subqueries in EXISTS predicates (SQL:2003 Feature T501, Enhanced EXISTS predicate), to support auto-generated SQL from some persistence frameworks.

    Download it now and try it out!
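
    As a rough illustration of the EXISTS enhancement, here is a small, hypothetical JDBC sketch (table and column names are invented, and the Derby embedded driver is assumed to be on the classpath) that runs a subquery with more than one column in its SELECT list, something earlier releases rejected:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MultiColumnExistsDemo {
            public static void main(String[] args) throws Exception {
                // In-memory Derby database; ORDERS and CUSTOMERS are assumed to exist.
                try (Connection conn = DriverManager.getConnection("jdbc:derby:memory:demo;create=true");
                     Statement stmt = conn.createStatement()) {
                    // The EXISTS subquery selects two columns; 10.9 now accepts this.
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT o.id FROM orders o WHERE EXISTS " +
                            "(SELECT c.id, c.region FROM customers c WHERE c.id = o.customer_id)")) {
                        while (rs.next()) {
                            System.out.println(rs.getInt(1));
                        }
                    }
                }
            }
        }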

    Read the article

  • How do I install D-Link DWA-140 on Ubuntu 12.04?

    - by Jerrod Griffiths
    When I try to run the .exe file, this error notice comes up:

        Archive:  /media/DWA-140/DWA140.exe
          [/media/DWA-140/DWA140.exe]
          End-of-central-directory signature not found.  Either this file is not
          a zipfile, or it constitutes one disk of a multi-part archive.  In the
          latter case the central directory and zipfile comment will be found on
          the last disk(s) of this archive.
        zipinfo:  cannot find zipfile directory in one of /media/DWA-140/DWA140.exe or
                  /media/DWA-140/DWA140.exe.zip, and cannot find /media/DWA-140/DWA140.exe.ZIP, period.

    Are there any steps I can take to get this to run? Thanks!

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being sent to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells of 32x32 world units. Each cell has metadata that dictates the four 256x256 textures used for splatting, along with the alpha blend data and the shadow and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell, determine the four textures used to splat it, and combine its geometry with that of any other cell that uses the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order matches that of the other like cells and blending still happens in the right order. But even with this batching approach, it isn't uncommon, when looking out across an area of open terrain, to see a batch count between 1200 and 1700, depending on how frequently textures or texture blends differ between cells. We are only doing frustum culling at present. So, using texture splatting, are there alternatives that can reduce the batch count and keep rendering extremely performance-friendly even under DirectX 9.0c? We considered texture atlases, since we're targeting DirectX 9.0c and older OpenGL platforms, but repeating textures from an atlas in the shader produces seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor texture quality at a distance. How have others batched terrain geometry so that terrain can be splatted with various textures while minimizing batch count and texture state switches, so that rendering performance isn't negatively impacted?
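
    For reference, a minimal sketch in Java (types and field names are hypothetical; the engine described above is presumably C++/DirectX) of the grouping step described in the question: cells that use the same four splat textures, in any order, end up under one batch key. It does not model the blend-map reordering, only the grouping.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class TerrainBatcher {
            static class Cell {
                final int[] textureIds;                  // the four splat textures for this cell
                Cell(int... ids) { this.textureIds = ids; }
            }

            // Order-independent key: two cells splatted with the same textures share it.
            static String batchKey(int[] textureIds) {
                int[] sorted = textureIds.clone();
                Arrays.sort(sorted);
                return Arrays.toString(sorted);
            }

            // Ideally each map entry becomes a single draw call.
            static Map<String, List<Cell>> groupCells(List<Cell> cells) {
                Map<String, List<Cell>> batches = new HashMap<>();
                for (Cell c : cells) {
                    batches.computeIfAbsent(batchKey(c.textureIds), k -> new ArrayList<>()).add(c);
                }
                return batches;
            }
        }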

    Read the article

  • Which creative framework can create these games? [closed]

    - by Rahil627
    I've used a few game frameworks in the past and have run into limitations, which led me to "creative frameworks". I've looked into many, but I cannot determine the limitations of some of them. Selected frameworks, ordered from highest to lowest level: Flash, Unity, MonoGame, OpenFrameworks (and Cinder), SFML. I want to be able to:

    - create a game that handles drawing on an iPad
    - create a game that uses computer vision from a webcam
    - create a multi-device iOS game
    - create a game that uses input from Kinect

    Can all of these frameworks handle this? What is the highest-level framework that can handle all of them?

    Read the article

  • I'm the .1x programmer at my company. How can I best contribute?

    - by invaliduser
    I work at a newly-minted startup of five people. We have a Ph.D. in machine learning, a former member of the RSpec core team, and the guy who compiles the Git binary for OS X. That's just the employees; the founder has a Ph.D. and was CTO of a multi-billion-dollar corporation before leaving to start a (successful) startup, and has now left that to start this one. We also might get a guy with a Ph.D. in math. Aaaaaaaaand then there's me, the college-dropout intern. I think I'm pretty smart and I'm reading non-stop, but the delta of experience, skill, and knowledge between me and my co-workers is just breathtaking. So put yourself in their shoes: you've got a bright young intern who has a lot to learn but is at least energetic. What would be annoying? What use would you hope to get out of him in the here and now? What would be pleasantly surprising if it happened?

    Read the article

  • Can I copy large files faster without using the file cache?

    - by Veazer
    After adding the preload package, my applications seem to speed up, but if I copy a large file, the file cache grows by more than double the size of the file. Transferring a single 3-4 GB VirtualBox image or video file to an external drive apparently evicts all the preloaded applications from memory, leading to increased load times and general performance drops. Is there a way to copy large, multi-gigabyte files without caching them (i.e. bypassing the file cache)? Or a way to whitelist or blacklist specific folders from being cached?

    Read the article

  • Calculating distance from viewer to object in a shader

    - by Jay
    Good morning. I'm working on the spherical billboards technique outlined in this paper. I'm trying to create a shader that calculates the distance from the camera to all objects in the scene and stores the results in a texture, but I keep getting either a completely black or a completely white texture. Here are my questions:

    1. I assume the position that's automatically sent to the vertex shader from Ogre is in object space?
    2. The GPU interpolates the output position from the vertex shader when it sends it to the fragment shader. Does it do the same for my depth calculation, or do I need to move that calculation to the fragment shader?
    3. Is there a way to debug shaders? I have no errors, but I'm not sure I'm getting my parameters passed into the shaders correctly.

    Here's my shader code:

        void DepthVertexShader( float4 position : POSITION,
                                uniform float4x4 worldViewProjMatrix,
                                uniform float3 eyePosition,
                                out float4 outPosition : POSITION,
                                out float Depth )
        {
            // position is in object space
            // outPosition is in camera space
            outPosition = mul( worldViewProjMatrix, position );

            // calculate distance from camera to vertex
            Depth = length( eyePosition - position );
        }

        void DepthFragmentShader( float Depth : TEXCOORD0,
                                  uniform float fNear,
                                  uniform float fFar,
                                  out float4 outColor : COLOR )
        {
            // clamp output using clip planes
            float fColor = 1.0 - smoothstep( fNear, fFar, Depth );
            outColor = float4( fColor, fColor, fColor, 1.0 );
        }

    fNear is the near clip plane for the scene; fFar is the far clip plane for the scene.

    Read the article

  • HP Pavilion G6 1209 temperature higher than usual and fan working in 11.10

    - by vanjadjurdjevic
    Installing Ubuntu on this new machine I had various problems, so I have been asking around Ask Ubuntu for solutions. This is the latest one! :D When I start the PC, the temperature shows around 50-55 °C. When I open Chromium it shows 60+ (61, 62, 63). It even gets to 67-68 when multi-tasking two apps. The fan is working slightly louder than in Windows 7. Speaking of Windows 7, the temperature there is 45-50 when idle and 50-53 when working in a browser. I'm already losing it with this machine. You can find the specs here; it says 'Technische Daten' below the picture. Click that tab and you will reach the specs.

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I've written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6, and there are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6. The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet).

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.
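
    To make the two call patterns above concrete, here is a small, hypothetical sketch (the JNDI name and credentials are invented, and a standalone client would also need JNDI provider properties for the InitialContext) showing a data source lookup and both getConnection forms:

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class GetConnectionDemo {
            public static void main(String[] args) throws Exception {
                // Look up the configured data source by its JNDI name.
                InitialContext ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("jdbc/myDataSource");

                // Default behavior: every pooled connection uses the credentials
                // stored in the data source descriptor (or the Oracle wallet).
                try (Connection conn = ds.getConnection()) {
                    // ... run SQL as the single database user defined for the pool
                }

                // Per-request credentials: by default WLS treats this pair as an
                // application server user/password and maps it to database
                // credentials through the data source credential mapper.
                try (Connection conn = ds.getConnection("appUser", "appPassword")) {
                    // ... run SQL as the mapped database user
                }
            }
        }
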
    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, along with a brief description of each. It also captures information about the compatibility of these features with one another.

    Feature: User authentication (default)
      Description: Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor.
      Can be used with: Set client identifier
      Can't be used with: Proxy session, Identity pooling, Use database credentials

    Feature: Use database credentials
      Description: Instead of using the credential mapper, use the supplied user and password directly.
      Can be used with: Set client identifier, Proxy session, Identity pooling
      Can't be used with: User authentication, Multi Data Source

    Feature: Set client identifier
      Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
      Can be used with: Everything
      Can't be used with: (none)

    Feature: Proxy session
      Description: Set a light-weight proxy user associated with the connection (Oracle only).
      Can be used with: Set client identifier, Use database credentials
      Can't be used with: Identity pooling, User authentication

    Feature: Identity pooling
      Description: Heterogeneous pool of connections owned by specified users.
      Can be used with: Set client identifier, Use database credentials
      Can't be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.

    Read the article

  • How do I download a corrupted package again?

    - by user64720
    Ubuntu 12.04 can't install the Firefox 13 update because the package is corrupted. While attempting to install, it returns this error (I translated it from my language to English):

        /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb
        W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes).

    I can tell that the package at /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb is corrupted, but even as admin I can't delete it so that it can be downloaded again. How should I proceed? EDIT: There was a single package causing this conflict; please see here to understand the whole situation: Why can't I install from software center?

    Read the article

  • Provincial Forum & the Best of Oracle OpenWorld for Public Sector

    - by user511693
    Provincial Ministries, Crowns and Agencies are transforming in an effort to meet increasing service expectations from citizens, legislative mandates, and current economic pressures. There is a need to be more efficient and accountable, providing services and information to constituents expeditiously and cost-effectively. However, legacy information systems typically support single program functions. These disparate systems pose a complex canvas upon which to compose a more efficient government systems landscape. Please join your fellow government leaders and Oracle on December 6, 2011 to discuss these challenges and learn how government agencies are leveraging IT as a core tool to streamline multi-organization operations, thereby delivering a more cost-effective, citizen-centric, and sustainable government. Register here.

    Read the article

  • Backups, What Are They Good For?

    We've heard the confessional story from Pixar that Toy Story 2 was almost lost due to a bad backup, but sometimes there is no 'almost'. Grant Fritchey casts a sympathetic eye over some catastrophic data losses, and gives advice on how to avoid what he has termed an RGE (résumé generating event).

    Read the article

  • Cheap ways to do scaling ops in shader?

    - by Nick Wiggill
    I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete, grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain. Currently in my code, one unit in GLSL shaders is equal to 1 m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre. So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

    - Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code, and do a division by 10 in the vertex shader to scale incoming decimetre values back to metres. Arbitrary (non-power-of-two) divisions tend to be slow, however.
    - Use (some-power-of-two):1 scaling (e.g. 8:1), which enables the use of a bit shift (e.g. val >> 3) to do the division... I'm not sure how performant this is in shaders, though. Not as intuitive to read, but possibly quite a bit faster than dividing by a non-power-of-two value.
    - Use a texture as a lookup table. I've heard that this is really fast.

    Or whatever solutions others can offer to achieve the same results -- minimal vertex data with sensible scaling.
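
    For what it's worth, a minimal CPU-side sketch of the quantization idea, written in Java purely for illustration (the engine in question may well be C++; the class name and the 8:1 scale are assumptions). The vertex shader would multiply the incoming short by 1.0/8.0, or use the equivalent shift, to recover metres.

        import java.nio.ShortBuffer;

        class VertexQuantizer {
            static final int SCALE = 8;   // power of two: 1 short unit = 1/8 metre

            // Quantize a world-space coordinate in metres to a signed short.
            static short quantize(float metres) {
                long units = Math.round((double) metres * SCALE);
                if (units < Short.MIN_VALUE || units > Short.MAX_VALUE) {
                    throw new IllegalArgumentException("position out of short range: " + metres);
                }
                return (short) units;
            }

            // Pack a stream of metre coordinates into a buffer for upload.
            static void pack(float[] xyzMetres, ShortBuffer out) {
                for (float v : xyzMetres) {
                    out.put(quantize(v));
                }
            }
        }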

    Read the article

  • Can't login to Unity 3d after enabling Xinerama for a short moment

    - by Amir Adar
    Today I connected a second monitor to my computer. I set it up using NVIDIA's control panel, and all was working quite well, so I figured it wouldn't be a problem to try Xinerama, just to see the difference between it and TwinView. After enabling Xinerama and restarting the X session, I saw that I was logged into a Unity 2D session. I thought it was a problem with Xinerama, so I switched back to TwinView, but it still logged me into Unity 2D. I tried disconnecting the second monitor: no luck, still Unity 2D. I tried changing GPU drivers and installing drivers from a separate PPA, and still I was logged into Unity 2D. Up until this point, I never had any problem logging into Unity 3D. It only happened after I tried using Xinerama. I should note that I was doing all this while updates were going on in the background, so it could be something related to that, though I can't imagine what (I tried booting with another kernel, but no luck). So what exactly happened? Did changing the mode to Xinerama trigger some other changes that I'm not aware of? Did the updates cause a certain malfunction in the driver? Is it something else?

    Read the article

  • Maintenance Wizard

    - by LuciaC
    The Maintenance Wizard is an E-Business Suite upgrade tool that can guide you through the code line upgrade process from 11.5.10.2 to 12.1.3 with an 11gR2 database. Additionally, it includes maintenance features for most releases of E-Business Suite applications. The tool:

    - Presents step-by-step upgrade and maintenance processes
    - Enables validation of each step, tracks the completion of the steps, and maintains a log and status
    - Is a multi-user tool that enables the System Administrator to give different users assignments based on any combination of category, product family or task
    - Automatically installs many required patches
    - Provides project management utilities to record the time taken for each task, completion status and project reporting

    For more information: review Doc ID 215527.1 for additional information on the Maintenance Wizard. See Doc ID 430732.1 to download the new patch.

    Read the article

  • Trouble installing 12.04 from cd, blank screen with cursor

    - by Master Morality
    I should preface this by saying that this is not my first rodeo. I started playing with Linux in 1999 (Red Hat) and I'm currently typing this on a ThinkPad running 12.04... When I put in the live CD, it boots, and I can get to the menu where I decide whether to install or run, etc., but at any point beyond that I get only a single caret. I can type stuff in, but it goes nowhere. I've tried the usual things like running with nomodeset (thinking it was the Intel HD 4000 graphics, but that is integrated graphics...). Here is the setup:

    - ASUS P8Z77 Pro (supposedly with the Atheros AR9485 wifi, but I'm not that far yet)
    - i7-3770K
    - 8 GB x 2 G.Skill
    - Crucial M4 (256 GB)
    - LG Super Multi DVD rewriter
    - no video card

    UPDATE: I thought maybe it was a bad image on the CD, so I downloaded another ISO and used a USB disk. Now it boots to a blank screen. I can see the "press any key to get options" screen, but after that it just goes to a blank screen.

    Read the article

  • Distributing a very simple application

    - by vanna
    I have a very simple working console application written in C++, linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually distributing the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google tests. To me, the distribution of this application goes like this:

    - to developers: the src directory with the CMakeLists.txt file (multi-platform distribution), with a README.txt and an INSTALL.txt
    - to users: the executable and a README.txt
    - on my git repo: everything mentioned above, plus the sources for testing and the gtest external lib

    At this point, considering the complexity of my application, am I doing it right? Is there any reference that would formalize this distribution process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, or external libraries like Boost: how should I package this to distribute it in a professional way?

    Read the article

  • The App Store's terms of use are said to be incompatible with the GPL license; the free application VLC has just paid the price

    The App Store's terms of use are said to be incompatible with the GPL license, and the free application VLC has just paid the price. You were given an iPhone for Christmas and wanted to install a free multi-codec media player on your device? Too late. The VLC application has just been removed from the App Store, where it is now "persona non grata". Those who already have the software will, however, be able to keep it. What was the problem? One of the developers who took part in creating VLC, Rémi Denis-Courmont, complained to Apple about the product's GPL license not being respected, since it implies that the user may copy, distribute and modify the software as they see fit...

    Read the article

  • Kubuntu not showing eth0

    - by Laurbert515
    I just installed Kubuntu 12.04.2 and do not have an internet connection. I also cannot access the desktop and only have the terminal. When I enter xclock I get "Error: Can't open display:". I believe this is because I do not have the correct drivers and need to download them ... using the internet, which I can't access. So here's what's going on. ifconfig gives:

        lo        Link encap: Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  collisions:0 txqueuelen:0
                  RX bytes:1296 (1.2 KB)  TX bytes:1296 (1.2 KB)

        wlan0     Link encap: Ethernet  HWaddr 68:17:29:58:49:4a
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16 errors:0 dropped:0 overruns:0 frame:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    lspci -nnk gives:

        07:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 2230 [8086:0887] (rev c4)
                Subsystem: Intel Corporation Centrino Wireless-N 2230 BGN [8086:4062]
                Kernel driver in use: iwlwifi
                Kernel modules: iwlwifi
        0d:00.0 Ethernet controller [0200]: Atheros Communications Inc. AR8161 Gigabit Ethernet [1969:1091] (rev 10)
                Subsystem: Toshiba America Info Systems Device [1179:fa77]

    sudo lshw -class network gives:

        *-network
             description: Wireless interface
             product: Centrino Wireless-N 2230
             ...
        *-network UNCLAIMED
             description: Ethernet controller
             product: AR8161 Gigabit Ethernet

    I believe this means that it is using the ethernet connection but thinks it is a wireless connection. I want to get the internet working so I can fix the GPU driver, etc., but I can't seem to get it working even though I have my ethernet cable plugged in (and I'm sure the cable is working).

    Read the article

  • Random compositing lag

    - by user1020567
    My laptop specs: 512 MB of RAM, of which 64 MB are shared with an integrated GPU (an ATI Radeon Xpress 200M), and an Intel 1.6 GHz Celeron M single-core processor. I've spent months trying to figure out why compositing and effects sometimes lag on any distro I try. Now I've come to realise that no matter what drivers I try (the default ones work for me on pretty much any Linux), compositing lag is random. When I used Ubuntu 10.10, for example, sometimes window compositing would lag and sometimes it wouldn't. The PC is able to render those effects, so hardware is not the problem. It's completely random and unpredictable: sometimes when I turn on the computer the effects lag horribly and sometimes it's completely smooth. I've also checked startup items and there don't seem to be any unnecessary entries. I also tried building my own OS with Arch Linux and the problem persists there, therefore I can only assume that it's a driver issue of some sort. By default there are lots of drivers supplied with Linux distributions. Could it be that they're in the way? The ones that I need are ati/radeon (or both? What's the difference between them?) and there seem to be a lot of others... What should I do?

    Read the article
