Search Results

Search found 16794 results on 672 pages for 'memory usage'.


  • Scrambled screen on 12.04 with Radeon HD 7670M/2GB when scrolling the page

    - by Mihkel
    I have Ubuntu 12.04 LTS 64-bit and I have installed the proprietary drivers for my Radeon HD 7670M with 2GB of memory. But if I scroll a page or do anything like move a window, I get a blurred (or rather scrambled) screen for a second, and if I try to take a PrtScr of it, it goes back to normal. I have tried other drivers and they do not solve my problem. I do not want to switch to 32-bit Ubuntu because I have 6 GB of RAM and I would lose so much of it. Also, if it helps, my processor is an Intel® Core™ i5-3210M CPU @ 2.50GHz × 4.

    Read the article

  • Probably the dumbest and most pointless question to ask

    - by Anthony Adams
    How can I dual boot this with Windows 8? I've tried to burn it to a CD, but there's never enough space, or the program tells me that the CD isn't writable. So I want to run it from USB. But I never understood how to run the installer from a USB drive, how to get it onto the USB in the first place, or how to set up the computer to boot from USB before the hard drive. I am a beginner trying to learn Linux; if anyone could help a newbie like me, that would be much appreciated.

    Read the article

  • Does C# give you "less rope to hang yourself" than C++?

    - by user115232
    Joel Spolsky characterized C++ as "enough rope to hang yourself". Actually, he was summarizing "Effective C++" by Scott Meyers: It's a book that basically says, C++ is enough rope to hang yourself, and then a couple of extra miles of rope, and then a couple of suicide pills that are disguised as M&Ms... I don't have a copy of the book, but there are indications that much of it relates to the pitfalls of managing memory, which seem like they would be rendered moot in C# because the runtime manages those issues for you. Here are my questions: Does C# avoid pitfalls that are avoided in C++ only by careful programming? If so, to what degree and how are they avoided? Are there new, different pitfalls in C# that a new C# programmer should be aware of? If so, why couldn't they be avoided by the design of C#?
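
    For concreteness, a minimal sketch of the kind of rope in question: manual lifetime management. Both commented-out lines compile cleanly in C++ yet are undefined behavior; a garbage-collected runtime like C#'s makes this whole class of bug unwritable.

        #include <string>

        int main() {
            std::string* name = new std::string("rope");
            delete name;         // lifetime ended by hand
            // name->size();     // use-after-free: compiles fine, undefined behavior
            // delete name;      // double delete: compiles fine, undefined behavior
            return 0;
        }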

    Read the article

  • Design for a parser to handle very large files

    - by user619818
    I have written a program which records the protocol messages between an application and a hardware device, matching each application request with each hardware response. This is so that I can later remove the hardware, connect a 'replay' application to the main application, wait for an application request, and reply with a matched copy of the requisite hardware reply message. My replay application saves the matched request/response pairs in a list (using C++ std::list). This works fine for a small interaction session. My problem now is that I need to be able to use the replay over a very long session. With my current implementation, the replay program eventually uses up all available memory on my computer and crashes. So I need some sort of lookahead, rather than parsing the whole session in one go. Can anyone make any suggestions on how to get started?
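
    One possible starting point, sketched below with made-up names (Message, ReplayStream): parse lazily and keep only a bounded lookahead window of request/response pairs in memory, pulling more from the session file as earlier pairs are consumed, so memory use stays constant regardless of session length.

        #include <cstddef>
        #include <deque>
        #include <functional>
        #include <optional>
        #include <string>
        #include <utility>

        using Message = std::string;
        using Pair = std::pair<Message, Message>;  // request and its matched response

        class ReplayStream {
        public:
            // 'next' parses one pair from the session file, or returns nothing at EOF.
            ReplayStream(std::function<std::optional<Pair>()> next, std::size_t lookahead)
                : mNext(std::move(next)), mLookahead(lookahead) { refill(); }

            // Consume the next pair; memory stays bounded by 'lookahead'.
            std::optional<Pair> pop() {
                if (mWindow.empty()) return std::nullopt;
                Pair p = std::move(mWindow.front());
                mWindow.pop_front();
                refill();
                return p;
            }

        private:
            void refill() {
                while (mWindow.size() < mLookahead) {
                    auto p = mNext();
                    if (!p) break;
                    mWindow.push_back(std::move(*p));
                }
            }

            std::function<std::optional<Pair>()> mNext;
            std::size_t mLookahead;
            std::deque<Pair> mWindow;
        };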

    Read the article

  • How do you set up PhysFS for use in a game?

    - by ThePlan
    After my recent question on GD I've been advised to use PhysFS to pack all my game data into one file. So I have, and the decision wasn't a light one, because I tried out every library from my answers and none of them came with a single good tutorial; in fact, PhysFS is the most poorly documented library I've ever seen. After attempting to set up PhysFS in my game, I realized it's not as simple as adding the headers to the project; it appears to be something much more complicated. In fact, after my first attempt to install PhysFS, the compiler gave up displaying errors after hitting the critical count of 50. So basically what I'm asking here is: How can I set up PhysFS for my game? I'm using the Code::Blocks IDE on Windows XP SP3.
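
    For what it's worth, the runtime side is fairly small once the project links against the library. A minimal sketch, assuming PhysFS 2.x (the archive and file names are placeholders; in Code::Blocks you still have to add the PhysFS include directory and add the library to the linker settings):

        #include <physfs.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char** argv) {
            if (!PHYSFS_init(argv[0])) return 1;   // boot the virtual filesystem
            PHYSFS_mount("data.zip", NULL, 1);     // append the archive to the search path

            PHYSFS_File* f = PHYSFS_openRead("sprites/hero.png");
            if (f != NULL) {
                PHYSFS_sint64 len = PHYSFS_fileLength(f);
                std::vector<unsigned char> buf((size_t)len);
                PHYSFS_read(f, buf.data(), 1, (PHYSFS_uint32)len);  // read the whole file
                PHYSFS_close(f);
                std::printf("read %lld bytes\n", (long long)len);
                // hand 'buf' to the image/sound loader from here
            }

            PHYSFS_deinit();
            return 0;
        }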

    Read the article

  • Determining Whether a String Is Contained Within a String Array (Case Insensitive)

    About once every couple of months I need to write a bit of code that does one thing if a particular string is found within an array of strings and something else if it is not, ignoring differences in case. For whatever reason, I never seem to remember the code snippet to accomplish this, so after spending 10 minutes of research today I thought I'd write it down here in an effort to help commit it to memory or, at the very least, serve as a quick place to find the answer when the need arises again. So without further ado, here it is.

    Visual Basic version:

        If stringArrayName.Contains("valueToLookFor", StringComparer.OrdinalIgnoreCase) Then
            ...
        Else
            ...
        End If

    C# version:

        if (stringArrayName.Contains("valueToLookFor", StringComparer.OrdinalIgnoreCase))
            ...
        else
            ...

    Without the StringComparer.OrdinalIgnoreCase, the search will be case-sensitive. (This overload of Contains is the LINQ extension method, so the file needs a System.Linq import.) For more information on comparing strings, see: New Recommendations for Using Strings in Microsoft .NET 2.0. Happy Programming!

    Read the article

  • Humor in Documentation

    - by Lex Fridman
    Is a small amount of lighthearted wording or humor acceptable in source code documentation? For example, I have an algorithm that has a message hop around a graph (network) until its path forms a cycle. When this happens it is removed from the queue of the node it last resided on which removes it from memory. I write that in a comment, and finish the comment with "Rest in peace, little guy". That serves very little documenting purpose, but it cheers me up a bit, and I imagine it might cheer up other people I'm working with as they read through the code. Is this an acceptable practice, or should my in-code documentation resemble as much as possible the speeches of 2004 presidential candidate John Kerry? ;-)

    Read the article

  • Is slower performance of programming languages really a bad thing?

    - by Emanuil
    Here's how I see it. There's machine code, and it's all that computers need in order to run something. Computers don't care about programming languages. It doesn't matter to them whether the machine code comes from Perl, Python or PHP. Programming languages don't serve computers. They serve programmers. Some programming languages run slower than others, but that's not necessarily because there is something wrong with them. In many cases, it's because they do more of the things that programmers would otherwise have to do (e.g. memory management), and by doing these things they are better at what they are supposed to do: serve programmers. So, is slower performance of programming languages really a bad thing?

    Read the article

  • Expected time for a CakePHP MVC form/controller and DB setup

    - by hephestos
    I would like to know what the average time is for building a form in the MVC pattern with, for example, CakePHP. I built 8 functions; two of them run custom queries, return JSON data, split it, expand it into an in-memory model, and deliver it to the view. That amounts to three queries, if you count the array that feeds the view to populate some combo boxes. Why all this? Because I receive JSON data and split it in order to build rows of data, like a table. Along the way I changed edit.ctp a bit, but not a lot. I also created an external JavaScript file with three functions: one collects data, another returns the selected values when a combo box changes, and it also handles some redirection flow. On average, how much time should all this take?

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games that show, from the view of the killer, the actual killing scene. My thoughts: I can't just record kills as they happen, because I wouldn't know when to start the 'recording process'; there is no way for me to accurately determine when somebody is 'about' to kill someone. My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10-second delay. That way, all the kill cams would be 10 seconds long and the person's camera would just be moved to the second world of their killer. My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
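
    A cheaper variant of the delayed-duplicate idea, sketched below with placeholder types: instead of a second copy of the world, record a ring buffer of lightweight per-tick snapshots covering the last ~10 seconds, and on a kill play the buffer back rendered from the killer's camera.

        #include <cstddef>
        #include <vector>

        // Stand-in for whatever per-tick state the game needs to replay:
        // entity transforms, animation states, shots fired, and so on.
        struct WorldSnapshot { /* ... */ };

        class KillCamBuffer {
        public:
            // e.g. 10 seconds at 60 ticks/sec -> 600 snapshots
            explicit KillCamBuffer(std::size_t ticks)
                : mFrames(ticks), mHead(0), mFull(false) {}

            void record(const WorldSnapshot& s) {      // call once per simulation tick
                mFrames[mHead] = s;
                mHead = (mHead + 1) % mFrames.size();  // overwrite the oldest frame
                if (mHead == 0) mFull = true;
            }

            // On a kill: replay oldest-to-newest, using the killer's camera.
            template <typename Fn>
            void playback(Fn&& renderFrame) const {
                std::size_t count = mFull ? mFrames.size() : mHead;
                std::size_t start = mFull ? mHead : 0;
                for (std::size_t i = 0; i < count; ++i)
                    renderFrame(mFrames[(start + i) % mFrames.size()]);
            }

        private:
            std::vector<WorldSnapshot> mFrames;
            std::size_t mHead;
            bool mFull;
        };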

    Read the article

  • Interacting with a scene in cocos2d

    - by cjroebuck
    I'm attempting to make my first cocos2d (for iPhone) multiplayer game and having difficulty understanding how to interact with a scene once it is running. The game is a simple turn-based one, so I have a GameController class which coordinates the rounds. I also have a GameScene class which is the actual scene that is displayed during a round of the game. The basic interaction I need is for the GameController to be able to pass messages to the GameScene class, such as StartRound/StopRound etc. What complicates this is that I am loading the GameScene with a LoadingScene class which simply initialises the scene and replaces the current scene with it, so there is no reference from GameController to GameScene, and passing messages is quite tricky. Does anyone have any ways to get around this? Ideally I would still like to use a Loading class, as it smooths out the memory hit when replacing scenes.
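
    One decoupled option is broadcasting through NSNotificationCenter, so GameController never needs a reference to the scene at all. A sketch (the notification name and selector are made up):

        // GameController.m: fire and forget, no scene reference needed.
        - (void)startRound {
            [[NSNotificationCenter defaultCenter] postNotificationName:@"StartRound"
                                                                object:nil];
        }

        // GameScene.m: subscribe when the scene is created.
        - (id)init {
            if ((self = [super init])) {
                [[NSNotificationCenter defaultCenter] addObserver:self
                                                         selector:@selector(onStartRound:)
                                                             name:@"StartRound"
                                                           object:nil];
            }
            return self;
        }

        - (void)onStartRound:(NSNotification *)note {
            // kick off the round here
        }

        - (void)dealloc {
            [[NSNotificationCenter defaultCenter] removeObserver:self];
            [super dealloc];  // cocos2d-iphone projects of this era are pre-ARC
        }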

    Read the article

  • Problem with dash - there are no programs on the list

    - by sky
    As I said, there are no programs in the Dash list (the searcher in the app menu). Yesterday I logged into my account and tried to find a program, but there wasn't anything! Additionally, I tried to view the installed programs and manually find the program I was looking for, but nothing was displayed :( And today, when I want to start Ubuntu Software Center, it just doesn't start. I'm using Ubuntu 11.10 (64-bit). I installed a fresh OS a few days ago. Ubuntu is updated and has many gigabytes of disk space available. Please help me and my unfortunate OS. Thank you for all answers.

    Read the article

  • Ubuntu install and boot failure 11.10

    - by Robert Moody
    I installed Ubuntu 11.10 on my machine alongside Vista, and upgraded to 12.04. I decided I liked 11.10 better, so I tried to install that again as my only OS, except I increased the size of the swap partition to 2 GB. It boots up fine off the CD, but when I install, it gives me a non-specific error and returns me to the desktop. When attempting to boot off the hard drive, I get a black screen with a blinking underscore that starts in the corner, drops a couple of spaces, and stays there. I managed to install 9.04, and am currently using that. The computer is a little outdated, but was fired up for the very first time last week, so the hard drive is in new condition and the CD-ROM drive is fine too. It's running a 3 GHz dual-core processor. I ran a memory test, which came back fine, and being new to the Linux environment, I've been scratching my head for the last couple of days. How can I fix this?

    Read the article

  • How is byte loading/storing implemented by the CPU?

    - by AlexDan
    I know that on a 32-bit machine, the CPU reads from memory 32 bits at a time. Since the registers in this case are 32 bits in size too, I can understand how this works. What I don't understand is how the CPU implements 1-byte load instructions. Does it load the whole word in which the single byte is located into the register and then perform some kind of "byte shifting", or can the CPU load a single byte directly? In that case, when does the byte masking happen: after the byte is loaded into the register, or while the byte is sent over the data bus? P.S. The CPU I'm using is MIPS; the instructions I'm talking about are lb and lbu.
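
    As an illustration of the first alternative (this is C mimicking what the hardware effectively does, not MIPS itself): extracting a byte from an aligned 32-bit word is a shift plus a mask, and lb additionally sign-extends bit 7.

        #include <stdint.h>

        /* The byte offset within the word assumes little-endian order here;
           MIPS can run either endianness. */
        uint32_t load_byte_unsigned(const uint32_t* mem, uint32_t addr) {
            uint32_t word  = mem[addr >> 2];     /* fetch the whole aligned word */
            uint32_t shift = (addr & 3u) * 8u;   /* position of the byte         */
            return (word >> shift) & 0xFFu;      /* shift down and mask: lbu     */
        }

        int32_t load_byte_signed(const uint32_t* mem, uint32_t addr) {
            /* lb: same load, then sign-extend the byte into the register */
            return (int32_t)(int8_t)load_byte_unsigned(mem, addr);
        }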

    Read the article

  • Objects won't render when Texture Compression + Mipmapping is Enabled

    - by felipedrl
    I'm optimizing my game and I've just implemented compressed (DXTn) texture loading in OpenGL. I've worked my way through removing bugs, but I can't figure this one out: objects with DXTn + mipmapped textures are not being rendered. It's not that they appear with a flat color; they just don't appear at all. DXTn-textured objects render, and mipmapped non-compressed textures render just fine. The texture in question is 256x256 and I generate the mips all the way down to 4x4, i.e. 1 block. I've checked in gDEBugger and it displays all the levels (7) just fine. I'm using GL_LINEAR_MIPMAP_NEAREST for the min filter and GL_LINEAR for the mag one. The texture is compressed and the mipmaps created offline with the Paint.NET tool using the super sampling method (I also tried bilinear just in case). Source follows.

    Snippet 1: loading the DDS into system memory and initializing the object:

        // Read header
        DDSHeader header;
        file.read(reinterpret_cast<char*>(&header), sizeof(DDSHeader));
        uint pos = static_cast<uint>(file.tellg());
        file.seekg(0, std::ios_base::end);
        uint dataSizeInBytes = static_cast<uint>(file.tellg()) - pos;
        file.seekg(pos, std::ios_base::beg);

        // Read file data
        mData = new unsigned char[dataSizeInBytes];
        file.read(reinterpret_cast<char*>(mData), dataSizeInBytes);
        file.close();

        mMipmapCount = header.mipmapcount;
        mHeight = header.height;
        mWidth = header.width;
        mCompressionType = header.pf.fourCC;

        // Only support files divisible by 4 (for compression block algorithms)
        massert(mWidth % 4 == 0 && mHeight % 4 == 0);
        massert(mCompressionType == NO_COMPRESSION ||
                mCompressionType == COMPRESSION_DXT1 ||
                mCompressionType == COMPRESSION_DXT3 ||
                mCompressionType == COMPRESSION_DXT5);
        // Allow textures up to 65536x65536
        massert(header.mipmapcount <= MAX_MIPMAP_LEVELS);

        mTextureFilter = TextureFilter::LINEAR;
        if (mMipmapCount > 0) {
            mMipmapFilter = MipmapFilter::NEAREST;
        } else {
            mMipmapFilter = MipmapFilter::NO_MIPMAP;
        }
        mBitsPerPixel = header.pf.bitcount;

        if (mCompressionType == NO_COMPRESSION) {
            if (header.pf.flags & DDPF_ALPHAPIXELS) {
                // The only format supported w/ alpha is A8R8G8B8
                massert(header.pf.amask == 0xFF000000 && header.pf.rmask == 0xFF0000 &&
                        header.pf.gmask == 0xFF00 && header.pf.bmask == 0xFF);
                mInternalFormat = GL_RGBA8;
                mFormat = GL_BGRA;
                mDataType = GL_UNSIGNED_BYTE;
            } else {
                massert(header.pf.rmask == 0xFF0000 && header.pf.gmask == 0xFF00 &&
                        header.pf.bmask == 0xFF);
                mInternalFormat = GL_RGB8;
                mFormat = GL_BGR;
                mDataType = GL_UNSIGNED_BYTE;
            }
        } else {
            uint blockSizeInBytes = 16;
            switch (mCompressionType) {
            case COMPRESSION_DXT1:
                blockSizeInBytes = 8;
                if (header.pf.flags & DDPF_ALPHAPIXELS) {
                    mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
                } else {
                    mInternalFormat = GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
                }
                break;
            case COMPRESSION_DXT3:
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
                break;
            case COMPRESSION_DXT5:
                mInternalFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
                break;
            default:
                // Not supported (DXT2, DXT4 or any other compression format)
                massert(false);
            }
        }

    Snippet 2: uploading into video memory:

        massert(mData != NULL);
        glGenTextures(1, &mHandle);
        massert(mHandle != 0);
        glBindTexture(GL_TEXTURE_2D, mHandle);
        commitFiltering();

        uint offset = 0;
        Renderer* renderer = Renderer::getInstance();
        switch (mInternalFormat) {
        case GL_RGB:
        case GL_RGBA:
        case GL_RGB8:
        case GL_RGBA8:
            for (uint i = 0; i < mMipmapCount + 1; ++i) {
                uint width = std::max(1U, mWidth >> i);
                uint height = std::max(1U, mHeight >> i);
                glTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                             mHasBorder, mFormat, mDataType, &mData[offset]);
                offset += width * height * (mBitsPerPixel / 8);
            }
            break;
        case GL_COMPRESSED_RGB_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT1_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT3_EXT:
        case GL_COMPRESSED_RGBA_S3TC_DXT5_EXT:
        {
            uint blockSize = 16;
            if (mInternalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ||
                mInternalFormat == GL_COMPRESSED_RGBA_S3TC_DXT1_EXT) {
                blockSize = 8;
            }
            uint width = mWidth;
            uint height = mHeight;
            for (uint i = 0; i < mMipmapCount + 1; ++i) {
                uint nBlocks = ((width + 3) / 4) * ((height + 3) / 4);
                // Only POT textures allowed for mipmapping
                massert(width % 4 == 0 && height % 4 == 0);
                glCompressedTexImage2D(GL_TEXTURE_2D, i, mInternalFormat, width, height,
                                       mHasBorder, nBlocks * blockSize, &mData[offset]);
                offset += nBlocks * blockSize;
                if (width <= 4 && height <= 4) {
                    break;
                }
                width = std::max(4U, width / 2);
                height = std::max(4U, height / 2);
            }
            break;
        }
        default:
            // Not supported
            massert(false);
        }

    Also, I don't understand the "+3" in the block count computation, but looking for a solution to my problem I've encountered people defining it that way. I guess it won't make a difference for POT textures, but I put it in just in case. Thanks.
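
    A side note on that "+3", since it is asked about above: it is the usual integer ceiling trick for counting 4x4 blocks. For power-of-two dimensions of 4 and up it reduces to width / 4, so it changes nothing here, but it handles the general case:

        // (w + 3) / 4 == ceil(w / 4.0) in unsigned arithmetic: partial blocks round up.
        // A 10-texel row needs 3 DXT blocks; a 256-texel row needs exactly 64.
        unsigned blocksPerRow(unsigned w) { return (w + 3) / 4; }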

    Read the article

  • Verifying Office 2010 SP1 Installation

    - by Chris Heacock
    So you downloaded and installed SP1, but now you want to verify that SP1 actually installed! Looking at Outlook's Help screen under Help->About, it isn't readily apparent that SP1 is installed. Like me, you probably expected to see 14.1.something, or perhaps 14.0.something SP1, right? If you click on "Additional Version and Copyright Information", another window will pop up and show you a bit more useful info (if you don't have the version numbers committed to memory). That window *does* give us that comforting SP1, and now we can determine that if you have Office 14.1.6023.1000 (or beyond) you are indeed running Office 2010 SP1!

    Read the article

  • Case convention- Why the variation between languages?

    - by Jason
    Coming from a Java background, I'm very used to camelCase. When writing C, using the underscore wasn't a big adjustment, since it was only used sparingly when writing simple Unix apps. In the meantime, I stuck with camelCase as my style, as did most of the class. However, now that I'm teaching myself C# in preparation for my upcoming Usability Design class in the fall, the PascalCase convention of the language is really tripping me up, and I'm having to rely on IntelliSense a great deal in order to make sure the correct API method is being used. To be honest, switching to the PascalCase layout hasn't quite sunk into muscle memory just yet, and that is frustrating from my point of view. Since C# and Java are considered to be brother languages, as both are descended from C++, why the variation in conventions? Was it a personal decision by the creators based on their comfort level, or was it just to play mind games with newcomers to the language?

    Read the article

  • Ubuntu Hangs Suddenly (Dell Latitude E5530)

    - by iFadey
    I recently bought (a month ago) a Dell Latitude E5530, which comes pre-installed with Ubuntu 11.10. I removed Ubuntu 11.10 and installed 12.04 LTS right after the purchase. Everything worked out of the box, but occasionally Ubuntu completely hangs. The screen freezes and I can't even switch to other terminals by pressing CTRL+ALT+F*. Whenever the screen freezes, the CPU fan speed also increases. This doesn't happen only when running particular applications; it can hang while running any application, without any reason or error displayed. In short, I currently can't reproduce the hang myself. I also want to mention that sometimes it doesn't hang for a whole day. Here are the specs of my laptop: Processor: Core i7-3520M CPU @ 2.90GHz; Memory: 8GB; HDD: 500GB, 7200rpm (Model=ST9500423AS); Graphics: Intel HD 4000; Operating System: Ubuntu 12.04 LTS (64-bit). Thanks!

    Read the article

  • What are the most common stumbling blocks when it comes to learning programming, in order of difficulty?

    - by blueberryfields
    I seem to remember that linked lists, recursion, pointers, and memory management are all good examples of stumbling blocks: places where the aspiring programmer typically ends up spending significant time trying to understand a concept before moving on and improving, and where many end up giving up and not improving. I'm looking for a complete/comprehensive list of these types of stumbling blocks, in rough estimated order of difficulty to learn, with the goal of making sure that an educational program for programmers is structured to properly guide students through them. Is this information available somewhere? Ideally, the difficulty to learn would be measured in some sort of objective manner (i.e., the % of students who consistently fail to learn the concept). What sources are most appropriate for obtaining this information?

    Read the article

  • Trouble with 12.10 lag

    - by Brennan
    Well, basically, I have lately been having lag problems with 12.10. I will post my specs below; note that before the update to 12.10 it said that I had Intel graphics, and now it says I have Gallium. My specs: Memory: 3.9 GiB; Processor: Pentium(R) Dual-Core CPU E5500 @ 2.80GHz × 2; Graphics: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits) (used to say Intel graphics); OS type: 32-bit; Disk: 486.1 GB. The output of the command sudo apt-cache check is: E: Invalid operation check. The output of the command sudo lspci -nnk | grep -A5 VGA is:

        00:02.0 VGA compatible controller [0300]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e32] (rev 03)
                Subsystem: ASUSTeK Computer Inc. Device [1043:836d]
                Kernel driver in use: i915
        00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 01)
                Subsystem: ASUSTeK Computer Inc. Device [1043:8445]
                Kernel driver in use: snd_hda_intel

    Read the article

  • Upgrade 11.10 to 12.04 on Eee PC failed. GLIBC_2.14 not found

    - by user69170
    I was upgrading my Eee PC from 11.10 to 12.04 as prompted, and the upgrade locked up (the laptop was plugged into power, so that wasn't the problem). The system won't boot now, and I'm getting an error that "GLIBC_2.14 not found". I am unable to boot into any recovery mode or prior version. The laptop was a dual boot, but when I installed Ubuntu the first time, I opted out of that option (whoops). Now it won't load; it goes into a forever 'ubuntu' loading screen. I have a fresh install on a memory stick, but when I set the laptop to boot from it, it doesn't work. Help, I have no idea what to do next...

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly. Configuring the size of the pool in the data source is somewhere between an art and a science; this article will try to move it closer to science.

    From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically, the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that in setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of an unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The application should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
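
    To make the knobs concrete, here is a sketch of the pool section of a WLS JDBC data source module. The values are placeholders to be sized for your own load, and the min-capacity element assumes the 10.3.6+ schema described above:

        <jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
          <name>myDS</name>
          <jdbc-connection-pool-params>
            <initial-capacity>0</initial-capacity>    <!-- defer connection creation past boot -->
            <min-capacity>10</min-capacity>           <!-- floor that shrinking will not go below -->
            <max-capacity>50</max-capacity>           <!-- from ActiveConnectionsHighCount plus headroom -->
            <shrink-frequency-seconds>900</shrink-frequency-seconds>
          </jdbc-connection-pool-params>
        </jdbc-data-source>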

    Read the article

  • Problem with NVIDIA G86 on Kubuntu 12.04

    - by Stefan
    I got problems some weeks ago with my NVIDIA G86 (8500 GT), supposedly due to the infamous 295.40 version of the driver. I got error messages like NVRM: RmInitAdapterFailed. I tried various suggestions about setting kernel acpi and memory options, but no luck. I pulled in x-swat and got 302.17, if I remember correctly. It did not help. People recommended xorg-edgers, so I pulled that in and got kernel 3.5.0.12 and nvidia 304.43, but the problem remained. Getting slightly panicked, I tried to go back to vanilla 12.04, so I purged nvidia* and located and removed anything on the system that smelled of nvidia. I installed nouveau, because people said it was great, but as it turns out, my card does not seem to be supported. :-( Sigh... So now I fear that I have a messed-up system, and graphics performance is terrible. Any help would be appreciated. Xorg.0.log: http://paste.ubuntu.com/1189616/ kern.log: http://paste.ubuntu.com/1189634/

    Read the article

  • R Statistical Analytics with Faster Performance for Enterprise Database Access and Big Data

    - by Mike.Hallett(at)Oracle-BI&EPM
    Further demonstrating its commitment to the open source community, Oracle has just released enhanced support of the R statistical programming language for Oracle Solaris and AIX in addition to Linux and Windows, connectivity to Oracle TimesTen In-Memory Database in addition to Oracle Database, and integration of hardware-specific math libraries for faster performance. Oracle's open source distribution of R is available with the Oracle Big Data Appliance and is available for download now. Oracle also offers Oracle R Enterprise, a component of Oracle Advanced Analytics that enables R processing on Oracle Database servers. All of this goes toward making big data analytics more accessible in the enterprise and improving data scientist productivity with faster performance. Since its introduction in 1995, R has attracted more than two million users and is widely used today for developing statistical applications that analyze big data. Analyst report: Oracle Advances its Advanced Analytics Strategy

    Read the article

  • .NET mulithreading and quad core processors

    - by w0051977
    I have a single-threaded application that runs on a machine with a quad-core processor. The scheduled tasks that run VB.NET forms are too slow. I am new to multithreading and parallel computing. If you have a single-threaded application that runs on a server with a multi-core processor, does the application only ever use one of the cores? What happens if you have multiple scheduled tasks and multiple instances are in memory at the same time? I have read this question on Stack Overflow: http://stackoverflow.com/questions/607775/how-to-write-net-applications-that-utilize-multi-core-processors, but I am still not clear.
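
    On the first question: a single-threaded process keeps at most one core busy at a time (though separate scheduled-task processes can be scheduled onto different cores). Using more cores within one task takes explicit parallelism. A minimal .NET 4 sketch, where ProcessItem is a made-up stand-in for the per-item work:

        using System;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                // Sequential: one core does all the work.
                for (int i = 0; i < 1000; i++)
                    ProcessItem(i);

                // Parallel (.NET 4+): the runtime partitions iterations across cores.
                Parallel.For(0, 1000, i => ProcessItem(i));
            }

            static void ProcessItem(int i) { /* per-item work goes here */ }
        }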

    Read the article
