Search Results

Search found 3618 results on 145 pages for 'huge'.


  • The Huge Disaster Within The Linux 2.6.35 Kernel

    Phoronix: "While the 2.6.35 code has not even seen its first release candidate yet, there are some massive performance drops in a variety of different tests that have yet to be corrected and nothing like we have encountered with previous kernel release cycles especially for a regression that has lived now for about one week."

    Read the article

  • Nvidia Linux Driver Huge Resolution

    - by darxsys
    I'm trying to set up a working CUDA SDK on my Linux Mint. I'm new to Linux and everything connected with it, so I tried following some steps on how to install CUDA. First, I downloaded a Linux driver from here: http://developer.nvidia.com/cuda/cuda-downloads, version 295.41. After that, I barely found a way to run it. I did it like this: 1. typed sudo init 1 in a terminal and switched to root, 2. typed service mdm stop, 3. ran the *.run file downloaded from the link above. Then it started installing the driver. It gave some warning messages, but I ignored them. After installation, I typed init 5 and it came back to the GUI screen, BUT everything is huge. I restarted; still huge. My screen resolution is 640x480 on a 17 inch laptop monitor. I tried running Nvidia X Server Settings, but it says: "You do not appear to be using Nvidia X Driver. Please edit your X configuration file." I tried that. Nothing happened. I can't change the resolution because that Nvidia Settings tool gives no options. Then I googled some things and installed some packages - nothing. The biggest problem is I don't understand what's really going on. My laptop is a Samsung with an i7 and an Nvidia GT 650M with Optimus. I can't even install Bumblebee, but that is something I will try if I manage to get my resolution back to default. Please help!

    Read the article

  • Deleting huge chunks of data from mysql innodb

    - by mingyeow
    I need to delete a huge chunk of the data in my production database, which runs about 100 GB in size. If possible, I would like to minimize my downtime. My selection criteria for deleting are likely to be DELETE * FROM POSTING WHERE USER.ID=5 AND UPDATED_AT<100. What is the best way to delete it? Build an index? Write a sequential script that deletes by paginating through the rows 1000 at a time?
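    A minimal sketch of the batched approach mentioned above, assuming the PyMySQL driver and that the filter columns actually live on POSTING (the host, credentials, column names and batch size below are placeholders, not the asker's real schema). Deleting in small chunks keeps each InnoDB transaction, and therefore its locks and undo log, small:

        import time
        import pymysql  # assumption: PyMySQL is installed

        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               db="production", autocommit=True)
        BATCH = 1000  # rows per transaction; tune against lock time and replication lag

        try:
            with conn.cursor() as cur:
                while True:
                    # LIMIT turns one enormous DELETE into many short transactions.
                    deleted = cur.execute(
                        "DELETE FROM POSTING"
                        " WHERE user_id = %s AND updated_at < %s"
                        " LIMIT %s",
                        (5, 100, BATCH),
                    )
                    if deleted == 0:
                        break
                    time.sleep(0.1)  # let other queries breathe between chunks
        finally:
            conn.close()

    An index covering (user_id, updated_at) keeps each chunk from scanning the whole table; without it, every iteration pays for a full scan.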

    Read the article

  • Sql Server huge tables with no rows

    - by Mike Gates
    I have a Sql Server database that has a few tables with zero row count but take up a combined 10 GB of space. I can see this by doing right-click/properties on the tables in question (data space is huge, between 1 and 6 GB, and row count is zero on these tables). I have no clue what could be causing this as I would assume zero rows would mean nearly zero space taken. Any ideas?

    Read the article

  • Huge area of stuck pixels

    - by pixelady
    A toddler slammed down my laptop screen while an iPod was lying on top of the keyboard. The damage resulted in a massive area of stuck pixels on the laptop screen, approximately 2 inches by 10 inches. I've tried running various programs that rapidly flicker the pixels through different colors, as well as massaging the screen with and without heat. These are the standard methods I read about for fixing a stuck pixel, but none of the online articles I read said how to fix a huge area of pixels rather than single pixels. What else can I try to get the many pixels unstuck? My computer is no longer under warranty and I don't want to buy a new one.

    Read the article

  • how to make a small image become really huge

    - by DennyHalim.com
    All webmasters should already know about hotlinking, and we know how to ban those bad referrers too... but I want to get revenge. I want to replace the hotlinked images with one huge image, a few megs in size. I have found one good image, yet it is less than 100 KB. I already use it to replace all bad hotlinkers. How can I convert this image so it becomes a few megs?
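    A hedged sketch of one way to do this without changing what the image displays: append junk after the end-of-image marker, which most decoders ignore (the file names and target size are placeholders, and a few strict clients may refuse the padded file, so test it first). Random bytes are used so the padding does not simply compress away in transit:

        import os

        SRC = "revenge.jpg"                 # hypothetical original (< 100 KB)
        DST = "revenge_padded.jpg"
        TARGET_SIZE = 5 * 1024 * 1024       # roughly 5 MB

        with open(SRC, "rb") as f:
            data = f.read()

        with open(DST, "wb") as f:
            f.write(data)
            padding = TARGET_SIZE - len(data)
            if padding > 0:
                # Bytes after the JPEG EOI (or PNG IEND) marker are ignored by most
                # decoders, so the picture still renders but the download is huge.
                f.write(os.urandom(padding))

        print(DST, os.path.getsize(DST), "bytes")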

    Read the article

  • sync two huge filesystems

    - by guettli
    I need to sync two huge file systems. Both sides run Linux with full root access. My preferred solution: I can read the list of changed files and directories and sync only the changed files. Here are some solutions and why they don't fit: rsync: needs to check all files recursively. There are several million files and only a few changes, so the check takes too long. unison: the same; it needs to check all files. inotify: I would need a watch for every directory and there are too many; inotify was not built for "watch all files" scenarios. DRBD: both sides should run independently.
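    If a list of changed paths can be produced somewhere (by the application, a journal, or whatever writes the files), rsync can consume it directly and skip the full tree walk. A minimal sketch of that plumbing, with all paths and hosts as placeholder assumptions:

        import subprocess

        CHANGED_LIST = "/var/tmp/changed-paths.txt"   # one path per line, relative to SRC
        SRC = "/data/"
        DST = "remote-host:/data/"

        with open(CHANGED_LIST, "rb") as changed:
            # --files-from=- reads the list from stdin, so rsync only stats and
            # transfers the named files instead of scanning millions of entries.
            subprocess.run(
                ["rsync", "-a", "--files-from=-", SRC, DST],
                stdin=changed,
                check=True,
            )

    The hard part remains generating that list cheaply; if nothing upstream can emit it, the options are essentially a cheaper metadata-only scan or a filesystem layer that journals changes.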

    Read the article

  • NAT causes huge external (actually internal) bandwidth usage

    - by user67953
    We have 4 servers running in a data center, with internal IPs 192.168.3.* assigned. A hardware (FORTIGATE) firewall is configured for NAT, and it maps the traffic as: external IP 111.222.333.10 -> 192.168.3.10 www.server1.com, 111.222.333.11 -> 192.168.3.11 www.server2.com, 111.222.333.12 -> 192.168.3.12 www.server3.com. In DNS, we have www.server1.com A 111.222.333.10. Now if I send a lot of data to www.server1.com from www.server2.com, the data will be sent through 111.222.333.10 (the external IP), and this makes our bandwidth usage huge (expensive!). The workaround I have is to add a local hosts mapping on server2: 192.168.3.10 www.server1.com. That way, when sending files from server2 to www.server1.com, the traffic stays internal. However, we are adding more and more servers, and it would be hard to manually add mappings to every server. Just wondering: do we have another solution for this? Can we do something in the FORTIGATE firewall? P.S. The DNS servers being used are public, such as OpenDNS, Google DNS, etc.
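    The tidier fixes are on the network side: hairpin NAT on the firewall or an internal DNS view that answers with the private addresses, if either is available. Failing that, the hosts-file workaround can at least be generated from one central mapping instead of edited by hand on every box. A hedged sketch (the name-to-IP table and the managed-block markers are assumptions; pushing the script to each server is left to whatever tooling is already in place):

        # Hypothetical central mapping of public names to internal addresses.
        INTERNAL = {
            "www.server1.com": "192.168.3.10",
            "www.server2.com": "192.168.3.11",
            "www.server3.com": "192.168.3.12",
        }

        BEGIN = "# BEGIN internal-overrides"
        END = "# END internal-overrides"

        def render_block():
            lines = [BEGIN]
            lines += [f"{ip}\t{name}" for name, ip in sorted(INTERNAL.items())]
            lines.append(END)
            return "\n".join(lines) + "\n"

        def update_hosts(path="/etc/hosts"):
            with open(path) as f:
                text = f.read()
            # Drop any previously managed block, then append a fresh one.
            if BEGIN in text and END in text:
                head, rest = text.split(BEGIN, 1)
                _, tail = rest.split(END, 1)
                text = head + tail.lstrip("\n")
            with open(path, "w") as f:
                f.write(text.rstrip("\n") + "\n\n" + render_block())

        if __name__ == "__main__":
            update_hosts()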

    Read the article

  • How to make a huge ram drive?

    - by Brandon Moore
    At my old job, when a report was needed I could sit down with someone, pull up results, get immediate feedback, refine my queries, and ultimately have the data we needed, in the format we needed, within 30-90 minutes. I just started working for a new company with a database containing millions of records, and I spent my whole 8 hours making a report that I feel I could have made in less than 2 hours if it were not for the massive amount of data the queries are working with, and the fact that I couldn't ask the person needing the data to sit down with me and give me feedback as I pulled up results, as I am used to. So I am trying to think of how we can make the server faster... much faster, so that I can have the same level of productivity I'm used to. One thought that just came to mind is that memory is so cheap these days, and by my calculations I could buy ten 8 GB RAM sticks for 1000 bucks. What I have never heard of, though, is a device that would let me combine these into a huge RAM drive. So I'd like to know if any such device exists, and if not, what is the largest RAM drive I could realistically make and how would I go about doing so? EDIT: To you guys who are saying the database schema needs to be analyzed... you can't make a query such as "SELECT f1, f2, f3, etc FROM SomeTable" run any faster by normalizing or indexing the table. What I'm talking about IS ABSOLUTELY a need for improved performance at the hardware level. I am used to having results come back to me in a few seconds, not a few minutes, much less half an hour. Maybe that's what you guys are used to when you have 100 billion record tables and you feel like that's fast, but I'm looking for results from tables with about 10 million records to come back to me within less than half a minute, tops.

    Read the article

  • Huge anon blocks in pmap

    - by Parik
    I am doing a pmap on a Tomcat process and I am seeing some huge anon blocks. From what I read, anon blocks are used for thread stacks and for JNI. I have a very moderate thread count. How can I go about finding out what is causing these huge anon blocks?
    00000000ee0d0000 26752K rwx-- [ anon ]
    00000000efaf0000 33792K rwx-- [ anon ]
    00000000f1bf0000 25856K rwx-- [ anon ]
    00000000f3530000 39680K rwx-- [ anon ]
    (On a side note, is pmap the correct way to measure how much memory is allocated to Tomcat?)
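    For a JVM process, the biggest anon mappings are usually the Java heap, the code cache and thread stacks rather than JNI. A small sketch that totals and ranks the anon mappings from pmap output, assuming the classic "address sizeK perms [ anon ]" line layout (column details can vary between procps versions):

        import re
        import subprocess
        import sys

        def anon_summary(pid):
            """Sum and rank the anonymous mappings pmap reports for one process."""
            out = subprocess.run(["pmap", str(pid)], capture_output=True,
                                 text=True, check=True).stdout
            blocks = []
            for line in out.splitlines():
                # e.g. "00000000ee0d0000  26752K rwx--   [ anon ]"
                m = re.match(r"^([0-9a-f]+)\s+(\d+)K\s+(\S+)\s+\[\s*anon\s*\]", line)
                if m:
                    blocks.append((m.group(1), int(m.group(2)), m.group(3)))
            blocks.sort(key=lambda b: b[1], reverse=True)
            for addr, size_kb, perms in blocks[:10]:
                print(f"{addr}  {size_kb:>9} KB  {perms}")
            print(f"total anon: {sum(b[1] for b in blocks) / 1024:.1f} MB "
                  f"in {len(blocks)} mappings")

        if __name__ == "__main__":
            anon_summary(int(sys.argv[1]))

    pmap shows reserved address space rather than what Tomcat is actually using; resident set size (RSS in ps or top) is a better single number, and on newer JDKs Native Memory Tracking can attribute the anon regions to heap, code cache, threads and so on.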

    Read the article

  • Including a huge string in our C++ programs?

    - by Xinus
    I am trying to include a huge string in my C++ program. Its size is 20,598,617 characters, and I am using #define to achieve it. I have a header file which contains this statement: #define "<huge string containing 20598617 characters>". When I try to compile the program I get the error: fatal error C1060: compiler is out of heap space. I tried the following command-line options with no success: /Zm200, /Zm1000, /Zm2000. How can I make this program compile successfully? Platform: Windows 7
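    A common way around both the heap-space error and MSVC's string-literal length limits is to not compile the text as a literal at all: generate a header that holds the data as a byte array. A hedged sketch of such a generator (file names are placeholders; it would run as a pre-build step):

        # Writes huge_string.h containing the file's bytes as a C array.
        def write_header(src_path="huge_string.txt", out_path="huge_string.h",
                         per_line=20):
            with open(src_path, "rb") as f:
                data = f.read()
            with open(out_path, "w") as out:
                out.write("#pragma once\n")
                out.write(f"static const unsigned int kHugeStringSize = {len(data)}u;\n")
                out.write("static const unsigned char kHugeString[] = {\n")
                for i in range(0, len(data), per_line):
                    chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + per_line])
                    out.write(f"    {chunk},\n")
                out.write("};\n")

        if __name__ == "__main__":
            write_header()

    If the text is only needed at run time, simply shipping it as a data file next to the executable (or embedding it as a Windows resource) avoids compiling 20 MB of source entirely.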

    Read the article

  • Huge performance difference between two web servers, odd behavior seen using process monitor

    - by Francis Gagnon
    We have two ColdFusion servers that show a huge performance difference running the exact same code on the exact same input data. The code in question instantiates a large number of CFCs (ColdFusion Components, which are similar to objects in OOP languages). I compared the two servers by running Process Monitor and then calling the problematic code on both machines. I learned two things. First, ColdFusion opens CFC files every time it instantiates an object. Both servers do this, so it cannot be the cause of the performance difference. Second, the fast server opens the CFC files directly, while the server with the performance problem seems to navigate its way through the path, directory by directory, until it reaches the desired CFC file. It does this for every file, even the ones it has previously loaded, and because the code instantiates so many CFCs it becomes very slow. See below the partial Procmon traces that show this behavior. It can take over 60 seconds for the slow server to do what the fast one does in 2 seconds. Can anyone tell me what causes this behavior? Is it a ColdFusion setting? Since ColdFusion runs on top of Java, is it a Java setting? Is it an OS option? The fast server is running Windows XP and I think the slow server is running Windows Server 2003. Bonus question: ColdFusion doesn't seem to perform any READ FILE operations on any of the CFC or CFM files. How can this be?
    Sample of the fast server opening CFC files:
    11:25:14.5588975 jrun.exe QueryOpen C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    11:25:14.5592758 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    11:25:14.5595024 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    11:25:14.5595940 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    11:25:14.5599628 jrun.exe CreateFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    11:25:14.5601600 jrun.exe QueryBasicInformationFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    11:25:14.5602463 jrun.exe CloseFile C:\CF\wwwroot\APP\com\HtmlUtils.cfc
    Equivalent sample of the slow server opening CFC files:
    11:15:08.1249230 jrun.exe CreateFile D:\
    11:15:08.1250100 jrun.exe QueryDirectory D:\org
    11:15:08.1252852 jrun.exe CloseFile D:\
    11:15:08.1259670 jrun.exe CreateFile D:\org
    11:15:08.1260319 jrun.exe QueryDirectory D:\org\cli
    11:15:08.1260769 jrun.exe CloseFile D:\org
    11:15:08.1269451 jrun.exe CreateFile D:\org\cli
    11:15:08.1270613 jrun.exe QueryDirectory D:\org\cli\cpn
    11:15:08.1271140 jrun.exe CloseFile D:\org\cli
    11:15:08.1279312 jrun.exe CreateFile D:\org\cli\cpn
    11:15:08.1280086 jrun.exe QueryDirectory D:\org\cli\cpn\APP
    11:15:08.1280789 jrun.exe CloseFile D:\org\cli\cpn
    11:15:08.1291034 jrun.exe CreateFile D:\org\cli\cpn\APP
    11:15:08.1291709 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com
    11:15:08.1292224 jrun.exe CloseFile D:\org\cli\cpn\APP
    11:15:08.1300568 jrun.exe CreateFile D:\org\cli\cpn\APP\com
    11:15:08.1301321 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1301843 jrun.exe CloseFile D:\org\cli\cpn\APP\com
    11:15:08.1312049 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1314409 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1314633 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1315881 jrun.exe CreateFile D:\
    11:15:08.1316379 jrun.exe QueryDirectory D:\org
    11:15:08.1316926 jrun.exe CloseFile D:\
    11:15:08.1330951 jrun.exe CreateFile D:\org
    11:15:08.1338656 jrun.exe QueryDirectory D:\org\cli
    11:15:08.1339118 jrun.exe CloseFile D:\org
    11:15:08.1526468 jrun.exe CreateFile D:\org\cli
    11:15:08.1527295 jrun.exe QueryDirectory D:\org\cli\cpn
    11:15:08.1527989 jrun.exe CloseFile D:\org\cli
    11:15:08.1531977 jrun.exe CreateFile D:\org\cli\cpn
    11:15:08.1532589 jrun.exe QueryDirectory D:\org\cli\cpn\APP
    11:15:08.1533575 jrun.exe CloseFile D:\org\cli\cpn
    11:15:08.1538457 jrun.exe CreateFile D:\org\cli\cpn\APP
    11:15:08.1539083 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com
    11:15:08.1539553 jrun.exe CloseFile D:\org\cli\cpn\APP
    11:15:08.1544126 jrun.exe CreateFile D:\org\cli\cpn\APP\com
    11:15:08.1544980 jrun.exe QueryDirectory D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1545482 jrun.exe CloseFile D:\org\cli\cpn\APP\com
    11:15:08.1551034 jrun.exe CreateFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1552878 jrun.exe QueryBasicInformationFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    11:15:08.1553044 jrun.exe CloseFile D:\org\cli\cpn\APP\com\HtmlUtils.cfc
    Thanks

    Read the article

  • Huge EAR deployment

    - by bozo
    Hello all, I'm trying to figure out how to deploy a huge (40-50 MB) EAR file to the server through a rather slow VPN connection. The EAR contains EJB and WAR projects created for GlassFish, and 90% of the file size comes from the external dependency libraries used. Has anyone come up with a strategy for elegant deployment to a production system from NetBeans, where the deployment (over the network) is done only for what is really needed (i.e. just one WAR, not the entire EAR, or just one lib, not the entire libraries subproject)? Related to the first point: how do I separate the external dependency libs from the project in NetBeans, so that the project compiles on the development machine, but the EAR/WAR/EJB that is created does not contain all the dependency JARs, which are what make it huge? Perhaps we need to write a custom Ant script? Start using Maven? Thank you all for your kind answers, Bozo

    Read the article

  • How to store images without taking up huge amounts of RAM

    - by Sheeo
    I'm working on a Silverlight project where users get to create their own collages. The problem: when loading a bunch of images with the BitmapImage class, Silverlight hogs a huge, unreasonable amount of RAM. 150 pictures, where a single one is at most 4.5 MB, take up about 1.6 GB of RAM, which ends up throwing memory exceptions. I'm loading them through streams, since the user selects their own photos. What I'm looking for: a class, method or some process to eliminate the huge amount of RAM being used. I've tried using a WriteableBitmap to render the images into, but I find this method forces me to reinvent the wheel when it comes to drag/drop and other things I want users to be able to do with the images.

    Read the article

  • Quickly retrieve the subset of properties used in a huge collection in C#

    - by ccornet
    I have a huge Collection (which I can cast as an enumerable using OfType<T>()) of objects. Each of these objects has a Category property, which is drawn from a list somewhere else in the application. This Collection can reach sizes of hundreds of items, but it is possible that only, say, 6 of the 30 possible Categories are actually used. What is the fastest method to find these 6 Categories? The size of the huge Collection discourages me from just iterating across the entire thing and returning all unique values, so is there a faster method of accomplishing this? Ideally I'd collect the categories into a List.
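    For a few hundred items, a single pass that drops each Category into a hash set is usually fast enough that nothing cleverer is needed (in C# terms, a Select over the collection followed by Distinct). The only way to beat one pass is to maintain the category counts as items are added and removed. A hedged sketch of both ideas, written in Python for illustration, with item.category standing in for the Category property:

        from collections import Counter

        # One pass: with hundreds of items this is effectively instantaneous.
        def used_categories(items):
            return {item.category for item in items}

        # If even one pass per query is too often, keep an index up to date instead.
        class CategoryIndex:
            def __init__(self):
                self._counts = Counter()

            def add(self, item):
                self._counts[item.category] += 1

            def remove(self, item):
                self._counts[item.category] -= 1
                if self._counts[item.category] == 0:
                    del self._counts[item.category]

            def used(self):
                return list(self._counts)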

    Read the article

  • Unescaping a huge single-line string on Linux

    - by Lajos Nagy
    I ended up with a huge, single-line string literal (don't ask me how) where everything is escaped (mostly), including newlines and double quotes. The problem is, I want the original string back. The string is huge, so I'm not even sure how to begin. Here's what I have: "This\n is \"nice\",\nain\'t it?" This is what I want: This is "nice", ain't it? Again, the problem is that other shell-sensitive stuff is not escaped (like $ or !), and that the string is a couple of megabytes.
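    A minimal sketch of doing the unescaping outside the shell entirely, so unescaped $ and ! never get interpreted (the file name is a placeholder, and the latin-1/unicode_escape round trip is only byte-faithful for ASCII content, so treat it as a starting point):

        def unescape_file(path="escaped.txt"):
            raw = open(path, encoding="utf-8").read().strip()
            if raw.startswith('"') and raw.endswith('"'):
                raw = raw[1:-1]               # drop the surrounding quotes
            # 'unicode_escape' interprets \n, \t, \", \' and similar escapes,
            # turning the escaped text back into the original characters.
            return raw.encode("latin-1").decode("unicode_escape")

        if __name__ == "__main__":
            print(unescape_file())

    A couple of megabytes fits in memory without trouble, so processing the whole string in one go is fine here.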

    Read the article

  • JBoss 5.1.0.GA and huge vfs-nested.tmp

    - by Petteri Hietavirta
    I noticed this while running a performance test with JMeter. For the first half an hour everything was fine and the /server/all/tmp directory size was around 36 MB. Then suddenly the tmp directory grew to 6.1 GB. The space was taken up by jar files inside vfs-nested.tmp. I found https://jira.jboss.org/jira/browse/JBAS-7126 and added that config, but it made no difference.

    Read the article

  • Huge Image Problem

    - by Amira Elsayed
    Hi all, I have a big problem and I have no idea how to solve it. I have created a chart (mind map) in SmartDraw. What I want now is to print this mind map. The mind map is very large, so when I export it as an image I get a big image. Can anyone tell me if there is any software that can divide this big image into small parts of size A4, so I can print it on several papers and show it to my boss? Please help as soon as you can. Thanks in advance
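    Poster-printing tools do exactly this, but if nothing suitable is at hand, a small script can slice the exported image into A4-sized tiles to print at 100% scale. A hedged sketch using Pillow, with the file name and print resolution as assumptions:

        from PIL import Image  # assumption: the Pillow library is installed

        SRC = "mindmap.png"      # hypothetical image exported from SmartDraw
        DPI = 300                # print resolution to target
        A4_W, A4_H = int(8.27 * DPI), int(11.69 * DPI)   # A4 size in pixels

        Image.MAX_IMAGE_PIXELS = None    # allow very large images
        img = Image.open(SRC)
        width, height = img.size

        tile = 0
        for top in range(0, height, A4_H):
            for left in range(0, width, A4_W):
                box = (left, top, min(left + A4_W, width), min(top + A4_H, height))
                img.crop(box).save(f"mindmap_tile_{tile:03d}.png", dpi=(DPI, DPI))
                tile += 1

        print(f"wrote {tile} A4-sized tiles")

    Printing each tile at actual size (no scaling) and trimming the margins lets the pages be taped together into the full chart.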

    Read the article

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6 MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it. The first time this happened, I copied the content of each sheet into a new, blank workbook, and saved the new workbook; this brought it back down to about 6 MB. Now it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month. As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of the 10 million lines look like this:
    <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
    <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
    <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>
    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?
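    Names ending in .wvu.FilterData (and similar .wvu.* names) are hidden per-user custom-view settings that the shared-workbook feature tends to accumulate, which is the usual cause of this kind of bloat. A hedged sketch of the cleanup the VBScripts mentioned above perform, driven from Python via Excel COM automation (the path is a placeholder, the workbook may need sharing turned off first, and with millions of names a run like this will still take a long time):

        import win32com.client  # assumption: pywin32 is installed and Excel is present

        PATH = r"C:\shared\bigworkbook.xls"   # placeholder path to the bloated file

        excel = win32com.client.Dispatch("Excel.Application")
        excel.DisplayAlerts = False
        wb = excel.Workbooks.Open(PATH)

        deleted = 0
        # Snapshot the collection first so deleting does not disturb the iteration.
        for name in list(wb.Names):
            if ".wvu." in name.Name:          # hidden custom-view names
                name.Delete()
                deleted += 1

        wb.Save()
        wb.Close()
        excel.Quit()
        print(f"deleted {deleted} hidden names")

    Periodically turning sharing off and back on (which clears the tracked change history) is a common way to keep the file from growing without bound between cleanups.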

    Read the article

  • Why the huge difference between etch and lenny MySQL

    - by rmarimon
    I've been working on a program for the last year. The development environment works with a MySQL database running on Debian etch, version mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (i486) using readline 5.2. The production environment runs on Debian lenny with version mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2. I was just timing some database access, and what takes 150 seconds in the development environment takes 300 in the production environment. I checked the /etc/mysql/my.cnf files on both systems and the only differences are:
    # development
    bind-address = 10.168.1.82
    log_bin = /var/log/mysql/mysql-bin.log
    # production
    bind-address = 127.0.0.1
    myisam-recover = BACKUP
    #log_bin = /var/log/mysql/mysql-bin.log
    I dump the database from production and load it into development, and on the development server everything takes half the time!!! What should I check?

    Read the article

  • converting huge MPEG audio files to something smaller

    - by john
    I've got some large MPEG audio files (144 MB each) that I'm looking to convert to something smaller so I can send them out as attachments to an email. Any suggestions on the software to use? I'm looking for something free that will run on Windows. I don't really care what the destination format is; MP3 would be nice. If there's a web service out there that would do this without the need to download any software to my machine, that would be even better, but I would be more than happy just getting it done any way I can. Thanks!
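    ffmpeg is free, runs on Windows, and handles this in one command; a hedged sketch that drives it from Python (the file names are placeholders and ffmpeg is assumed to be on the PATH - running the same command directly in a console works just as well):

        import subprocess

        SRC = "recording.mpg"   # placeholder: one of the ~144 MB MPEG audio files
        DST = "recording.mp3"

        # -vn drops any video stream; 128 kb/s MP3 is usually small enough to email.
        subprocess.run(
            ["ffmpeg", "-i", SRC, "-vn", "-codec:a", "libmp3lame", "-b:a", "128k", DST],
            check=True,
        )

    If a GUI is preferred, Audacity (also free) can export MP3, though it may need its optional FFmpeg import library to read MPEG input.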

    Read the article

  • Huge email sizes when using mail merge in Word 2010

    - by Nic
    So I've designed an HTML template to send out some emails with. The code is fine, everything looks great there, and it tests fantastically. I was sending it out with my recipients in the BCC field, but I decided to make it a little more personal, so I opened the file in Word and did an email merge. The HTML file itself is 3.06 KB and contains an img src pointing to an absolute URL, which is about 125 KB (a little large, I know, but it's very important). When I merge the file from Word 2010 to Outlook 2010, the email size jumps to about 250 KB. It's not much, I know, but I'm a gigantic nerd and I'm stuck thinking it should be about 5 KB with MIME overhead. Here's the file list on one of the test emails:
    File          Size
    image001.png  104366
    image002.gif  43
    MESSAGE       1259
    Mime.822      152575
    TEXT.htm      5712
    Since the img src is specified as a URL, I'm not sure why these are coming through. If this is an issue inherent to Outlook, I'd be happy to explore other options.

    Read the article

  • SQL Server 2005, Huge LDF file.

    - by Scott Jackson
    Hi, I have a database running on SQL Server 2005. The database is 20 GB and the LDF file is 35 GB! I'm now running low on disk space and want to shrink the log file. How can I do this, and how can I stop it happening again? Many thanks, Scott

    Read the article
