Search Results

Search found 12282 results on 492 pages for 'memory deallocation'.

Page 319/492

  • windows 7 frequently crashes when running 64 bit debian on virtualbox

    - by erin c
    I've been having a hard time with VirtualBox. I use it to run a 64-bit Debian guest on a Windows 7 host so I can code for Linux systems. It generally crashes when I build my code in Eclipse CDT, or when I am doing some intensive operations. Should I lower the memory and core usage? Is this some sort of VirtualBox problem? I upgraded to VirtualBox v4.1.8 and the problem still occurs. My virtual machine instance uses 1736MB out of 4GB of RAM, and I use 2 out of 8 processor cores. But still, the whole thing crashes once or twice every single day.

    Read the article

  • What is the meaning of "deassert" in this context?

    - by Sam.Rueby
    The English majors over at Dell provided me with this error message from a PowerEdge 2950: CPU2 Status: Processor sensors for CPU2, IERR was deasserted. I've Googled it; random forum posts aren't providing me with a clear answer. It's also apparently not a word: http://dictionary.reference.com/browse/deassert?s=t I can guess the meaning. Assert: to state with assurance, confidence, or force. Okay. So the negative of that. The state of lack-of-confidence? What is this error message trying to tell me? Memory errors were grouped with this one: is it trying to say that IERR for CPU2 should be set, but is not? That the current system state is SNAFU but CPU2 sees everything as fine?

    Read the article

  • Remove postgres from Mac - installed in /usr/local/, can I just delete files?

    - by Richard
    I want to completely uninstall postgres and start from scratch - the version I have is refusing to work with PostGIS 2.0. I have read other answers on how to do this, but none of them seem to fit the way postgres is set up on this machine. I'm not sure how postgres was originally installed on this machine - it wasn't via brew or Postgres.app or the EnterpriseDB installer - but it seems to be living in /usr/local: $ which psql /usr/local/pgsql-9.1/bin/psql The postgres binary itself is in /usr/local/var/postgres/. How can I kill it forever? Can I simply go to /usr/local and do rm -rf pgsql-9.1, and the same in /usr/local/var, and make sure there are no paths in my profile file? Or is there more to it than that? From memory I think I'll need to delete the database files too somehow. Thanks for the help.
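
    A minimal sketch of the cleanup being described, assuming the install really is self-contained under /usr/local/pgsql-9.1 with its data directory at /usr/local/var/postgres (both paths taken from the question) and that nothing else on the machine depends on it:

        # Stop any running server first (pg_ctl path and data dir taken from the question)
        sudo /usr/local/pgsql-9.1/bin/pg_ctl -D /usr/local/var/postgres stop -m fast

        # Remove the binaries and the data directory (this destroys all databases)
        sudo rm -rf /usr/local/pgsql-9.1 /usr/local/var/postgres

        # Finally, remove any PATH entries pointing at pgsql-9.1 from ~/.profile or ~/.bash_profile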

    Read the article

  • Which Windows OS Supports 8 GB RAM in a Laptop and Suggestions for a Better Laptop for Personal & De

    - by Ellen
    I am about to purchase a laptop and have zeroed in on the following two: Toshiba L500-ST2544 and Toshiba L505-ES5034. The common specifications for both of them are as follows:

        RAM - 4GB DDR3 Memory
        HDD - 320 GB
        Processor - Intel® Core™ i3-330M Processor
        WebCam and Mic - Available
        HDMI Port - Available
        Numeric Key Pad - Available
        Windows 7 (64 bit) Home Premium

    Now, the only difference between the ST2544 and the ES5034 is that the ST2544 has a maximum of 2 slots with 2 GB in each, so you can have a max of 4 GB RAM in it. The ES5034 can support 8 GB RAM, so in a couple of years, if I want to add another 4 GB RAM, I will be able to do it. The price for the ST2544 is USD 629.00 whereas the price for the ES5034 is USD 685, a difference of USD 55.00 (not a major amount, but still something extra). Is it worthwhile going for the ES5034? Which Windows operating system supports 8 GB of RAM?

    Read the article

  • Alternatives to amavis for RAM-bound server

    - by rsuarez
    I'm running a small VPS server that works as a web and mail server. It has only 256MB of RAM, and it's sucking 100MB of swap constantly. I've found that one of the culprits is amavis, taking about 30MB of resident memory, and I would like to ditch it and use some alternative. I don't have much mail daily, so it being a bit slower wouldn't be a problem. I'd like to avoid SpamAssassin altogether, if possible, because it's quite big even if used in offline mode. I'm already using RBLs and a few small blacklists, and used greylisting for a while but abandoned it because it gave me a few problems (I don't remember which; I think it was related to not properly configuring whitelists for several big ISPs). So, is there some alternative to amavis that I could use without much RAM (and if possible, CPU) usage? Thanks in advance.

    Read the article

  • How to get Bash shell history range

    - by Aniti
    How can I get/filter history entries in a specific range? I have a large history file and frequently use history | grep somecommand. Now, my memory is pretty bad and I also want to see what else I did around the time I entered the command. For now I do this: get a match, say 4992 somecommand, then I do history | grep 49[0-9][0-9]. This is usually good enough, but I would much rather do it more precisely, that is, see commands from 4972 to 5012 - 20 commands before and 20 after. I am wondering if there is an easier way? I suspect a custom script is in order, but perhaps someone else has done something similar before.
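
    A minimal sketch of one way to get an exact range, assuming bash: the fc builtin accepts history numbers directly, and an awk filter over history output can take a center point and a window (the function name hctx is just a placeholder):

        # List history entries 4972 through 5012
        fc -l 4972 5012

        # Or as a reusable function: show entries within +/- W of history number N
        hctx () { history | awk -v n="$1" -v w="${2:-20}" '$1 >= n-w && $1 <= n+w'; }
        # usage: hctx 4992        (20 before and 20 after, the default window)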

    Read the article

  • Issue with Visual C++ 2010 (Express) External Tools command

    - by espais
    Hi all, Normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based off of GNU make tools that are called when creating an executable. This is the error that I see whenever I call my external tool: ...\gnu\make.exe): *** couldn't commit memory for cygwin heap, Win32 error 487 The caveat is that it still works perfectly fine in VS 2005, as well as being called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005. Is there some setting somewhere that could cause this error to be thrown?

    Read the article

  • Lightweight, low cost enterprise backup solution

    - by Scott
    Looking for a backup solution primarily for Windows clients (XP/7) that will either back up to 2 different servers (1 on site, 1 off site - internet - can be our own server), or back up to 1 server, in which case we would need to somehow back that server up offsite/over the internet. By lightweight, I mean the backup client software should not eat up much memory and processor, since some of the client machines are older. I am used to using CrashPlan for home use - the pricing is nice for the amount of backup I get, and it works great / easy to install and get going - I can back up to my own machines locally and over the net. However, the price is going to be a little steep for enterprise-level backup, 1500+ machines. Possibly Zmanda and Bacula are good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

    Read the article

  • Linux (non-transparent) per-process hugepage accounting

    - by Dan Pritts
    I've recently converted some java apps to run with linux manually-configured hugepages. I've got about 10 tomcats running on a system and I am interested in knowing how much memory each one is using. I can get summary information out of /proc/meminfo as described in Linux Huge Pages Usage Accounting. But I can't find any tools that tell me about the actual per-process hugepage usage. I poked around in /proc/pid/numa_maps and found some interesting information that led me to this grossity:

        function pshugepage () {
            HUGEPAGECOUNT=0
            for num in `grep 'anon_hugepage.*dirty=' /proc/$@/numa_maps | awk '{print $6}' | sed 's/dirty=//'` ; do
                HUGEPAGECOUNT=$((HUGEPAGECOUNT+num))
            done
            echo process $@ using $HUGEPAGECOUNT huge pages
        }

    The numbers it gives me are plausible, but I'm far from confident this method is correct. Environment is a quad-CPU Dell, 64GB RAM, RHEL6.3, Oracle JDK 1.7.x (current as of 20130728)

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor of txt in lsof output are the executable file itself and shared objects. The lsof man page says that it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process was ElasticSearch itself, which is a java process, if that helps point someone in the right direction. I want to understand how this process is opening and using these files such that they get reported in this way. I'm trying to understand some memory utilization, and I suspect that these open files are related in some way to some metrics I'm seeing. The system is Solaris 10 x86.
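
    A minimal sketch of how to list just those mappings for the process, assuming a Solaris 10 userland with pgrep available and a single matching ElasticSearch/java process (the process match pattern is an assumption):

        # Show only the 'txt' entries (FD column) that lsof reports for the process
        lsof -p "$(pgrep -f elasticsearch)" | awk '$4 == "txt"'

        # Cross-check against the process address-space map
        pmap -x "$(pgrep -f elasticsearch)"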

    Read the article

  • 1 teraflop cluster?

    - by Adobe
    I want to buy a $40,000 1-teraflop cluster to keep in a room. What are the standard configurations? The cluster is supposed to do molecular dynamics simulations on biological systems. The selling company I'm dealing with has proposed 4 PCs with 8 cores each. It looks like I also need InfiniBand. Does someone have experience with this -- what physical memory should I buy, etc.? I know things change very quickly... Still, there might be a point or two to state. Edit: the OS is supposed to be Linux, and the application is GROMACS.

    Read the article

  • vps like [load] graphs

    - by foober
    I investigated a couple of tools but they were really annoying and not polished. kSar for example is supposed to graph sar output, but it doesn't work. There's a perl script around (sar2rrd) that's supposed to convert sar output to rrd format and generate graphs. Doesn't work (at least it doesn't like the output of "atsar" as per the debian/ubuntu package). Tried munin, but it wants to mess with http servers, and for some reason it didn't really work either. It displayed errors in the webpage generated by the http server it put on port 4949. So, is there a simple install-and-forget tool to generate daily load, cpu, memory and network graphs? It seems strange to me that this problem has not been solved; maybe I'm looking in the wrong places.

    Read the article

  • Windows CE Remote Kernel Tracker - gathering data in one (or more) files during a long period of time

    - by Nic
    I'm using the "Windows CE Kernel Tracker" tool to gather data from my embedded device. This is working fine for short period of time. It seems that the tool is getting data in memory and not on disk. I'm wondering if there is a way to take the data from the device and log it in one or more file on my development computer. This could be useful for long time test period : for instance, one night or one entire day. Any ideas? p.s. I don't want to log on to the device, I want to log on my development PC.

    Read the article

  • Could a computer act (dependably) as a wireless router for 200+ clients? [closed]

    - by awkwardusername
    That is, I have a Core 2 Duo E7500 at 2.93GHz, with 2GB memory. I plan to install either Windows Server 2012 or Zeroshell 2.0RC1, and it is also planned to include two PCIe wireless card adapters. It also has one ethernet port, and I will connect that to another machine which will be a database and web server. My plan is to have a corporate-level wireless intranet with 200+ clients. I cannot afford to buy routers because I want to operate at as close to zero cost as possible, utilizing my available resources. Is that plan plausible? Also, what minimum specs should my wireless cards have? @SvenW: Oh, I meant corporate on the deployment level. I am still an undergraduate and this is more of an educational and experimental work than an actual project. I got Windows Server 2012 for free though, and this isn't actually for commercial use.

    Read the article

  • Configure GNU screen so that it stores command histories in files

    - by user65950
    I would like to configure GNU screen such that it stores the command histories of all the different windows in different files. I know by default GNU screen does not store the command histories (of its different windows) in a file at all (it stores them in memory instead), but it might be possible to tell it to store them in files instead? The different command history files should have the names <session>.<window>.history, or similar. Does anyone have an idea how to do that? (Just to be clear, I want each GNU screen window to write a different file. I like that each window has a different history, and I typically run different types of commands in the different windows.)
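
    screen keeps its scrollback in memory, but the shell inside each window can be told to write its own history file. A minimal sketch for bash, using the STY and WINDOW environment variables that screen sets in every window (the ~/.screen_history directory name is just an assumption):

        # ~/.bashrc -- give every GNU screen window its own bash history file
        if [ -n "$STY" ] && [ -n "$WINDOW" ]; then
            mkdir -p ~/.screen_history
            export HISTFILE=~/.screen_history/${STY}.${WINDOW}.history
            # append to the file after every prompt so history is written continuously
            export PROMPT_COMMAND="history -a${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
        fi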

    Read the article

  • IIS 7.5 App Pool recycling. What is the best schedule for recycling?

    - by mikedopp
    I have been using IIS 7.5 since its release. I am also using Commerce Server 2007 SP2. Due to Commerce Server's need for memory and processor, I have the app pool the website is assigned to recycling at midnight every night. My question is: what is the best timetable for recycling heavy web app pools? I am looking to keep speed up and not bump potential customers while recycling multiple times a day, if possible. Another issue is that every few days the same app pool will hang and I have to force a reset of IIS to get it working again.
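
    For reference, recycle times can be scripted per pool with appcmd; a minimal sketch assuming the pool is named "CommerceSite" (a placeholder - substitute the real pool name), adding the midnight recycle plus a second one in a quiet daytime window:

        rem Add fixed recycle times to the pool's periodicRestart schedule
        %windir%\system32\inetsrv\appcmd set apppool "CommerceSite" /+recycling.periodicRestart.schedule.[value='00:00:00']
        %windir%\system32\inetsrv\appcmd set apppool "CommerceSite" /+recycling.periodicRestart.schedule.[value='13:00:00']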

    Read the article

  • Tracking down Data Execution

    - by Agnel Kurian
    I have some malware infecting one of our machines at home. It first showed up as winulty.exe. After investigating, I am of the opinion that winulty.exe itself is an uninfected file but is being modified after it has loaded into memory. Turning on Data Execution Prevention for all processes and services has confirmed this to be true. How do I track down the process responsible for this? I've used File Monitor from sysinternals.com to monitor winulty.exe and see this being accessed by the svchost.exe instance hosting most of the system services and also by dfrgntfs.exe. How do I know which service or which DLL has been infected?
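
    A minimal sketch of how to narrow down which services live in the svchost.exe instance that File Monitor shows touching the file (1234 is a placeholder for the PID reported by File Monitor):

        rem List the services hosted by each running svchost.exe instance
        tasklist /svc /fi "IMAGENAME eq svchost.exe"

        rem List the DLLs loaded into the suspect process (placeholder PID)
        tasklist /m /fi "PID eq 1234"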

    Read the article

  • How to add additional disks to a Windows 2008 KVM based Guest?

    - by taazaa
    I have a Win 2008 KVM based guest VM running on an Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as the "D:\" drive on the guest. I first created a raw image using:

        qemu-img create -f raw ~/vmdisk2.img 50G

    Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the xml file of the VM directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can clone it (hopefully) and then attach necessary storage based on the application at hand. Update: the xml of the VM before adding the second drive:

        <domain type='kvm'>
          <name>win08e-vm1</name>
          <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/win08e-vm1.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/home/taazaa/iso/Win08ER264.iso'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:7f:a7:ae'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
            <video>
              <model type='vga' vram='9216' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Thanks!
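
    For reference, a minimal sketch of the two usual ways to attach the image, assuming the guest name win08e-vm1 from the XML above, the image at /home/taazaa/vmdisk2.img (the path is an assumption), and hdb as a free IDE target (available virsh flags vary with the libvirt version):

        # Attach the raw image as a second IDE disk (add --persistent on newer
        # libvirt versions to keep the change across guest restarts)
        virsh attach-disk win08e-vm1 /home/taazaa/vmdisk2.img hdb --driver qemu --subdriver raw

        # Or edit the domain XML (virsh edit win08e-vm1) and add a second <disk>
        # element next to the existing one:
        # <disk type='file' device='disk'>
        #   <driver name='qemu' type='raw'/>
        #   <source file='/home/taazaa/vmdisk2.img'/>
        #   <target dev='hdb' bus='ide'/>
        # </disk>

    Either way, the new disk still has to be brought online and formatted inside the guest (Disk Management) before it shows up as D:.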

    Read the article

  • Patriot-2 RAM on ASUS P5K-E not running at PC2-8500?

    - by evan
    I have a P5K-E motherboard and recently upgraded to 8GB of RAM (4 2GB sticks of PC2-8500 1066 MHz - Patriot Viper 2). When looking at the RAM with PC Wizard 2010, it shows the RAM being recognized as PC2-6500. I've read elsewhere that this is a common problem and requires manually changing the DRAM voltage in the BIOS to 2.2. I've done this and I'm still getting it recognized as PC2-6500. (I also manually set the FSB speed to 1066MHz instead of AUTO.) Any ideas on how to get this memory working properly? Thanks in advance!

    Read the article

  • How do I revert back to official Linksys firmware from dd-wrt on WRT54G2 v1?

    - by Chris Moore
    I've been having trouble with dd-wrt on my Linksys WRT54G2 v1 router and want to go back to the stock Linksys firmware for it. The router has only 2MB of flash memory, so I'm running the 'micro' version of dd-wrt. My question is: what is the best way to do that? I could use the http://router/Upgrade.asp dd-wrt "firmware upgrade" web interface to do it, in which case there's a dropdown menu choice for "After flashing, reset to": "don't reset" or "reset to default settings". Which should I pick? Some people say that I should use a program called tftp.exe instead. I can probably gain access to a Windows machine if this is necessary. Which of these is the way to proceed? I don't want to brick the router if at all possible! Note: I used the 'wrt54g' tag because I wasn't allowed to create a 'wrt54g2' tag due to my low rep here.
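
    For reference, the tftp.exe approach mentioned above usually looks like this; a minimal sketch assuming the router answers on 192.168.1.1 and the stock Linksys image has been downloaded as linksys-stock.bin (the file name is a placeholder):

        rem From a Windows PC wired to a LAN port, push the image in binary mode
        rem during the router's boot-time flash window (or right after a reset)
        tftp -i 192.168.1.1 PUT linksys-stock.bin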

    Read the article

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. This is what is currently installed on my server: Here's the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs, there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in the /var/log/php-fpm/error.log file. Sidenote: both nginx and php-fpm have been configured to run under a local account called www-data, and the following folders exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache informations about FDs, frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and other from within the kernel
            # faster then read() + write()
            sendfile on;

            # send headers in one peace, its better then sending them one by one
            tcp_nopush on;

            # don't buffer data sent, good for small data bursts in real time
            tcp_nodelay on;

            # server will close connection after this time
            keepalive_timeout 60;

            # number of requests client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connection on non responding client, this will free up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if client stop responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. {
                return 404;
            }

            ## Magento uses a common front handler
            location @handler {
                rewrite / /index.php;
            }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ {
                rewrite ^(.*.php)/ $1 last;
            }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf (my php-fpm pool config):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. {
                return 404;
            }

            ## Magento uses a common front handler
            location @handler {
                rewrite / /index.php;
            }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ {
                rewrite ^(.*.php)/ $1 last;
            }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
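
    As posted, the section labelled /etc/php-fpm.d/www-data.conf above contains nginx directives rather than a pool definition. For comparison, a minimal sketch of a php-fpm pool that would match the unix:/var/run/php-fcgi-www-data.sock upstream used in the vhost; everything other than the socket path and the www-data user/group is an assumption:

        ; /etc/php-fpm.d/www-data.conf -- pool matching the nginx upstream socket
        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3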

    Read the article

  • Google Chrome gets really slow on 10+ open tabs

    - by Anton
    For some time already I have faced a problem with Google Chrome. I really love this browser, but on Windows 7, on a pretty decent machine (i5, 4GB RAM), it gets REALLY slow when I open, for instance, 10 techcrunch.com pages. Once I do that it becomes very difficult to scroll through pages and the general responsiveness of the browser goes down. And if I open 20+ or 30+ tabs there is a good chance all of them will crash. Does anyone have an idea? This happens to me on several PCs with Windows 7 64-bit. At 10 tabs there is 600-700MB of memory used by Chrome. The two systems that have the issue are both laptops with integrated graphics: one by Intel, the other an nVidia GeForce 310M.

    Read the article

  • Firewire 800 with Windows

    - by Amitesh
    I have two Windows machines with Windows Server 2003 installed on them. I am running a LabVIEW script on one machine and storing the data. But since it has less memory, I want to transfer data to the other machine using FireWire 800. Is it possible to configure the second machine so it appears just as an external HDD attached to the first, and write data directly to it? (This is possible with Macs.) I don't want to use ethernet (internet/TCP/IP) to transfer the data. Is it possible? Thanks.

    Read the article

  • Do I need to convert the older Access database, and, if so, how?

    - by octopusgrabbus
    I have an Access 2003 database. When I click on a pivot table, I get this message MS Access There isn't enough memory to complete the Automation object operation on the worksheet object. There is a lot of discussion concerning this message. Here is one link. http://community.spiceworks.com/topic/113228-access-2003-file-pivot-table-issue-when-opening-in-access-2010 But this particular link's explanation doesn't really go into fixing the problem in general, like fixing the pivot tables and getting things all nicely back together in the original Access database. That's why I am also interested in converting the database to 2010 format if that is possible. Are there instructions -- I cannot currently find them and would very much appreciate a link -- on dealing with this problem in a nice stepwise fashion?

    Read the article

  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough? Here is the situation. I run a Win2k VM on an old memory-constrained machine. I allocate it 378MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up the (very large) database, compress it using 7-Zip and ftp it to the backup site (all in a script). Unfortunately the compression part takes a massive amount of RAM (far exceeding the 378MB); it goes for the paging file and brings absolutely everything to a crawl, and literally takes 2-3 days if left unattended. So to fix this, I have to shut down the VM, temporarily give it 768MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script? Shut down the guest OS (I think I got this part), change the RAM allocation from 378 to 768, start the guest OS again, and then, 1 hour later, do everything in reverse.
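
    Virtual Server 2005 R2 is mainly scripted through its COM API (VirtualServer.Application) on the host rather than a command-line tool. A minimal sketch in VBScript of the sequence described above; the VM name is a placeholder, and the Memory property and the GuestOS.Shutdown/Startup methods are assumptions to verify against the Virtual Server SDK documentation:

        ' resize-vm.vbs -- run on the host with cscript; names below are assumptions
        Set vs = CreateObject("VirtualServer.Application")
        Set vm = vs.FindVirtualMachine("Win2kVM")   ' placeholder VM name

        vm.GuestOS.Shutdown()                       ' ask the guest to shut down cleanly
        WScript.Sleep 120000                        ' crude wait; poll the VM state in real use

        vm.Memory = 768                             ' RAM allocation in MB
        vm.Startup()                                ' boot the guest again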

    Read the article
