Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 419/626 | < Previous Page | 415 416 417 418 419 420 421 422 423 424 425 426  | Next Page >

  • IIS 7.5 app pool recycling: what is the best schedule for recycling?

    - by mikedopp
    I have been using IIS 7.5 since its release. I am also using Commerce Server 2007 SP2. Because of Commerce Server's appetite for memory and CPU, I have the app pool the website is assigned to recycle at midnight every night. My question is: what is the best timetable for recycling heavy web app pools? I want to keep things fast without bumping potential customers, even if that means recycling multiple times a day. A related issue is that every few days the same app pool will hang and I have to force an IIS reset to get it working again.
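
    For reference, recycle times can be scripted with appcmd; a minimal sketch, assuming a pool named "CommercePool" (substitute your own):

        %windir%\system32\inetsrv\appcmd set apppool "CommercePool" /+recycling.periodicRestart.schedule.[value='03:00:00']

    Each additional /+recycling.periodicRestart.schedule entry adds another recycle time, so off-peak recycles can be spread across the day.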

    Read the article

  • Alternatives to amavis for RAM-bound server

    - by rsuarez
    I'm running a small VPS that works as a web and mail server. It has only 256MB of RAM and is constantly using about 100MB of swap. I've found that one of the culprits is amavis, which takes about 30MB of resident memory, and I would like to ditch it and use some alternative. I don't get much mail daily, so a slower alternative wouldn't be a problem. I'd like to avoid SpamAssassin altogether if possible, because it's quite big even when used in offline mode. I'm already using RBLs and a few small blacklists, and I used greylisting for a while but abandoned it because it gave me a few problems (I don't remember which; I think it was related to not properly configuring whitelists for several big ISPs). So, is there an alternative to amavis that I could use without much RAM (and, if possible, CPU) usage? Thanks in advance.

    Read the article

  • txt file descriptor in lsof

    - by wfaulk
    In my experience, files that have the file descriptor txt in lsof output are the executable itself and its shared objects. The lsof man page says it means "program text (code and data)". While debugging a problem, I found a large number of data files (specifically, ElasticSearch database index files) that lsof reported as txt. These are definitely not executable files. The process is ElasticSearch itself, which is a Java process, if that helps point someone in the right direction. I want to understand how this process opens and uses these files such that lsof reports them this way. I'm trying to understand some memory utilization, and I suspect these open files are related in some way to metrics I'm seeing. The system is Solaris 10 x86.
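
    A sketch for digging further, assuming lsof's default column layout (FD is the fourth field), the stock Solaris 10 tools, and a single ElasticSearch process:

        lsof -p $(pgrep -f elasticsearch) | awk '$4 == "txt"'   # list only the txt entries
        pmap -x $(pgrep -f elasticsearch)                       # cross-check against the process's memory mappings

    If the same index files appear in pmap's list of mapped files, the process is mapping them with mmap(), which is the usual reason non-executables end up labeled as program text.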

    Read the article

  • Windows 7 frequently crashes when running 64-bit Debian on VirtualBox

    - by erin c
    I've been having a hard time with VirtualBox. I use it to run a 64-bit Debian guest on a Windows 7 host so I can code for Linux systems. The host generally crashes when I build my code in Eclipse CDT, or when I am doing other intensive operations. Should I lower the memory and core allocation? Is this some sort of VirtualBox problem? I upgraded to VirtualBox 4.1.8 and the problem still occurs. My virtual machine instance uses 1736MB of the 4GB of RAM, and 2 of the 8 processor cores. Still, the whole thing crashes once or twice every single day.
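
    If lowering the allocation helps, it can be scripted from the command line while the VM is powered off; a sketch, with "Debian64" standing in for the actual VM name:

        VBoxManage modifyvm "Debian64" --memory 1024 --cpus 1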

    Read the article

  • Remove postgres from Mac - installed in /usr/local/, can I just delete files?

    - by Richard
    I want to completely uninstall postgres and start from scratch - the version I have refuses to work with PostGIS 2.0. I have read other answers on how to do this, but none of them seem to fit the way postgres is set up on this machine. I'm not sure how postgres was originally installed on this machine - it wasn't via brew, Postgres.app, or the EnterpriseDB installer - but it seems to be living in /usr/local: $ which psql /usr/local/pgsql-9.1/bin/psql The postgres data itself is in /usr/local/var/postgres/. How can I kill it forever? Can I simply go to /usr/local and do rm -rf pgsql-9.1, and the same in /usr/local/var, and make sure there are no paths in my profile file? Or is there more to it than that? From memory, I think I'll need to delete the database files too somehow. Thanks for the help.
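
    A sketch of the brute-force route, based on the paths mentioned above (the data-directory location is an assumption; verify it first, e.g. with "ps aux | grep postgres" or "SHOW data_directory;" in psql, before deleting anything):

        /usr/local/pgsql-9.1/bin/pg_ctl -D /usr/local/var/postgres stop   # stop the server if it is running
        sudo rm -rf /usr/local/pgsql-9.1                                  # binaries
        sudo rm -rf /usr/local/var/postgres                               # data directory (the database files)

    Then remove any /usr/local/pgsql-9.1/bin entries from the PATH in your profile file.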

    Read the article

  • What are the best customizable monitoring tools for a cluster / distributed system?

    - by Adil
    I am working on a system with multiple servers. I am interested in monitoring some server-specific data like CPU/memory usage, disk/filesystem usage, network traffic, system load, etc., plus some data specific to my own processes. What open source tools are available that can serve this purpose? Ideally the tool would let me customize which parameters are monitored and let me monitor my own data by writing a plugin / agent. Any suggestions? I have heard of Nagios, Zabbix, and Pandora, but I am not sure whether they provide such an interface.
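
    For what it's worth, the plugin model in Nagios-style tools is just a small executable that prints one status line and signals its state through its exit code; a hypothetical check, with the load thresholds made up for illustration:

        #!/bin/sh
        # exit 0 = OK, 1 = WARNING, 2 = CRITICAL (the Nagios plugin convention)
        LOAD=$(awk '{print $1}' /proc/loadavg)
        if [ "$(echo "$LOAD > 8" | bc)" -eq 1 ]; then
            echo "CRITICAL - load average $LOAD"; exit 2
        elif [ "$(echo "$LOAD > 4" | bc)" -eq 1 ]; then
            echo "WARNING - load average $LOAD"; exit 1
        fi
        echo "OK - load average $LOAD"; exit 0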

    Read the article

  • Nginx + PHP-FPM on CentOS 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup gives me the following error: 502 Bad Gateway. This is what is currently installed on my server:

    Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my nginx logs and there's nothing in them (http://i.imgur.com/iRq3ksb.png), but I did see the error quoted in the title ("unable to read what child say: Bad file descriptor") in the /var/log/php-fpm/error.log file. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the folders referenced below exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel,
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close the connection after this time
            keepalive_timeout 60;

            # number of requests a client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close connections on non-responding clients; this frees up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if the client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler { rewrite / /index.php; }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    /etc/php-fpm.d/www-data.conf (my php-fpm pool config):

    I've got a file at /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
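
    For reference, a minimal php-fpm pool that listens on the socket the vhost's upstream points at might look like this (a sketch assuming the www-data account and socket path above, not the poster's actual file):

        [www-data]
        user = www-data
        group = www-data
        listen = /var/run/php-fcgi-www-data.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    A permission mismatch between nginx's worker user and the pool's listen.owner/listen.group on that socket is a common cause of 502s with this layout.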

    Read the article

  • FireWire 800 with Windows

    - by Amitesh
    I have two Windows machines, both with Windows Server 2003 installed. I am running a LabVIEW script on one machine and storing the data, but since that machine has less memory, I want to transfer the data to the other machine using FireWire 800. Is it possible to configure the second machine so that it appears simply as an external HDD attached to the first, and write data directly to it? (This is possible with Macs, via FireWire Target Disk Mode.) I don't want to use Ethernet (TCP/IP) to transfer the data. Is it possible? Thanks.

    Read the article

  • Configure GNU screen so that it stores command histories in files

    - by user65950
    I would like to configure GNU screen so that it stores the command histories of all the different windows in different files. I know that by default GNU screen does not store the command histories (of its different windows) in a file at all (it keeps them in memory instead), but perhaps it can be told to store them in files instead? The different command-history files should have names like <session>.<window>.history, or similar. Does anyone have an idea how to do that? (Just to be clear, I want each GNU screen window to write a different file. I like that each window has a different history, and I typically run different types of commands in the different windows.)
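
    Screen itself only logs window output (the logfile / log on settings), so one workaround is to have the shell write per-window history files using the STY and WINDOW variables screen exports; a sketch for bash, with the directory name an arbitrary choice:

        # in ~/.bashrc
        if [ -n "$STY" ] && [ -n "$WINDOW" ]; then
            mkdir -p "$HOME/.screen_history"
            export HISTFILE="$HOME/.screen_history/${STY}.${WINDOW}.history"
        fi
        # append after every command so parallel windows don't clobber each other on exit
        export PROMPT_COMMAND="history -a"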

    Read the article

  • Laptop Locking Up

    - by David
    I am having a very weird issue with a Lenovo W510 laptop: it locks up randomly. I have had it lock up during POST, during Linux boot-up, during login, and after login. I have performed the following tests on the laptop: I ran memtest; I took out the extra memory module; I swapped the HDD with another HDD that had Windows 7 on it (it BSOD'd, and restarted before anyone could possibly read the error line); and I tried taking the battery out and booting with only the power cord. The only other possible culprits I can think of are the motherboard or the PSU. If anyone has any advice, I'd appreciate it. If not, the HP guy will be here in a few days to fix it, and I would just love to call them up and tell them that the service is no longer needed.

    Read the article

  • Do I need to convert the older Access database, and, if so, how?

    - by octopusgrabbus
    I have an Access 2003 database. When I click on a pivot table, I get this message: MS Access: There isn't enough memory to complete the Automation object operation on the worksheet object. There is a lot of discussion of this message; here is one link: http://community.spiceworks.com/topic/113228-access-2003-file-pivot-table-issue-when-opening-in-access-2010 But that particular link's explanation doesn't really go into fixing the problem in general - fixing the pivot tables and getting things nicely back together in the original Access database. That's why I am also interested in converting the database to the 2010 format, if that is possible. Are there instructions - I cannot currently find them and would very much appreciate a link - for dealing with this problem in a stepwise fashion?

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is shutting down a memory-intensive application: it can take quite a while before the computer gets back up to speed. Is there some inherent cost to releasing memory? Or is it throttled by some kind of hard-drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and sorting by CPU activity usually shows which process is causing the problem, but sometimes there is no activity showing. And yes, I "show processes from all users". I have been wondering this since the days of Win2k :)
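
    One low-tech way to watch for that kind of hidden disk and paging churn is the built-in typeperf tool; a sketch (counter names as on an English Windows install):

        typeperf "\PhysicalDisk(_Total)\% Disk Time" "\Memory\Pages/sec" -sc 30

    If Pages/sec spikes while CPU stays flat, the slowdown is paging rather than computation.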

    Read the article

  • What is the meaning of "deassert" in this context?

    - by Sam.Rueby
    The English majors over at Dell provided me with this error message from a PowerEdge 2950: CPU2 Status: Processor sensors for CPU2, IERR was deasserted. I've Googled it, and random forum posts aren't providing a clear answer. It's also apparently not a word: http://dictionary.reference.com/browse/deassert?s=t I can guess the meaning. Assert: to state with assurance, confidence, or force. Okay, so the negative of that - the state of lack-of-confidence? What is this error message trying to tell me? Memory errors were grouped with this one: is it trying to say that IERR for CPU2 should be set, but is not? That the current system state is SNAFU but CPU2 sees everything as fine?

    Read the article

  • Asus laptop not retaining Aero theme when unplugged

    - by expiredninja
    The problem I'm having occurs when I unplug my laptop: it discards several elements of my theme. The taskbar becomes unlocked, my desktop background turns white, and the transparency of my windows goes away. I can fix these things by running the Aero troubleshooting utility, which mentions something about the color bit depth. These are the specs for the computer: Asus Laptop / Intel® Pentium® Processor / 15.6" Display / 4GB Memory; Model: X54H-BD3MA; SKU: 4005394. I'm sure there are duplicates of this question; I'm just not sure which one applies to me. Thank you. Also, I think someone should add an "unplugged" tag.

    Read the article

  • How to add additional disks to a Windows 2008 KVM-based guest?

    - by taazaa
    I have a Windows 2008 KVM-based guest VM running on an Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as the D:\ drive on the guest. I first created a raw image using:

        qemu-img create -f raw ~/vmdisk2.img 50G

    Then I tried attaching it using virsh attach-disk. When that did not work, I tried editing the VM's XML file directly. Neither seemed to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can (hopefully) clone it and then attach the necessary storage based on the application at hand.

    Update: the XML of the VM before adding the second drive:

        <domain type='kvm'>
          <name>win08e-vm1</name>
          <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid>
          <memory>1048576</memory>
          <currentMemory>1048576</currentMemory>
          <vcpu>2</vcpu>
          <os>
            <type arch='x86_64' machine='pc-0.12'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/win08e-vm1.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <disk type='file' device='cdrom'>
              <driver name='qemu' type='raw'/>
              <source file='/home/taazaa/iso/Win08ER264.iso'/>
              <target dev='hdc' bus='ide'/>
              <readonly/>
              <address type='drive' controller='0' bus='1' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:7f:a7:ae'/>
              <source bridge='br0'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
            <video>
              <model type='vga' vram='9216' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Thanks!
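
    For what it's worth, the second disk would normally be just another <disk> element inside <devices>; a sketch matching the layout above (the source path is an assumption, and hdb lands on IDE bus 0, unit 1 alongside the existing hda):

        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/home/taazaa/vmdisk2.img'/>
          <target dev='hdb' bus='ide'/>
          <address type='drive' controller='0' bus='0' unit='1'/>
        </disk>

    Shut the guest down first, add the element with virsh edit win08e-vm1, and start the guest again; IDE disks generally cannot be hot-plugged, which may be why attach-disk on a running guest appeared to do nothing. Inside Windows, the new disk then needs to be brought online and formatted in Disk Management before it shows up as D:\.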

    Read the article

  • Problem opening a password-encrypted .docx file in Word 2003

    - by molecule
    Hi all, I am having a problem opening a .docx file in my Word 2003. I have installed the Compatibility Pack for 2007, but when I try to open this particular file, I receive the error: "Word experienced an error trying to open the file. Try these suggestions. 1. Check the file permissions for the document. 2. Make sure there is sufficient free memory and disk space. 3. Open the file with the Text Recovery converter." I do not think it is any of those causes, as I am able to open it on a different PC with Word 2003 as well. I also do not have any issues opening non-password-encrypted .docx files. Has anyone experienced the same issue? Most posts on the internet suggest "open and repair", but as mentioned, I am able to open this file on another PC without any problems. Any advice is greatly appreciated. Thanks, George

    Read the article

  • How should I capture Linux kernel panic stack traces?

    - by Alnitak
    What's the current best practice for capturing full kernel stack traces on a Linux system (RHEL 5.x, kernel 2.6.18) that occasionally panics in a device driver? I'm used to the "old" SunOS way of doing things: crash dumps get written to swap, and on reboot the dump gets retrieved into the local file system. man 8 crash refers to diskdump, but that appears to be unsupported and/or deprecated. I've played with kdump, but it's unclear whether I can get a stack trace from that; triggering a panic via Magic SysRq didn't create one. It also seems wasteful to reserve so much memory (128MB) just for a kexec crash-recovery kernel.
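
    For reference, a minimal RHEL 5 kdump setup looks roughly like this (a sketch; the reservation size and dump options are assumptions to tune):

        # on the kernel line in grub.conf: reserve RAM for the capture kernel
        crashkernel=128M@16M

        # /etc/kdump.conf: write the vmcore to the local filesystem, compressed
        path /var/crash
        core_collector makedumpfile -c

        # enable the service, then deliberately panic the box to test capture
        chkconfig kdump on && service kdump start
        echo 1 > /proc/sys/kernel/sysrq
        echo c > /proc/sysrq-trigger

    After the reboot, the stack trace comes from running the crash utility against the saved vmcore (together with the matching kernel-debuginfo vmlinux) and issuing bt.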

    Read the article

  • How to check if a server runs in pressing mode

    - by Ice
    Hi there. Layout: I have at a customer site a server (Win2003 R2 SP2 Standard Edition, 32-bit) with SQL Server 2005 and some databases. The system starts with the /3GB switch. The system reports 3.25 GB of RAM, and Task Manager reports the sqlserver.exe process, at 2,758,255 K, as the process with the highest consumption. The OS normally splits RAM 50:50 between applications and itself, but here the /3GB switch is activated, so I think the applications' share is more than 50% of RAM. Knowledge (or rather, lack of knowledge): somebody told me that if the OS runs out of memory within its part of RAM, the server goes into "pressing mode". Questions: What is this pressing mode? Is pressing mode possible at all in this scenario? And what should be done to get more performance out of this SQL Server, besides optimizing the database and all that stuff?
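
    If "pressing mode" turns out to mean memory pressure, SQL Server's own view of its memory can be checked from the command line; a sketch using the sqlcmd tool that ships with SQL Server 2005 (the server name is an assumption):

        sqlcmd -S localhost -E -Q "DBCC MEMORYSTATUS"

    On a dedicated 32-bit database box, capping the instance's 'max server memory' setting a few hundred MB below physical RAM is a common way to keep the OS from being squeezed.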

    Read the article

  • How do I associate server traffic with a domain hosted on that server?

    - by morley
    I have three or four Linux servers, each of which hosts anywhere from 5 to 50 domains. Each domain has its own folder: /www/projectname/web/. Logs go in /www/projectname/log. However, if there's a traffic spike (or, as I see it on my end, a memory-usage spike), I'm not sure how to figure out which domain is responsible for the traffic without running tail -f on each of the projects and making an educated guess based on how fast things scroll. There's got to be a better way! There probably is, but I haven't seen it, and the last time I checked, bandwidth monitors only report system-wide load. If anyone knows how to do this the right way, please let me know. Thanks!
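
    As a quick first pass, the per-project logs can be ranked directly; a sketch assuming an access log named access.log in each project's log directory:

        # rank projects by total request count in their access logs
        for log in /www/*/log/access.log; do
            printf '%10d  %s\n' "$(wc -l < "$log")" "$log"
        done | sort -rn | head

    Rotating the logs at the start of a measurement window turns the same one-liner into requests-per-interval.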

    Read the article

  • Computer turns off and on after start... then goes dead

    - by Shiki
    I built a new PC from the following components:

        - CPU: Intel Core i7 950
        - MB: Gigabyte X58A-UD3R
        - RAM: 2x2GB Corsair i7 memory
        - VGA: Zotac AMP2 GTX260
        - HDD: 1 GreenSATA HDD (Western Digital 500GB RE2)

    When I turn it on, it runs for a few seconds with the fans at maximum speed, then turns off. Then it starts again by itself, runs with the fans at max speed, and nothing happens. First I suspected my PSU, a Chieftec 450AA. I borrowed a Chieftec 550AA and tried to start with that: exact same story. Any idea? Do I need a bigger PSU? I have not managed to localize the fault; I have never seen this turn-on, turn-off, turn-on behavior before, so even an explanation of that would already help people like me with the same problem.

    Read the article

  • Top ten security tips for non-technical users

    - by Justin
    I'm giving a presentation later this week to the staff at the company where I work. The goal of the presentation is to serve as a refresher/reminder of good practices that can help keep our network secure. The audience is made up of both programmers and non-technical staff, so the presentation is geared toward non-technical users. I want part of this presentation to be a top list of "tips". The list needs to be short (to encourage memory) and be specific and relevant to the user. I have the following five items so far:

        1. Never open an attachment you didn't expect
        2. Only download software from a trusted source, like download.com
        3. Do not distribute passwords when requested via phone or email
        4. Be wary of social engineering
        5. Do not store sensitive data on an FTP server

    I have two questions: Do you suggest any additional items? Do you suggest any changes to the existing items?

    Read the article

  • Virtual Server 2005 R2 kungfu

    - by AngryHacker
    Does Virtual Server 2005 R2 have a command-line interface that's versatile enough for the following? Here is the situation: I run a Win2k VM on an old, memory-constrained machine. I allocate it 378MB of RAM and the VM runs just fine. Once a month, inside the VM, I back up a very large database, compress it using 7-Zip, and FTP it to the backup site (all in a script). Unfortunately the compression part takes a massive amount of RAM (far exceeding the 378MB); it goes to the paging file and brings absolutely everything to a crawl, and literally takes 2-3 days if left unattended. To fix this, I shut down the VM, temporarily give it 768MB of RAM, and then the whole thing finishes in 20 minutes. So, is there a way to do the following automatically from the host machine in a script? 1. Shut down the guest OS (I think I've got this part). 2. Change the RAM allocation from 378 to 768. 3. Start the guest OS again. Then, 1 hour later, do everything in reverse.
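
    Virtual Server 2005 has no real CLI, but it is scriptable through its COM API; a sketch in VBScript, where the VM name is a placeholder and the member names (FindVirtualMachine, GuestOS.Shutdown, Memory, Startup) are assumptions to verify against the Virtual Server programming documentation:

        ' resize-and-restart sketch; a clean shutdown needs VM Additions in the guest
        Set vs = CreateObject("VirtualServer.Application")
        Set vm = vs.FindVirtualMachine("Win2kVM")
        Set task = vm.GuestOS.Shutdown()
        Do While Not task.IsComplete
            WScript.Sleep 1000
        Loop
        vm.Memory = 768    ' RAM in MB
        vm.Startup()

    The reverse step an hour later would be the same script with vm.Memory = 378, run from a scheduled task.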

    Read the article

  • Server monitoring for a medium-scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which works reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see: easy configuration of servers, and the ability to monitor CPU, network, memory, and specific processes. I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all ~200 of our servers, and without installing a plugin into each agent I won't be able to monitor processes.

    Read the article

  • Does the mplayer-mozilla plugin have a problem with long DIVX videos or is this a problem with the U

    - by creamcheese
    Pretty consistently, when I'm watching a long DivX movie (say two hours or so) in the browser on Karmic, the mplayer-mozilla player stops after half an hour or an hour and resets back to 0, so I have to reload the whole movie again over the web. It happens repeatedly, so I never get past the halfway point of the DivX version of the movie; I have to watch a Flash version instead. I don't know if this is an mplayer issue or a Firefox memory issue. Does anyone have any idea how to resolve this?

    Read the article

  • How to configure an ASUS motherboard.

    - by Absolute0
    I have an ASUS P7P55-M motherboard with an Intel Core i5-750 processor and 4 GB of RAM rated at 1600 MT/s. For some reason the motherboard's default settings make all the components run at half their optimum speeds. I have switched to the "D.O.C.P." profile, and supposedly everything now runs as it's supposed to (verified with CPU-Z). There is also an "X.M.P." profile and a manual mode. Is either DOCP or XMP safe to go with? I wouldn't use the manual mode, as I would likely mess something up real bad. XMP seems to be more memory-oriented.

    Read the article
