Search Results

Search found 90278 results on 3612 pages for 'server 2012'.

  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way, and I suspect at one point it was hacked badly. My Apache error log has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form, repeated hundreds of times today alone:

      [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php
      [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml

    (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks that still reference them, which is why Googlebot is so interested in crawling them. How do I fix this gracefully? I still would like Google to index the site; I just want to tell it somehow not to look for these files anymore.
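    One common fix, sketched here as a suggestion rather than anything from the original post: answer the retired URL with 410 Gone, which Googlebot drops from its crawl queue much faster than a plain 404. A minimal .htaccess sketch, assuming mod_alias is enabled:

      # .htaccess sketch: mark the retired script as permanently gone (assumes mod_alias)
      Redirect gone /calendar.php

    A robots.txt Disallow for the same path, or the URL removal tool in Google Webmaster Tools, are gentler alternatives if the URL might ever come back.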

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

      NameVirtualHost 127.0.0.1

      <VirtualHost 127.0.0.1:80>
          DocumentRoot C:/xampp/htdocs/
          ServerName localhost
          ServerAdmin admin@localhost
      </VirtualHost>

      <VirtualHost 127.0.0.1:80>
          DocumentRoot C:/xampp/htdocs/sub/
          ServerName sub.project.com
          ServerAdmin [email protected]
      </VirtualHost>

      <VirtualHost 127.0.0.1:80>
          DocumentRoot C:/xampp/htdocs/project/
          ServerName project.com
          ServerAdmin [email protected]
      </VirtualHost>

    And this in my hosts file:

      # development
      127.0.0.1 localhost
      127.0.0.1 project.com
      127.0.0.1 sub.project.com

    When I go to project.com in my browser, the project loads up successfully; same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages) I get this error:

      Internal Server Error
      The server encountered an internal error or misconfiguration and was unable to complete your request.

    The error log shows this (twice):

      [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
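    A hedged guess at the usual culprit (the original post shows no .htaccess): a pretty-URL rewrite rule that keeps re-matching its own target. A front-controller .htaccess like this sketch, assuming mod_rewrite and an index.php dispatcher, breaks the loop by excluding real files and directories:

      RewriteEngine On
      # stop the loop: skip requests that map to real files or directories
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      # everything else goes to the dispatcher exactly once
      RewriteRule ^ index.php [L]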

  • SQLSTATE[HY000]: General error: 2006 MySQL server has gone away

    - by Barkat Ullah
    Server details:

      RAM: 16GB
      HDD: 1000GB
      OS: Linux 2.6.32-220.7.1.el6.x86_64
      Processor: 6 Core

    Please see the link below for my # top preview: I can often see the error mentioned in the title in my Plesk panel, and my /etc/my.cnf configuration is as below:

      bind-address=127.0.0.1
      local-infile=0
      datadir=/var/lib/mysql
      socket=/var/lib/mysql/mysql.sock
      user=mysql
      max_connections=20000
      max_user_connections=20000
      key_buffer_size=512M
      join_buffer_size=4M
      read_buffer_size=4M
      read_rnd_buffer_size=512M
      sort_buffer_size=8M
      wait_timeout=300
      interactive_timeout=300
      connect_timeout=300
      tmp_table_size=8M
      thread_concurrency=12
      concurrent_insert=2
      query_cache_limit=64M
      query_cache_size=128M
      query_cache_type=2
      transaction_alloc_block_size=8192
      max_allowed_packet=512M

      [mysqldump]
      quick
      max_allowed_packet=512M

      [myisamchk]
      key_buffer_size=128M
      sort_buffer_size=128M
      read_buffer_size=32M
      write_buffer_size=32M

      [mysqlhotcopy]
      interactive-timeout

      [mysqld_safe]
      log-error=/var/log/mysqld.log
      pid-file=/var/run/mysqld/mysqld.pid
      open_files_limit=8192

    My httpd tuning lives in /etc/httpd/conf.d/swtune.conf, with this prefork configuration:

      <IfModule prefork.c>
      StartServers 8
      MinSpareServers 10
      MaxSpareServers 20
      ServerLimit 1536
      MaxClients 1536
      MaxRequestsPerChild 4000
      </IfModule>

    If I run grep -i maxclient /var/log/httpd/error_log I see this error every day:

      [Sun Apr 15 07:26:03 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting
      [Mon Apr 16 06:09:22 2012] [error] server reached MaxClients setting, consider raising the MaxClients setting

    I have tried to explain everything I changed to keep my server healthy, but most of the time my server is down and my sites take too long to load. Which parameters should I change to keep the server stable and the sites loading quickly?
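    A hedged first diagnostic step, not from the original post: confirm which limit is actually being hit before retuning. "Server has gone away" is most often a timeout or packet-size problem, and the live values plus the abort counters narrow it down:

      -- check the limits that commonly cause "MySQL server has gone away"
      SHOW VARIABLES WHERE Variable_name IN
          ('wait_timeout', 'interactive_timeout', 'max_allowed_packet');
      -- Aborted_clients/Aborted_connects climbing alongside the errors points to timeouts
      SHOW GLOBAL STATUS LIKE 'Aborted%';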

  • Oracle Application Server Performance Monitoring and Tuning (CPU load high)

    - by Berkay
    Oracle Application Server performance monitoring and tuning (CPU load high): I have just been hired by a company, and my boss gave me a performance issue to solve as soon as possible. I don't have any prior server-side Java EE experience. Let me describe what I've learned about the system so far, and why I still can't find the solution. We have an Oracle Application Server (10.1) and an Oracle Database server (9.2); the software guys wrote a fairly big Java EE project (project X) using JSF 1.2 with Ajax, which is only used in this project, and they actively use PL/SQL in their code. We started the application server (a Solaris machine) and everything seemed OK. Users start using the app on Monday from different locations (about 200 users have accounts; I just checked and the connection pool is set correctly, and sessions stay active for only 15 minutes). After some time (2 days) CPU utilization gets high, around 60%, and at night it stays the same even though only 1 or 2 users are online; it even starts consuming CPU freed up by other applications on the same server. If we don't restart the server, utilization reaches about 90% over the following 2 days, and the application becomes so slow that end users start calling. The main problem: the software engineers say the code is clean, and the system and DBA managers say we have the correct configuration and the other applications seem fine, so why does this happen only to application X? I copied the DB to a test platform and upgraded it to the latest version, and did the same with the application server (WebLogic), to rule out a known bug. I tested it by myself as a single user; in the WebLogic admin panel I can track the threads and dump them, and I noticed some threads flagged as hogging. When I check the manuals and follow the trace, it points me to the line number in a .java file where PL/SQL code is called. The software engineers say: yes, we have really complex PL/SQL code, but what's the relation to the application server? That's a DB server problem. I guess they're right... I know the question has many holes, and I'd like to give more detail; I appreciate any guidance. Thanks in advance. Edit: the server has enough CPU and memory to run more complex applications.
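    A hedged next diagnostic step (assuming shell access to the Solaris box and a JDK that ships jstack; the PID below is hypothetical): capture a few thread dumps while CPU is high and compare which stacks persist across dumps:

      # take three thread dumps 20 seconds apart; stacks stuck in the same
      # PL/SQL-calling JDBC frame across all dumps are the likely culprits
      for i in 1 2 3; do jstack -l 12345 > threaddump.$i.txt; sleep 20; done

    If the recurring frames sit inside JDBC calls, the time is being spent waiting on the database, which would support the engineers' view that the PL/SQL side, not the application server, needs the tuning.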

  • Is there a setting in Exchange Server 2007 that we can set to make these headers propagate and be received by a POP/IMAP client?

    - by Ruruboy
    When using the EWS Managed API to send email via Exchange Server 2007, I noticed that MAPI clients like MS Outlook display all custom headers, but when I use POP3/IMAP clients like MS Outlook Express, the custom headers do not appear in the opened message. Is there a setting in Exchange Server 2007 that we can set to make these custom headers propagate and be received by a POP/IMAP client? Also, why do the custom headers in the example below show up in lower case in MAPI clients like MS Outlook? Surprisingly, if we use the SmtpClient class to send the email instead, the headers are sent with their case preserved. Example of headers received by a MAPI client like MS Outlook via Exchange Server 2007:

      Received: from EXMAILVS1.blabla.com ([192.168.191.136]) by cashtp02.blabla.com ([XXX.XXX.XX.XXX]) with mapi; Mon, 20 Dec 2010 12:17:05 -0800
      Content-Type: application/ms-tnef; name="winmail.dat"
      Content-Transfer-Encoding: binary
      From: asfsdf <[email protected]>
      To: asdsdf <[email protected]>
      Date: Mon, 20 Dec 2010 12:17:04 -0800
      Subject: Please send me this header
      Thread-Topic: Please send me this header
      Thread-Index: AQHLoILek7g5cFgHQU6lHHfiKkdUMg==
      Message-ID: <[email protected]>
      Accept-Language: en-US
      Content-Language: en-US
      X-MS-Has-Attach:
      X-MS-Exchange-Organization-SCL: -1
      X-MS-TNEF-Correlator: <[email protected]>
      customheader1: hello ali
      customheader2: hello Jace
      MIME-Version: 1.0
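    For context, a hedged sketch of how such headers are typically attached with the EWS Managed API (the message variable and values are hypothetical; this mirrors the documented extended-property route for internet headers):

      // attach a custom internet header to an outgoing EWS message (sketch)
      var customHeader1 = new ExtendedPropertyDefinition(
          DefaultExtendedPropertySet.InternetHeaders,
          "customheader1",
          MapiPropertyType.String);
      message.SetExtendedProperty(customHeader1, "hello ali");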

  • In Windows 7, why can't I use perfmon against a remote server?

    - by SomeGuy
    I am on Windows 7 and trying to run perfmon against Windows 2003 and Windows 2008 servers, and I run into the same issue with all remote machines. When creating a data collector set, I specify a domain account that is in the administrators group on the remote machines (and in "Performance Log Users" and "Performance Monitor Users", to be safe). On the "Available Counters" screen, when I type in a remote computer name, perfmon locks up for a good 2-3 minutes before I can add any counters. I can then save the collector set; however, once it is saved, the go/stop buttons are disabled if I click the set in the left panel, and missing if I click the data collector set itself in the right panel. I can run data collector sets against my local machine with no problem. I am opening perfmon with my local account in both scenarios, and Remote Registry Service is started on each remote machine. What is going on?
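    A hedged workaround while the MMC buttons are missing (the set name here is hypothetical; logman ships with Windows 7): drive the saved data collector set from the command line instead of the UI:

      :: start the saved data collector set (add -s only if the set is stored on the remote box)
      logman start "MyCollectorSet"
      logman start "MyCollectorSet" -s REMOTESERVER

    If logman reports an access or RPC error here, that usually narrows the MMC symptom down to permissions or connectivity rather than a UI bug.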

  • How do I set a postgresql password in pgpass.conf for the Administrator account on Windows Server 2008?

    - by brad
    I have a pgpass.conf file that works well for my default user; it is in C:/Users/myuser/AppData/Roaming/postgresql/pgpass.conf and reads like so:

      localhost:5432:*:postgres:password1

    I have a process that runs under the Administrator account, but when I run whoami under this process I get nt authority\system. I want to be able to access the database from this process, but it gets stuck because it needs a password. I have tried putting the above pgpass.conf into C:/Users/Administrator/AppData/postgresql/pgpass.conf and C:/Users/Administrator/AppData/Roaming/postgresql/pgpass.conf, but it does not work. Is this the correct place for this file? Am I even able to do this as the Administrator? Unfortunately I cannot change the user that this process runs under.
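    A hedged pointer suggested by that whoami output: nt authority\system is the LocalSystem account, not Administrator, and LocalSystem's profile does not live under C:/Users. A sketch of the path to try instead (SysWOW64 rather than System32 if the process is 32-bit on 64-bit Windows):

      C:/Windows/System32/config/systemprofile/AppData/Roaming/postgresql/pgpass.conf

    with the same localhost:5432:*:postgres:password1 line inside.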

  • Solved: puppet master REST API returns 403 when running under Passenger (works when master runs from command line)

    - by Anadi Misra
    I am using the standard auth.conf provided with the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

      ### Authenticated paths - these apply only when the client
      ### has a valid certificate and is thus authenticated

      # allow nodes to retrieve their own catalog
      path ~ ^/catalog/([^/]+)$
      method find
      allow $1

      # allow nodes to retrieve their own node definition
      path ~ ^/node/([^/]+)$
      method find
      allow $1

      # allow all nodes to access the certificates services
      path ~ ^/certificate_revocation_list/ca
      method find
      allow *

      # allow all nodes to store their reports
      path /report
      method save
      allow *

      # unconditionally allow access to all file services
      # which means in practice that fileserver.conf will
      # still be used
      path /file
      allow *

      ### Unauthenticated ACL, for clients for which the current master doesn't
      ### have a valid certificate; we allow authenticated users, too, because
      ### there isn't a great harm in letting that request through.

      # allow access to the master CA
      path /certificate/ca
      auth any
      method find
      allow *

      path /certificate/
      auth any
      method find
      allow *

      path /certificate_request
      auth any
      method find, save
      allow *

      path /facts
      auth any
      method find, search
      allow *

      # this one is not strictly necessary, but it has the merit
      # of showing the default policy, which is deny everything else
      path /
      auth any

    Puppet master, however, does not seem to be following this, as I get these errors on the client:

      [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
      [sudo] password for amisr1:
      Starting Puppet client version 3.0.1
      Warning: Unable to fetch my node definition, but the agent run will continue:
      Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
      Info: Retrieving plugin
      Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
      Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
      Using cached catalog
      Error: Could not retrieve catalog; skipping run
      Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver conf file is as follows (and going by what the puppet site says, it is better to regulate access in auth.conf and then let the file server serve everything):

      [files]
      path /apps/puppet/files
      allow *

      [private]
      path /apps/puppet/private/%H
      allow *

      [modules]
      allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

      nginx version: nginx/1.3.9
      built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
      TLS SNI support enabled
      configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard Nginx puppet master conf:

      server {
          ssl on;
          listen 8140 ssl;
          server_name _;
          passenger_enabled on;
          passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
          passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
          passenger_min_instances 5;
          access_log logs/puppet_access.log;
          error_log logs/puppet_error.log;
          root /apps/nginx/html/rack/public;
          ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
          ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
          ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
          ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
          ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
          ssl_prefer_server_ciphers on;
          ssl_verify_client optional;
          ssl_verify_depth 1;
          ssl_session_cache shared:SSL:128m;
          ssl_session_timeout 5m;
      }

    Puppet is picking up the correct settings from the files mentioned, because config print points to /etc/puppet:

      [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
      async_storeconfigs = false
      authconfig = /etc/puppet/namespaceauth.conf
      autosign = /etc/puppet/autosign.conf
      catalog_cache_terminus = store_configs
      confdir = /etc/puppet
      config = /etc/puppet/puppet.conf
      config_file_name = puppet.conf
      config_version = ""
      configprint = all
      configtimeout = 120
      dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
      deviceconfig = /etc/puppet/device.conf
      fileserverconfig = /etc/puppet/fileserver.conf
      genconfig = false
      hiera_config = /etc/puppet/hiera.yaml
      localconfig = /var/lib/puppet/state/localconfig
      name = config
      rest_authconfig = /etc/puppet/auth.conf
      storeconfigs = true
      storeconfigs_backend = puppetdb
      tagmap = /etc/puppet/tagmail.conf
      thin_storeconfigs = false

    I checked the firewall rules on this VM; ports 80, 443, 8140 and 3000 are allowed. Do I still have to tweak anything in auth.conf to get this to work?

    Update: I added verbose logging to the puppet master and restarted Nginx; here is the additional info I see in the logs:

      Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
      Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
      Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
      Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
      10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

      [amisr1@blramisr195602 ~]$ sudo facter fqdn
      blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, and regenerated and signed the certificates on the master using the option --allow-dns-alt-names:

      [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
      Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
      [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
      Signed certificate request for blramisr195602.XXXXXX.com
      Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I am not sure why the logs show access rules being compared against the IP and not the hostname. Is there any Nginx configuration that changes this behavior?
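    A hedged reading of that last log excerpt, not from the original thread: the master cannot reverse-resolve 10.209.47.31, so the ACL engine falls back to matching rules against the bare IP, and hostname-based rules such as allow $1 never match, leaving the default deny. If fixing reverse DNS (or adding the agent to /etc/hosts on the master) is not an option, one sketch of a workaround with Puppet 3's auth.conf is an explicit IP allow:

      # sketch: additionally allow the agent subnet by IP when PTR lookups fail
      path ~ ^/catalog/([^/]+)$
      method find
      allow $1
      allow_ip 10.209.47.0/24

    The subnet here is an assumption based on the agent's address, and the same allow_ip line would be needed on the other affected paths.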

  • The RPC service keeps stopping

    - by oshirowanen
    For some reason, the RPC service keeps stopping, which prevents me from remoting into the server. When this happens I have to make my way to the server in person, because simply starting the service again (which does not complain when starting) does not let me remote in; only a full restart of the server does. It seems to be happening a few times a day. Does anyone know why this might be happening?
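    A hedged first diagnostic step (assuming a Windows version that ships wevtutil, i.e. Server 2008 or later): pull the Service Control Manager events from around the failures to see what is killing the service:

      :: last 20 Service Control Manager events, newest first, as readable text
      wevtutil qe System /q:"*[System[Provider[@Name='Service Control Manager']]]" /c:20 /rd:true /f:text

    The surrounding Application-log entries are worth checking too, since a crashing dependent process is a common cause of RPC-related service failures.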

  • How to remove all CouchDB versions in Ubuntu 10.04 (server)? (after multiple installs)

    - by DjangoRocks
    Hi all, I have done multiple installs of CouchDB using:

      sudo aptitude install couchdb
      sudo apt-get install couchdb

    and more recently based on the instructions found at http://wiki.apache.org/couchdb/Installing_on_Ubuntu. May I know how to uninstall or remove all the above installations? Best regards.

    UPDATE: I've tried running the following commands:

      apt-get remove couchdb
      apt-get purge couchdb

    but received the following errors:

      (Reading database ... 39814 files and directories currently installed.)
      Removing couchdb ...
      invoke-rc.d: initscript couchdb, action "stop" failed.
      dpkg: error processing couchdb (--remove):
       subprocess installed pre-removal script returned error exit status 1
      invoke-rc.d: initscript couchdb, action "start" failed.
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       couchdb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    May I know how to fix this? On issuing the command dpkg -l | grep couchdb I received the following response:

      rF  couchdb      0.10.0-1ubuntu2    RESTful document oriented database, system D
      iF  couchdb-bin  0.10.0-1ubuntu2    RESTful document oriented database, programs

    How do I uninstall CouchDB? I think there's some file corruption?
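    A hedged sketch of the usual way out when a package's pre-removal script keeps failing. This bypasses dpkg's safety checks, so it is a last resort, and the exact init script name is an assumption:

      # stop the daemon by hand so the broken init script is not needed
      sudo /etc/init.d/couchdb stop
      # replace the failing pre-removal script with a no-op, then retry the purge
      sudo sh -c 'echo "#!/bin/sh" > /var/lib/dpkg/info/couchdb.prerm'
      sudo apt-get purge couchdb couchdb-bin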

  • How can I disable ipv6 on Ubuntu Server 8.04?

    - by Boden
    I'm trying to run Dell OMSA on Ubuntu 8.04. However, it's binding to ipv6 and not to an ipv4 address. I can't seem to figure out how to change this behavior. So, since I don't need ipv6 support, I'd like to just disable it and see if that clears things up. I've tried blacklisting ipv6 in /etc/modprobe.d/blacklist (blacklist ipv6), and turning it off in /etc/modprobe.d/aliases (alias net-pf-10 off). I'm seeing both solutions recommended in forums and blogs, but neither works.
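    A hedged sketch of the combination usually reported to work on 8.04 (its 2.6.24 kernel predates the disable_ipv6 sysctl, so the module itself has to be kept out), followed by an initramfs rebuild so the change survives early boot:

      # /etc/modprobe.d/blacklist: both lines together, not one or the other
      blacklist ipv6
      alias net-pf-10 off

    then:

      sudo update-initramfs -u && sudo reboot

    If OMSA still prefers IPv6 after that, checking whether it exposes an explicit IPv4 bind address in its own configuration may be the less invasive route.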

  • Adding License to VMware Server 2 via scripting command?

    - by andyt25
    Hi all, I recently discovered the vimsvc/license command in vmware-vim-cmd and was trying to use it to automatically add my license key to a fresh VMware installation:

      vmware-vim-cmd -H hostip -O portnumber vimsvc/license --source file '/path/to/plaintext-file-that-contains-my-license-key.txt'

    plaintext-file-that-contains-my-license-key.txt contains my key in XXXXX-XXXXX-XXXXX-XXXXX format; I've also tried it with an extra carriage return at the end. Adding the key that way doesn't work, however. I always get the following error message:

      [200] Reading local file: /path/to/plaintext-file-that-contains-my-license-key.txt
      [200] Size of file is 24 bytes.
      returned were XXXXX-XXXXX-XXXXX-XXXXX
      [200] Changing license source to: file:/path/to/plaintext-file-that-contains-my-license-key.txt
      [500] Caught unexpected exception
      Type: N5Vmomi5Fault17NotEnoughLicenses9ExceptionE
      what() =vmodl.fault.NotEnoughLicenses
      GetMsg() = There are not enough licenses installed to perform the operation.

    It's kinda silly to require a license to be able to add a license, don't you think? ;-) So how do I go about adding the key via script? I would like to avoid any interaction, as I have the rest of the install fully scripted and non-interactive. Kind regards, Stefan

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and logshipping solution to achieve the following: a 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time) and a 5-minute Recovery Time Objective (the db must be up and running again within 5 minutes). I am considering logshipping only, which I think is pushing it, but I want to know if anyone else knows how to achieve this. Some other info for consideration: there is a 40 Mbit/sec fiber channel between the primary and disaster recovery (DRC) sites, the sites are about 600 km apart, at close of business the amount of data generated is predicted to be about 150 MB/sec, and log backup is planned for every 5 min. Doing some rough calculation I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to a 3-minute logshipping window, that equals roughly 900 MB over 3 minutes at 100% network efficiency, leaving about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used can restore 900 MB in 1 minute, but assume it can. For the close-of-business scenario: at 150 MB/sec, and considering the 3-minute logshipping window, that equals about 27 GB of data over 3 minutes... I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
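    A hedged recheck of the arithmetic, taking the link at 40 Mbit/s as the poster's own 5 MB/sec conversion implies:

      40 Mbit/s ÷ 8         = 5 MB/s  ≈ 300 MB/min
      3 min window × 5 MB/s = 900 MB movable per window
      150 MB/s × 180 s      = 27,000 MB = 27 GB generated at close of business
      27 GB ÷ 900 MB        = 30× the link's capacity

    So at peak the log volume outruns the pipe by roughly a factor of thirty, which supports the conclusion that logshipping alone cannot meet the stated RPO/RTO at close of business.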

  • Can Resource Governor for SQL Server 2008 be scripted?

    - by blueberryfields
    I'm looking for a method to adjust Resource Governor settings automatically, in real time. Here's an example: imagine that I have 10 applications, each hitting a different database on the same database machine. For normal operations they do not hit the database very hard, so I might want each one to have 10% CPU power reserved. Occasionally, though, one or two of them might spike and run an operation which could really use the extra power to run faster. I'd like to be able to adjust to compensate (say, reducing the non-spiking apps to 3% and splitting the difference between the spiking apps). This is a kind of poor man's method of trying to dynamically adjust resource allocation and priorities. Scripts (or something script-like) are preferred, since the requirement is for meta-level adjustments to be possible in real time.
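    In principle, yes: resource pools are ordinary T-SQL objects, so a monitoring job could reallocate CPU on the fly. A hedged sketch (the pool names are hypothetical; the RECONFIGURE statement is required for pending pool changes to take effect):

      -- give the spiking app's pool more guaranteed CPU, squeeze the idle pools
      ALTER RESOURCE POOL spiking_app_pool
          WITH (MIN_CPU_PERCENT = 30, MAX_CPU_PERCENT = 100);
      ALTER RESOURCE POOL quiet_app_pool
          WITH (MIN_CPU_PERCENT = 3);
      -- apply the pending pool changes
      ALTER RESOURCE GOVERNOR RECONFIGURE;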

  • List of common ways a server could be shut down unexpectedly?

    - by SpawnCxy
    After running a bash fork bomb that took my webserver down, I think I should be more careful, even when not running as root. I thought it would be totally fine since I wasn't root, so I ignored the warning and ran the bash fork bomb, which is :() { :|:& }; : (please don't run it if you don't understand this code, because it will take your system down). I think I need a list of common ways a server could be shut down unexpectedly even by a non-root user. Any suggestion would be appreciated. Regards
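    A hedged aside on the fork-bomb case specifically: per-user process caps are the standard guard, e.g. in /etc/security/limits.conf (the values are illustrative, and the * wildcard deliberately does not apply to root):

      # cap the number of processes any non-root user may spawn
      *    soft    nproc    1024
      *    hard    nproc    2048

    Similar ulimit-style caps on memory and open files blunt most of the other easy non-root denial-of-service tricks, such as memory exhaustion, file-descriptor exhaustion, or filling /tmp.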

  • Hosting discussion forums on Windows 2008 Server / IIS, what software to use?

    - by Lasse V. Karlsen
    A plan has arisen to look into hosting discussion forums related to our product on our own servers, and I can't seem to find any pure .NET or Windows-based discussion forum software. Since we're a pure Windows-based company, installing something that requires MySQL or Linux would demand administration knowledge we don't currently have. What are our options? Every site I find that shows how to set up discussion forums on IIS just takes one of the many LAMP-based packages and tweaks IIS to run PHP or similar. Isn't there any .NET-based discussion forum software? Free or not doesn't matter at this stage; right now we're just looking for options.

  • How to create a service running a .bat file on Windows 2008 Server?

    - by abyx
    I've created the service using:

      sc create myService binpath=myservice.bat

    But when I start it, it fails with the following error message:

      [SC] StartService FAILED 1053:
      The service did not respond to the start or control request in a timely fashion.

    On Win2k3 I used the srvany.exe from the Resource Kit, but there's no Resource Kit for Win2k8. For the time being I've installed srvany.exe on my machine, but I don't think that's the best way to do it. Thanks!
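    For reference, a hedged sketch of the classic srvany wiring (paths hypothetical; srvany.exe itself still comes from the 2003 Resource Kit). A service binary must answer Service Control Manager callbacks, which a bare .bat cannot, hence error 1053; srvany provides that wrapper:

      :: register srvany as the service binary (sc requires the space after binPath=)
      sc create myService binPath= "C:\tools\srvany.exe"
      :: tell srvany which program to launch when the service starts
      reg add HKLM\SYSTEM\CurrentControlSet\Services\myService\Parameters /v Application /t REG_SZ /d "C:\scripts\myservice.bat"

    On Win2k8, a third-party wrapper such as NSSM, or a scheduled task set to run at startup, are the usual srvany-free alternatives.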

  • What kind of server hardware is roughly necessary to serve a website to 10k users?

    - by jcmoney
    I've been looking at VPSes, and the specs they offer for entry-level setups seem somewhat surprising to me. I'm new to this topic, but many VPSes offer less than 512MB of memory, while my laptop has 4GB, so I am curious: what does it actually take in terms of hardware to serve, say, 10k users (say 5k daily active users)? I figure a large number of factors can sway this a lot, but just for benchmarking, say the site is a social networking site written in PHP using MySQL + Apache that's not doing anything unusual like serving lots of media. Essentially a very basic Facebook, minus the absurd number of photos and videos. What about 100k users (50k daily active)? 1 million (500k daily active)? Thanks in advance.
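    A hedged back-of-envelope, with every number an assumption: 5k daily actives averaging one 10-minute visit spread over a 12-hour day gives about 5,000 × 600 s / 43,200 s ≈ 70 concurrent users; at one page view per 20 seconds each, that is only ~3-4 dynamic requests per second, which a tuned PHP+MySQL stack can plausibly serve from a 512MB VPS if the working set fits in memory. The same arithmetic scaled ×10 or ×100 is usually where a separate database box and a caching tier enter the picture.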

  • ISC DHCP - Force clients to get a new IP address, instead of being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of the workstations are configured to have fixed addresses between .3 and .40 (the motivation behind that choice is mostly management/political, much like here). DHCP leases are given out in the range of .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty). When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1 - .10 reserved for network appliances, switches, and other infrastructure. .200 will remain designated for servers. The addressing space in between should be available to clients, and IPs should be dynamically allocated (Edit: instead of automatic, as originally mentioned) by the server. My address pool on the Windows Server looks like this:

      192.168.0.1 192.168.0.254 (Address range for distribution)
      192.168.0.1 192.168.0.10 (IP addresses excluded from distribution)
      192.168.0.200 192.168.0.254 (IP addresses excluded from distribution)

    Currently, we have all of our clients still on the .3 - .40 range, and a few machines still active in .100 - .175 (although there are lots of powered-off devices that still have expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP server, how can I prevent clients from receiving a lease with an IP address that is currently held by a client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to be 192.168.0.10 - 192.168.0.199, is there a way to force clients not to re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server authoritative like the ISC implementation? The dhcpd.conf from the Debian server:

      ddns-update-style none;
      authoritative;
      default-lease-time 43200; # 12 hours
      max-lease-time 86400; # 24 hours

      subnet 192.168.0.0 netmask 255.255.255.0 {
          option routers 192.168.0.1;
          option subnet-mask 255.255.255.0;
          option broadcast-address 192.168.0.255;
          range 192.168.0.100 192.168.0.175;
      }

      host workstation-1 {
          hardware ethernet 00:11:22:33:44:55;
          fixed-address 192.168.0.3;
      }

    ... and so on until 192.168.0.40.
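    A hedged partial answer to the duplicate-lease worry (assuming the netsh dhcp context on the 2008 R2 box): Windows DHCP can be told to ping a candidate address before offering it, so clients holding live leases from the old server won't be handed duplicates, at the cost of a slightly slower DHCPOFFER:

      rem probe each candidate address up to 2 times before offering it
      netsh dhcp server set detectconflictretry 2

    The same knob lives in the DHCP MMC as "Conflict detection attempts" under the server's IPv4 Advanced properties.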

    Read the article

< Previous Page | 408 409 410 411 412 413 414 415 416 417 418 419  | Next Page >