Search Results

Search found 91854 results on 3675 pages for 'server hardware'.

  • IIS web service responds on server, not from remote client

    - by Aharon Manne
    I have installed a web service on a server running IIS (v6, as far as I can tell). There is another service installed, which responds as expected. My service responds correctly when a browser is pointed to localhost, but there is no response when a remote client tries to query the service. Fiddler on the remote client simply reports a timeout. Wireshark on the remote client shows no response at all from the server, no NACK, nothing. Wireshark on the server detects no query at the relevant port (the service is installed on port 8080). There are no relevant entries in the event viewer. Obviously there is some issue of permissions or authentication. I have tried to compare my service to the service that works, but I have not been able to locate relevant parameters. Any help would be greatly appreciated.
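
    One thing worth ruling out first (a guess on my part) is Windows Firewall on the server itself, since nothing reaches port 8080 at all. A quick check, assuming Server 2003-era tooling (the rule name is just a label I made up):

        netstat -ano | findstr :8080
        netsh firewall add portopening TCP 8080 MyWebService8080 ENABLE

    On Server 2008 or later the equivalent would be netsh advfirewall firewall add rule ... protocol=TCP localport=8080 dir=in action=allow.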

  • MySQL ADO.NET Connector & MSSQL Integration Services

    - by user1114330
    Here I am, day three... attempting to sync a data view on a Windows Vista box (64-bit) running MSSQL 2012 and Visual Studio 2010. Sanity is slipping and hunger for progress fills my attention. I went through hell trying to get the MySQL ODBC drivers to do the job, but to no avail... everyone seems to be lost, and all the threads I can find offer solutions that do not work for me. The problem: System DSNs not being seen by SSIS ("SSIS DSN Not Showing as ODBC Data Source"). So I decided to try the ADO.NET connector... and to my surprise it actually appears in the data source list in SSIS. I take off running: I create a Data Flow Task, create an ADO.NET Source (a local MSSQL DB)... all is good as usual. Then I move swiftly on to creating an ADO.NET Destination, enter my credentials... wow, I am finally selecting a database on my Linux server! Happy, thinking I have finally figured out a way to get the job done. Then I move to mappings... nope, something is wrong. I am getting an error that hurts my eyes:

        Pipeline component has returned HRESULT error code 0xC0208457 from a method call.
        Error at Data Flow Task [ADO NET Destination [81]]: Failed to get properties of external columns.
        The table name you entered may not exist, or you do not have SELECT permission on the table object,
        and an alternative attempt to get column properties through connection has failed. Detailed error
        messages are: You have an error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near '"database".tablename' at line 1.
        The descriptor file on path C:\Program Files (x86)\Microsoft SQL Server\110\DTS\ProviderDescriptors\
        does not contain schema information for a connection of type MySQL.Data.MySqlClient.MySqlConnection.

    So it looks like SSIS can't get the column information, and therefore I cannot map the tables properly. Any ideas on this would be ultra helpful... thanks in advance to all!
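
    If it matters, the "database".tablename part of the error makes me wonder whether the ADO.NET destination is double-quoting identifiers, which MySQL only accepts as identifier quoting when ANSI_QUOTES is enabled. Something like the following is what I would try next (just a guess, not a confirmed fix):

        -- on the MySQL server (requires sufficient privileges)
        SET GLOBAL sql_mode = 'ANSI_QUOTES';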

  • Catastrophic Failure opening ODBC via Citrix

    - by Joshdan
    We recently had our Citrix server crash unexpectedly. When it came back up, there was a new issue -- every ODBC connection fails with "Catastrophic Failure" (0x8000FFFF). The issue is limited to Citrix / ICA connections; logging in as the same user via RDP works as usual. The following code is my minimal test case (for wscript):

        ''// test_odbc.vbs
        strConn = "Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\files\;"
        Set rs = CreateObject("ADODB.recordset")
        strSQL = "SELECT * FROM myFile.csv"
        wscript.echo "Press OK to Test"

        ''// This line breaks over Citrix, but not over Terminal Services
        ''// ----------------------
        rs.open strSQL, strConn, 3,3
        ''// ----------------------

        wscript.echo rs("a")

    Any insight would be greatly appreciated. Windows Server 2003 SP1, Citrix MetaFrame Presentation Server 4.0. Clients include at least versions 10.2-11 running on 2000-Vista, OS X. The ODBC error happens whether a DSN is used or not, on at least Access, MS-SQL, and CSV. Connections both through the SSL Gateway and directly. There have been a few users actually able to log in without trouble, but I can't pin down anything special about them.

  • /data/tmp on database server?

    - by Mellon
    I am on a Linux Ubuntu machine with MySQL installed. My teacher gave out an assignment which says "copy cars.dat to /data/tmp on the MySQL database server" without any explanation, and I do not know what "/data/tmp on the database server" means exactly. After that I need to execute a SQL statement like:

        LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars

    So, what does "copy cars.dat to /data/tmp on the database server" mean, given that there is no /data/tmp directory at all? I checked the /etc/mysql/my.cnf file, which contains definitions such as:

        ...
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        ...

    Does it mean to copy cars.dat to the tmpdir, which is just /tmp under the root directory?
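
    My current guess at what is meant -- simply create the directory on the machine where MySQL runs and copy the file there -- would look like this (the database name is a placeholder, and I assume the mysql user needs read access to the file):

        sudo mkdir -p /data/tmp
        sudo cp cars.dat /data/tmp/
        sudo chmod 644 /data/tmp/cars.dat
        mysql -u root -p -e "LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars" mydatabase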

  • Troubleshooting mailserver (Postfix, Dovecot) on Ubuntu Server 9.10?

    - by Christoffer
    I have configured a mail server with Postfix and Dovecot on Ubuntu Server 9.10. I followed the guidelines here (using Maildir): https://help.ubuntu.com/community/Postfix https://help.ubuntu.com/community/Dovecot The tests seemed alright so I connected it to GMail which is able to connect and fetch e-mails. But since there's no e-mail in the Maildir/ directory I can't decide if the problem is Postfix or Dovecot. And I am totally new to mailservers so I don't know where to start troubleshooting. So, I want to start by testing Dovecot. How can I create a fake "Hello World"-email directly on the server (using a text editor) so that I can try to fetch it with GMail? If Dovecot is alright, where do I start looking for errors in Postfix? Thank you for your time. Christoffer
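
    To illustrate what I mean by a fake e-mail: my understanding is that a Maildir message is just a plain file with headers, dropped into Maildir/new/ under a unique name, so something like this (addresses and filename are made up) should be enough for Dovecot to serve it:

        mkdir -p ~/Maildir/{new,cur,tmp}
        printf 'From: test@localhost\nTo: me@example.com\nSubject: Hello World\n\nHello World\n' \
            > ~/Maildir/new/$(date +%s).test.$(hostname)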

  • Installing and running two postgresql versions on different ports (or two instances of same server)

    - by Andrius
    I have PostgreSQL 9.1 installed on my machine (Ubuntu). I need another PostgreSQL server running next to the old one. The exact version does not matter, but I'm thinking of using 9.2. How can I properly install and run another PostgreSQL version without breaking the old one (as an upgrade would)? The two versions should run independently on different ports -- the old one on 5432 and the new one on 5433, for example. The reason I need this is two OpenERP versions' databases: if I run two OpenERP servers (of different versions) against a single PostgreSQL port, the new OpenERP version detects the old version's database, tries to load it, and crashes because it uses different schemas. P.S. Or maybe I could just run the same PostgreSQL server on two ports?

    Update: So far I tried this:

        /usr/lib/postgresql/9.1/bin/pg_ctl initdb -D main2

    It created a new cluster. I changed the port to 5433 in the new cluster's postgresql.conf file, then ran:

        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start

    I got the response "server starting". But when I tried to enter the new cluster's template database with:

        psql template1 -p 5433

    I got this error:

        psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5433"?

    Also, when I now try to stop the server with:

        /usr/lib/postgresql/9.1/bin/pg_ctl -D main2 -l logfile start

    I get this error:

        pg_ctl: PID file "main2/postmaster.pid" does not exist
        Is server running?

    So I don't understand whether the server is running and what I'm missing here.

    Update: Found what was wrong. Stupid me -- I didn't notice that the port line I changed in the .conf file was already commented out, so the first time around I didn't actually change anything, and the cluster used the default 5432 port.
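
    For the record, the route I would look at next time is Ubuntu's postgresql-common wrappers, which are supposed to manage a second cluster on its own port for you. A sketch, assuming the 9.2 packages are installed (untested by me):

        sudo pg_createcluster 9.2 main2 --port=5433
        sudo pg_ctlcluster 9.2 main2 start
        psql -p 5433 template1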

  • Connecting to a LDAPS server

    - by Pavanred
    I am working on a development machine and I am trying to connect to my LDAP server. This is what I do:

        telnet ldaps- 686

    and the response is:

        Could not open connection to the host on port 686 : connect failed

    But the strange part is that when I connect to my server with:

        telnet ldap- 389

    the connection is successful. My question is, why does this happen? Do I have to install an SSL certificate on the client machine where I make the call from? I do not know much about this. I know for a fact that the LDAP server is working fine, because other applications are successfully using it.
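
    For what it's worth, I was also going to test the SSL side directly with openssl, assuming the standard LDAPS port 636 (the hostname below is a placeholder for my server):

        openssl s_client -connect ldapserver.example.com:636 -showcerts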

  • PHP Mail Relay via Remote SMTP Server

    - by Toqeer
    We have a PHP application running on Linux which sends emails to its users. Currently php.ini is configured to send via the local sendmail, but we have a separate mail server for our organization's domain. I want the PHP application to send its emails via that remote SMTP server, so the emails have the correct SPF record and are signed with DKIM. But I could not see an option in php.ini to specify a remote host to relay mail through -- the SMTP setting there is for Windows only. I saw some posts suggesting phpMailer, but I could not find out how to configure it so that all our PHP applications send via our remote SMTP server.
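
    From the phpMailer examples I have seen, the SMTP configuration would look roughly like this -- host, port and credentials below are placeholders, and I have not verified it against our mail server yet:

        <?php
        require 'class.phpmailer.php';          // path depends on how PHPMailer is installed

        $mail = new PHPMailer();
        $mail->isSMTP();                        // relay through the remote SMTP server instead of local sendmail
        $mail->Host       = 'mail.example.org'; // our organisation's mail server (placeholder)
        $mail->Port       = 587;
        $mail->SMTPSecure = 'tls';
        $mail->SMTPAuth   = true;
        $mail->Username   = 'app@example.org';
        $mail->Password   = 'secret';

        $mail->setFrom('app@example.org', 'Our Application');
        $mail->addAddress('user@example.com');
        $mail->Subject = 'Test message';
        $mail->Body    = 'Hello from the application';

        if (!$mail->send()) {
            error_log('Mailer error: ' . $mail->ErrorInfo);
        }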

  • After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB

    - by Tom Crane
    (I have also posted this on TechNet, but I'm running out of ideas.) I've upgraded from Windows Server 2008 R2 Standard to Enterprise in order to make use of more RAM. The server previously had 32GB of RAM. The upgrade from Standard to Enterprise, using DISM, seemed to go OK, so I powered down and installed the RAM. This is a Dell PowerEdge T710; I was taking it from 32GB to 72GB. The BIOS recognised the RAM, although I needed to change from "Advanced ECC" to "Optimizer" mode for it to use all of it. After rebooting, Windows can see the RAM, but the System panel displays: Installed memory (RAM): 72.0 GB (4.00 GB usable). In Resource Monitor, the remainder of the RAM shows as reserved for hardware. I've tried various RAM configurations, including reverting to the same chips and the same configuration as before the upgrade, but always just 4.00 GB shows up as usable. Following some threads on these forums I've gone into msconfig and set the maximum memory "by hand", but that doesn't fix the problem. The BIOS doesn't seem to have anything that looks like memory remapping, which is another suggestion that has come up. How do I make this RAM available to Windows? It was available before the upgrade, because I could use the full 32GB the server had to start with. A screenshot (this is after reverting to the original RAM configuration): http://screencast.com/t/5FuzevdNb I don't know if it's related, but my Remote Desktop configuration has also disappeared: screencast.com/t/mYedomeQWS (the bottom half of that dialog should allow me to configure Remote Desktop; it was working before the upgrade but now it isn't).
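
    One more thing I am going to rule out, on the theory that a leftover boot-time memory cap could explain it (the msconfig "maximum memory" box maps to the truncatememory BCD element, as far as I understand):

        bcdedit /enum {current}
        bcdedit /deletevalue {current} truncatememory
        bcdedit /deletevalue {current} removememory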

  • SharePoint 2010 deployment problem after added a new server to existing farm

    - by mrt
    I have a SharePoint 2010 farm with one server. I'm developing some features in a SharePoint farm solution (not sandboxed, because there are some user-rights problems with that). All feature scopes are set to "Site". I can deploy the solution to SharePoint with no problem. I then added a new web front-end server to my existing farm. Now when I try to deploy my solution, VS2010 shows this error:

        Error occurred in deployment step 'Activate Features': Feature with Id 'xxx' is not installed in this farm, and cannot be added to this scope

    I log in to the development server with an AD administrator account. The administrator account is in the site collection admins on the target web application, and the farm account is in the local administrators group. Is there a solution for this error?
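
    One thing I plan to try, on the assumption that the new front-end simply never received the feature files when it joined the farm, is redeploying the solution from the SharePoint 2010 Management Shell (the .wsp name and path below are mine):

        Update-SPSolution -Identity MySolution.wsp -LiteralPath C:\Deploy\MySolution.wsp -GACDeployment
        # or register every feature found on disk with the farm:
        Install-SPFeature -AllExistingFeatures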

  • How to get uncaught PHP errors from fcgi server

    - by jason
    My web hosting company recently replaced suPHP with FCGI on my dedicated server because I needed an opcode cache. Since then I see loads of 500 errors in the Apache error log, and the PHP error log is empty, so I have no way to figure out the root cause. One cause I did find was timeouts, so my hosting company changed FcgidConnectTimeout and FcgidIOTimeout to a value of 200; I believe there are no more timeout errors in my PHP scripts. My question is: how do I capture the PHP error before the 500 Internal Server Error page is shown to the user? I am using a CentOS 5.8 server, WHM 11.34.0 (build 9), PHP 5.3.18 and Apache/2.2.23 (Unix) mod_ssl/2.2.23 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6
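
    In the meantime I want to at least make sure PHP logs its own fatals somewhere I can read, roughly like this in php.ini (the log path is a placeholder):

        log_errors = On
        error_log = /var/log/php_errors.log
        error_reporting = E_ALL
        display_errors = Off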

  • Windows Server 2008 R2 NAS connection

    - by Msmit1993
    I am a student and I'm experimenting with my own home server running Windows Server 2008 R2. I have a NAS on my network, and I can connect to it through my server -- but only if I am on the administrator account. My own account is in the Administrators group but can't establish a connection with the NAS unless the administrator account has already connected. How do I make my own account connect to the NAS without first connecting with the admin account? The error message on my account:

        An error occurred while reconnecting Y: to \\NAS\Download
        Microsoft Windows Network: The user name could not be found.
        This connection has not been restored.
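
    What I am after is a way for my own account to map the share with the NAS's own credentials, something along these lines (the user name is a placeholder for whatever the NAS expects; the * makes it prompt for the password):

        net use Y: \\NAS\Download * /user:NAS\myaccount /persistent:yes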

  • Run a script on user connection on the VM host

    - by Scott Chamberlain
    I have a server running a Virtual Desktop managed pool. What I would like to do is, when a user logs in, run a script that checks the number of available VMs and, if it is below a threshold, adds additional VMs to the pool. The script that checks the load and adds to the pool is not the problem; I have that already figured out:

        $collectionName = "Test1";
        $rdvh = "vmHost.example.com";
        $minAvailableVMs = 2;

        Import-Module RemoteDesktop;
        $pool = Get-VirtualDesktopCollection -CollectionName $collectionName;
        $availableVMs = $pool.Size - ($pool.Size * $pool.PercentInUse / 100);
        $status = Get-VirtualDesktopCollectionJobStatus $collectionName

        # only add new servers if we are below the threshold and in the JOB_COMPLETED state
        if($availableVMs -lt $minAvailableVMs -and $status.Status -eq [Microsoft.RemoteDesktopServices.Management.VirtualDesktopCollectionJobStatus]::JOB_COMPLETED)
        {
            Add-RDVirtualDesktopToCollection -CollectionName $collectionName -VirtualDesktopAllocation @{"$rdvh" = 1}
        }

    The problem I am having is: how do I run the above script on the Virtualization Host / Connection Broker / some other server when a user connects? I don't think it would be appropriate to run this as a logon script inside the VM. I think there is a way to do this on the management side, but I don't know the new scripting interface in Server 2012 R2 well enough to know which cmdlets I should look for to schedule this.

    EDIT: I know System Center is perfect for this, but I do not have a license and was denied when I asked for it to be added to the budget.
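
    The closest mechanism I have found so far is an event-triggered scheduled task on the Connection Broker. Something like the following is what I have in mind, although the script path is mine and the event channel/ID (21 = session logon in the LocalSessionManager log, if I have read Event Viewer correctly) still need verifying:

        schtasks /Create /TN "ScaleVDIPool" /RU SYSTEM ^
            /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Scale-VDIPool.ps1" ^
            /SC ONEVENT /EC "Microsoft-Windows-TerminalServices-LocalSessionManager/Operational" ^
            /MO "*[System[EventID=21]]"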

  • Poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96GB RAM and a single local hard drive of 278GB divided into 4 partitions, formatted with GPT:

        1. c: - 40GB; 3GB free
        2. d: - 40GB; 37GB free
        3. e: - 198.32GB; 198.1GB free
        4. 100MB (EFI System Partition)

    The other is a pizza-box server with 4 cores, 8GB RAM and a single local hard drive of 273GB divided into 3 partitions, formatted with MBR:

        1. c: - 136.81GB; 20GB free
        2. d: - 88.74GB; 87.91GB free
        3. e: - 47.85GB; 46.91GB free

    I have two scripts: the first creates 20,000 files in one directory, each file 192KB in size; the second deletes the folder (recursively) and prints how much time it took to delete all the files. The problem is that on the first server (the blade) it takes about 2 minutes to delete all 20,000 files, while on the second (the pizza box) it takes about 4 seconds!? Both servers have a clean Windows Server 2008 R2 install with no special applications running in the background. Any ideas what is going on?
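
    In case it matters, a sketch of the kind of create/delete test I am running (not the exact script, but the same idea):

        $dir = "C:\testfiles"
        New-Item -ItemType Directory -Path $dir -Force | Out-Null
        $buffer = New-Object byte[] (192KB)
        1..20000 | ForEach-Object { [IO.File]::WriteAllBytes("$dir\file$_.dat", $buffer) }
        (Measure-Command { Remove-Item $dir -Recurse -Force }).TotalSeconds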

  • Advice for migrating email server

    - by Chris Adams
    Hi there, I'm planning to migrate a Zimbra server with about 200GB of data from a server hosted in an office into a datacentre, to increase uptime (we've had a couple of outages when our network here started flaking out, and we have people in other countries relying on this server too). However, I'm not sure how best to migrate the data into the datacentre without rendering the connection unusable during office hours, because there's far too much to send overnight over the two-meg upstream connection we have here. I'm familiar with using tools like nice to stop a long-running process degrading machine performance -- is there a simple way to throttle a connection during office hours, so the long-running transfer doesn't block the pipe, but then open it up outside of office hours to make the most of the bandwidth? I'm aware the alternative here is to simply mail a hard drive to the datacentre, but I'd like to avoid doing that if I can. We're using CentOS Linux for our servers, in the office and the datacentre, so extra points for an open source Linux answer.
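
    What I'm picturing is an rsync that runs capped during office hours and uncapped at night, along these lines (host, paths and the 100 KB/s figure are placeholders; --partial lets an interrupted run resume):

        # office hours: keep the transfer to ~100 KB/s so the 2 Mbit upstream stays usable
        rsync -az --partial --bwlimit=100 /opt/zimbra/ backup@datacentre.example.com:/opt/zimbra/
        # overnight (e.g. kicked off from cron): the same command without --bwlimit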

  • DNS is resolving fine but can't access the server (unless changing /etc/hosts)

    - by victor hugo
    Hi all, I have a VPS server with a public IP. I added some A records to my name server, like svn.example.com - 1.1.1.1, and I also added some entries to my workstation's /etc/hosts file so I could work with the domains while the DNS changes propagated. It's been around 3 days since then, and I configured everything on my server (using the hosts file). The DNS records are now live and I removed the hosts entries, but to my surprise I cannot access the server or anything in my domain or sub-domains (even a ping doesn't work). I've triple-checked and the DNS records are OK. I don't know much about DNS; any help would be appreciated. The IP address of my VPS is 74.63.223.43, and I have these domain names, all pointing to the same IP (using A records): hartoingenio.com, www.hartoingenio.com, svn.hartoingenio.com
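
    For anyone who wants to reproduce this, the checks I mean are along these lines (run from a machine outside the office network):

        dig +short svn.hartoingenio.com @8.8.8.8    # does public DNS return 74.63.223.43?
        ping -c 3 74.63.223.43                      # is the VPS reachable by IP at all?
        curl -v http://svn.hartoingenio.com/        # does an HTTP request by name get through?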

  • Upload a directory recursively to an FTP server

    - by Nicolas Raoul
    I am writing a Linux shell script to copy a local directory to a remote server (removing any existing files). Local server: ftp and lftp commands are available, no ncftp or any graphical tools. Remote server: only accessible via FTP. No rsync, SSH, or FXP. I am thinking about listing the local and remote files to generate an lftp script and then running it. Is there a better way? Note: uploading only modified files would be a plus, but is not required.
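
    For example, I am wondering whether a single lftp mirror invocation would already cover this, instead of generating a script at all (untested; credentials and paths are placeholders):

        lftp -u user,password ftp.example.com -e "mirror -R --delete --only-newer /local/dir /remote/dir; quit"

    As far as I understand, -R uploads (reverse mirror), --delete removes remote files that no longer exist locally, and --only-newer would give the "only modified files" behaviour.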

  • client denied by server configuration, Options ExecCGI is off in this directory

    - by John Smiith
    Error log:

        [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico
        [Wed Jul 04 06:01:54 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] Options ExecCGI is off in this directory: C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] File does not exist: C:/xampp/www/htdocs/localhost/favicon.ico
        [Wed Jul 04 06:01:58 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/xampp/Bin/apache

    Virtual host:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/www/htdocs/localhost"
            ServerName localhost
            ServerAlias www.localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" combined
        </VirtualHost>

    mod_wsgi httpd.conf:

        WSGIScriptAlias /wsgi "C:/xampp/www/htdocs/wsgi_app/wsgi_handler.py"

        <Directory "C:/xampp/www/htdocs/wsgi_app/">
            AllowOverride None
            Options None
            Order deny,allow
            Allow from all
        </Directory>

    http://localhost/wsgi is giving a 403 error.
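
    For completeness, the other mod_wsgi configuration style I have seen (AddHandler instead of WSGIScriptAlias) apparently needs ExecCGI switched on explicitly -- something like this, assuming mod_wsgi is actually loaded (I have not confirmed this fixes the 403):

        <Directory "C:/xampp/www/htdocs/wsgi_app/">
            Options +ExecCGI
            AddHandler wsgi-script .py
            Order deny,allow
            Allow from all
        </Directory>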

  • Implementing an isolated guest WLAN via IPSec VPN on Windows

    - by sysadmin1138
    We are attempting to set up a guest WLAN that is isolated from the rest of our network. This is proving difficult for a couple of technical reasons. My first choice was to use a separate VLAN, on which our firewall's handy WLAN port would handle DHCP, DNS and the network isolation we need. Unfortunately, because our main office and our Internet connection are in different locations connected by a Metro Ethernet link, I'm at the mercy of our ISP for VLAN transit, and they won't pass a second VLAN between our two sites. My hardware doesn't support 802.1ad "Q-in-Q" either, which would also have solved the problem. So I can't use the VLAN method for isolation -- at least not without spending money. As our firewall can handle IPSec site-to-site VPN connections, I am hoping it is possible to connect a Server 2008 R2 (Standard) server I have in the office location to the WLAN and have it provide gateway services to the firewall, thusly (diagram not reproduced here). Unfortunately, I don't know if it is possible to connect the two this way. The firewall has a pretty flexible IPSec/L2TP implementation (I've used it to connect iPads in the wild), but it is neither Kerberized nor does it support NTLM. The Connection Security Rules view on the Windows server seems close to what I think needs to be done, but I'm failing to figure out how to get it to do what I need. Is this even possible, or do I need to pursue an alternate solution?

  • How to setup a virtual machine in Ubuntu desktop to run Debian Server

    - by stickman
    I want to run a virtual machine on my Ubuntu desktop that runs a Debian server. The purpose of this is to generate Debian packages. I have some C++ applications that were originally developed on my Ubuntu machine, and I need to (re)compile them on a Debian server in order to:

        1. build deb packages for deployment on a Debian server
        2. make sure that the applications will definitely work on a Debian server

    The idea is that I can do 90% of my development on Ubuntu (where I am more comfortable) and deploy a binary package that definitely works on Debian. BTW, I am developing on Karmic Koala (Ubuntu 9.10).

    [Edit] Following the advice I got so far, I have installed debootstrap and Debian 'Lenny' in /srv/chroot/debian_lenny on my machine. I am not sure this is the server version, but in any case I don't think that matters for my purposes (though it would be useful to know how to specifically install the server version). At the moment, though, I am like a fish out of water, since there is no GUI and only a console inside the chroot jail. I had a look in the home folder (I cheated, by using the KNavigator in Ubuntu), and there are no folders there -- which presumably means that no users have been set up yet in the Debian "system". I would like to know how to do the following:

        1. download and install the dev tools needed for (re)compiling my C++ apps
        2. copy my projects from the Ubuntu "system" to the Debian "system"
        3. after building the binaries, create a Debian binary package containing all of my binaries, so that I can install the package on a Debian server (my remote server)
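
    For items 1 and 3 above, what I have pieced together so far looks roughly like this, run inside the chroot (the package list is a guess and my project path is a placeholder):

        apt-get update
        apt-get install build-essential g++ debhelper devscripts fakeroot
        # copy the project in from the host first, e.g. to /root/myproject,
        # then build a binary-only package from a debianized source tree:
        cd /root/myproject && dpkg-buildpackage -us -uc -b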

  • Trouble opening my router to my web server

    - by Justin Heather Barrios
    Here's the story: I have a web server set up and connected to my router. The website works great when I'm connected to the router, but when I'm off the network I can't access the website. I got the IP for my router by googling "what is my ip". I have opened ports 80 to 10080 to link to the server in the router. One odd thing that I don't understand: when I am on the network, if I access XXX.XXX.XX.XX:80 I can reach the web page no problem, but if I access XXX.XXX.XX.XX:81 (or any other port) I get the error "Cannot access server." Any idea what the problem could be? Could it be my ISP?

  • Load Testing Linux Virtual Server

    - by Anubhav Agarwal
    I have configured a Linux Virtual Server network with the following configuration:

        172.17.6.112 - VIP
        172.17.6.111 - Linux Director
             |
             |---------- 172.17.6.113 --- Real Server 1
             |---------- 172.17.6.114 --- Real Server 2

    I am using the direct routing technique. I am unable to test my LVS network -- are there some good scripts or software packages available for load testing? I am running the apache2.0 service on the real servers. I came across testlvs on the internet but am unable to understand its documentation. Are there simpler ones? I want to test the response time of the servers using various scheduling algorithms.
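
    For example, would plain ApacheBench pointed at the VIP be enough for this kind of test?

        # 10,000 requests, 100 concurrent, against the virtual IP
        ab -n 10000 -c 100 http://172.17.6.112/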

  • Disabling LDAP Signing on Windows PDC in Local Policy

    - by Golmaal
    I just tripped over my own feet, it seems. Playing around on a Windows 2008 R2 server (set up as a domain controller), I was intrigued by a certain warning event (event ID 2886) which says: "To enhance the security of directory servers, you can configure both Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS) to require signed Lightweight Directory Access Protocol (LDAP) binds." So I thoughtlessly did some Googling and set the relevant policies which enforce LDAP signing. I don't remember for sure, but I may have done that using Local Policy. Now I have set up a pfSense box which must authenticate AD users via LDAP. While the firewall can communicate over a secure channel, it is difficult to manage the same for other packages such as Squid and SquidGuard. So now I have to disable, i.e. undo, those policy changes. The problem is that they are greyed out! The policies in question are LDAP server signing and LDAP client signing. I don't remember exactly what I did, but when I access these policies from the Local Policy editor on the server, they are set to "Require signing" and are greyed out. The same policies can still be set via the Default Domain Controllers Policy in the Group Policy editor. So how can I reset these greyed-out policies? Thanks
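
    If the policy UI stays locked, is editing the underlying registry value directly a sane fallback? As far as I can tell the server-side setting lives here, with 1 = signing not required and 2 = signing required (please correct me if the path or values are wrong):

        reg query "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v LDAPServerIntegrity
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v LDAPServerIntegrity /t REG_DWORD /d 1 /f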

  • PHP/mail : server sends email originating from wrong domain

    - by Niro
    I have a MediaTemple (dv) server with Plesk and two domains, each with a static IP. I had domain1 as the main domain and domain2 as a secondary. When a PHP script on domain2 sends email, the headers show the IP address of domain1 as the origin: Received: from domain2.com (domain1.com [70.ipof domain1]). I want only domain2 to be mentioned, so I did the following: changed the server name to domain2.com, made domain2.com the primary domain (about 30 hours ago), and made the fixed IP address of domain2.com the default address for the server. Still, when the script sends emails I see the same info as above in the header. What do I need to do to make the email originate from domain2.com?
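
    Is something like this in Postfix's main.cf what is missing? (I am assuming Postfix is the MTA behind PHP's mail() on this Plesk box; the IP below is a placeholder for domain2's static address.)

        myhostname = domain2.com
        smtp_bind_address = 192.0.2.2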

  • Server HTTP Load times slow?

    - by cdog5000
    Hello, My server @ codemeh.com (HTTP Server) seems to be randomly loading slowly, I cannot tell if it just my forums (http://www.codemeh.com/forums/) that are loading slowly or if the WHOLE site is just loading slowly since my forums are the largest thing on the site right now. load average: 0.02, 0.17, 0.20 That is super low to my knowledge. I have tried Google Page Analytic plug-in for FireFox to solve the problem but nothing comes up that is VERY bad. If someone could investigate this for me since I am very new at apache and server configurations. Thanks! (top): PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 7493 www-data 15 0 98.2m 16m 9092 S 3 0.8 0:27.24 apache2 26429 www-data 15 0 98.2m 15m 7392 S 3 0.7 0:03.45 apache2 26477 www-data 17 0 98.2m 15m 7396 S 3 0.7 0:03.16 apache2 1 root 15 0 2468 1384 1156 S 0 0.1 0:00.49 init 1367 root 25 0 2564 816 660 S 0 0.0 0:00.00 xinetd 1526 root 15 0 29576 5420 1976 S 0 0.3 1:02.69 fail2ban-server 3703 root 15 0 13512 9312 1696 S 0 0.4 0:11.59 miniserv.pl 3915 postfix 15 0 6056 1652 1320 S 0 0.1 0:00.00 pickup 4010 root 15 0 4548 1296 972 S 0 0.1 0:37.36 ntpd 7448 root 15 0 98528 26m 20m S 0 1.3 0:00.27 apache2 7454 www-data 18 0 33580 2616 368 S 0 0.1 0:00.04 apache2 7528 www-data 18 0 108m 24m 15m S 0 1.2 0:27.60 apache2 7974 root 16 0 8700 2728 2164 S 0 0.1 0:00.08 sshd 8123 cdog5000 15 0 8832 1596 896 S 0 0.1 0:00.00 sshd 8126 cdog5000 18 0 4484 1716 1384 S 0 0.1 0:00.00 bash 8141 cdog5000 15 0 2344 980 796 R 0 0.0 0:00.11 top 13461 root 15 0 8700 2728 2164 S 0 0.1 0:00.07 sshd 13567 cdog5000 18 0 8832 1492 896 S 0 0.1 0:00.33 sshd 13569 cdog5000 18 0 4484 1728 1388 S 0 0.1 0:00.09 bash 17983 root 15 0 4392 1268 988 S 0 0.1 0:00.00 su 17987 root 15 0 4516 1752 1380 S 0 0.1 0:00.09 bash 18081 www-data 15 0 98.2m 14m 6588 S 0 0.7 0:04.91 apache2 20000 www-data 15 0 98.3m 15m 8040 S 0 0.8 0:02.45 apache2 20019 www-data 15 0 98.2m 14m 6808 S 0 0.7 0:04.97 apache2 30343 root 15 0 3964 1012 764 S 0 0.0 0:00.03 vsftpd 30382 root 15 0 2304 908 716 S 0 0.0 0:00.62 cron 30401 mysql 17 0 141m 17m 5416 S 0 0.9 1:02.20 mysqld 30424 root 15 0 5472 912 504 S 0 0.0 0:00.04 sshd 30473 syslog 15 0 1916 676 536 S 0 0.0 0:01.02 syslogd 30611 amavis 15 0 33872 25m 2292 S 0 1.2 0:03.11 amavisd-new 31890 amavis 18 0 34888 24m 1792 S 0 1.2 0:00.00 amavisd-new 31891 amavis 18 0 34888 24m 1784 S 0 1.2 0:00.00 amavisd-new 32397 clamav 18 0 104m 84m 1272 S 0 4.1 1:06.46 clamd 32563 clamav 15 0 12832 5716 4440 S 0 0.3 0:01.29 freshclam 32573 root 23 0 1892 456 372 S 0 0.0 0:00.00 courierlogger 32575 root 18 0 2096 684 544 S 0 0.0 0:00.01 authdaemond 32583 root 23 0 1892 360 284 S 0 0.0 0:00.00 courierlogger 32584 root 24 0 2000 612 516 S 0 0.0 0:00.00 couriertcpd 32598 root 23 0 1892 360 284 S 0 0.0 0:00.00 courierlogger 32599 root 25 0 2000 612 516 S 0 0.0 0:00.00 couriertcpd 32604 root 18 0 1892 460 372 S 0 0.0 0:00.00 courierlogger 32605 root 18 0 2000 624 532 S 0 0.0 0:00.00 couriertcpd 32607 root 18 0 2308 404 256 S 0 0.0 0:00.02 authdaemond 32608 root 18 0 2096 260 116 S 0 0.0 0:00.03 authdaemond 32609 root 15 0 2308 404 256 S 0 0.0 0:00.03 authdaemond 32610 root 18 0 2096 260 116 S 0 0.0 0:00.02 authdaemond 32612 root 18 0 2308 404 256 S 0 0.0 0:00.02 authdaemond 32621 root 24 0 1892 364 284 S 0 0.0 0:00.00 courierlogger 32622 root 25 0 2000 608 516 S 0 0.0 0:00.00 couriertcpd 32633 root 15 0 105m 936 716 S 0 0.0 0:02.26 nscd 32719 root 16 0 6252 1680 1344 S 0 0.1 0:01.24 master 32738 postfix 15 0 6188 1776 1400 S 0 0.1 0:00.44 qmgr 32758 postfix 15 0 
6492 2564 1788 S 0 0.1 0:00.14 tlsmgr (/etc/apache2/sites-available/default): NameVirtualHost * <VirtualHost *> ServerAdmin webmaster@localhost DocumentRoot /var/www/web1/web/ <Directory /var/www/web1/web/> Options Indexes MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have fail2ban server and I dont have any firewall at this point and time that I know of. SMF is 2.0 RC4 and apache version is 2.2.14. I run a MySQL server on another box in the same DC (Persistent Connection). I installed eAccelerator today and it didnt help.
