Search Results

Search found 4534 results on 182 pages for 'dns q'.


  • How can I parse/transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs). It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it? One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.
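
    In the meantime, a cheap workaround is to pre-process the raw logs on a schedule and emit a delimited copy that is easier to query, whatever ends up ingesting it. A minimal Python sketch; the field layout of a Windows DNS debug log line is an assumption here, so adjust the parsing to whatever your debug-logging options actually produce:

        import csv

        # Assumed shape of a Windows DNS debug log line, e.g.:
        #   6/1/2012 10:15:03 AM 0E5C PACKET ... Snd 192.168.1.10 ... Q [...] A example.com
        # Adjust the indexes below to match your actual log format.
        def parse_line(line):
            parts = line.split()
            if len(parts) < 6:
                return None  # header, blank or continuation line
            return {
                "date": parts[0],
                "time": "%s %s" % (parts[1], parts[2]),
                "direction": "Snd" if "Snd" in parts else "Rcv",
                "raw": line.rstrip(),  # keep the original line for auditing
            }

        with open("dns-debug.log", errors="replace") as src, \
             open("dns-debug.csv", "w", newline="") as dst:
            writer = csv.DictWriter(dst, fieldnames=["date", "time", "direction", "raw"])
            writer.writeheader()
            for line in src:
                row = parse_line(line)
                if row:
                    writer.writerow(row)

    The same idea works as a scheduled task that feeds the monitoring rule a friendlier file; it does not, by itself, get the data into the ACS database.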


  • What does a DHCP-client consider to be the "best" answer?

    - by Nils
    We have training rooms where normally Windows XP is installed (via PXE). The "normal" DNS/DHCP infrastructure is Windows servers. The training room has its own VLAN (different from the Windows servers), so there is most probably an IP helper for DHCP requests active on the Cisco router that all PCs in that room are connected to. Now we wanted to convert some of the PCs to Linux instead. The idea was: put our own laptop with a DHCP server into the VLAN of the room and override the "normal" DHCP response. This should work, we thought, since a directly attached DHCP server in that VLAN should have a faster response time than the "normal" DHCP server located some hops away from that VLAN. It turned out that this did not work; we had to manually release the lease on the original DHCP server to get it working. On the laptop we did see the client requesting the IP, and "our" DHCP server sending NAKs in response to the request for the Windows-assigned IP, after having offered its own address. Old question: Why did this not work out as expected? What is making the PC regain its old lease? Update 2012-08-08: The regain issue is explained in the DHCP RFC, so that part is answered. Now we release the IP from the Windows DHCP server before giving it another try. Again, the Windows DHCP server wins. I suspect that the DHCP client uses some algorithm to determine the "best" DHCP answer. The new question is: how does the client choose the "best" answer?
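
    For what it's worth, RFC 2131 leaves the choice between multiple DHCPOFFERs up to the client implementation, and a client that still remembers a lease usually skips the offer phase entirely: it goes to INIT-REBOOT and broadcasts a DHCPREQUEST for its old address, which is why the Windows server keeps winning. A toy Python model of those two code paths, purely illustrative and not any real client's logic:

        def choose_address(offers, previous_ip=None):
            """Toy model of DHCP client behaviour per RFC 2131 (illustrative only).

            offers: list of dicts like {"ip": ..., "lease_secs": ..., "server": ...}
            previous_ip: address from a still-remembered old lease, if any.
            """
            if previous_ip is not None:
                # INIT-REBOOT: no offer is consulted at all. The client broadcasts
                # a DHCPREQUEST for its old address and keeps it unless some server
                # answers with a DHCPNAK.
                return {"ip": previous_ip, "state": "INIT-REBOOT"}

            if not offers:
                return None

            # INIT/SELECTING: RFC 2131 leaves the selection policy to the
            # implementation. Many clients simply take the first offer to arrive;
            # preferring the longest lease would be an equally valid policy:
            #   max(offers, key=lambda o: o["lease_secs"])
            chosen = offers[0]
            return dict(chosen, state="SELECTING")

    So winning against the Windows server is not about the fastest offer; the client has to actually be in the selecting state, i.e. with no remembered lease left to re-request.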


  • Subdomains and address bar

    - by Priednis
    I have a fairly noob question about how subdomains work. As I understand it, the DNS server first specifies that a request for a certain subdomain.domain.com has to go to the IP address of domain.com, and the web server at domain.com then processes the request and serves the appropriate subdomain page. It is not entirely clear to me how the server (for example, Apache) does this. As I understand it, there can be entries in the vhosts.conf file which specify the folders that contain the subdomain data, something like:

        <VirtualHost *>
            ServerName www.domain.com
            DocumentRoot /home/httpd/htdocs/
        </VirtualHost>

        <VirtualHost *>
            ServerName subdomain.domain.com
            DocumentRoot /home/httpd/htdocs/subdomain/
        </VirtualHost>

    and there can also be redirect entries in .htaccess files, like:

        RewriteCond %{HTTP_HOST} ^subdomain.domain.com [NC]
        RewriteRule ^(.*)$ http://www.domain.com/subdomain/ [R=301,NC]

    However, in this case the user gets directed to the directory which contains the subdomain data, but the user ends up "outside" the subdomain. I would like to know: when going to subdomain.domain.com, how does subdomain.domain.com remain visible at the beginning of the address in the browser's address bar? Can it be done by an alternate entry in the .htaccess file? And if a VirtualHost entry is specified in the vhosts.conf file, does that mean a new user account has to be specified for access to this directory?


  • setting up subdomain wildcard: configured A record, VirtualHost... still does not work

    - by user80314
    Running Apache on CentOS, trying to set up wildcard subdomains; basically I want *.mydomain.com to point to mydomain.com. With cPanel I added *.mydomain.com, and with WHM I made sure that the A record is pointing to the right IP. I set my A record:

        * 14400 IN A x.x.x.x

    My httpd.conf:

        ServerName _wildcard_.mydomain.com
        ServerAlias *.mydomain.com
        DocumentRoot /home/mydomain/public_html
        ServerAdmin [email protected]
        UseCanonicalName Off
        ## User userdomain # Needed for Cpanel::ApacheConf
        UserDir enabled userdomain
        <IfModule mod_suphp.c>
            suPHP_UserGroup userdomain userdomain
        </IfModule>
        <IfModule !mod_disable_suexec.c>
            <IfModule !mod_ruid2.c>
                SuexecUserGroup usergrdomain userdomain
            </IfModule>
        </IfModule>
        <IfModule mod_ruid2.c>
            RUidGid userdomain userdomain
        </IfModule>
        ScriptAlias /cgi-bin/ /home/mydomain/public_html/cgi-bin/
        # To customize this VirtualHost use an include file at the following location
        # Include "/usr/local/apache/conf/userdata/std/2/mydomain/wildcard_safe.mydomain.com/*.conf"

    I have my VirtualHost in httpd.conf set to point to the domain root. Restarted Apache, the server, and DNS; still nothing. I have spent hours researching this, followed instructions, and set everything correctly. What am I missing?


  • SSL stops working on IIS7 after a reboot

    - by Mark Seemann
    I have a Windows 2008 Server with IIS7. Every time the server reboots, SSL stops working. Normal HTTP requests work fine, but any request to an HTTPS address gives the typical error message in the browser: "Cannot find server or DNS". I can temporarily fix it by opening IIS Manager and bringing up the Bindings… window for the website in question. Then I select "https", click "Edit", then click "Ok" without making any changes to the settings. After doing this, browsing to https:// works again until the next reboot. This issue looks a lot like the one described here, but according to the Certificates MMC snap-in, the certificate in question does have a private key. I'm also pretty sure that I never installed the certificate in the personal store, but imported it straight into the machine store; but it's been a while... There's not a lot in the event log apart from the event ID 36870 also described in the post I linked to. Can anyone help me troubleshoot this issue so that SSL will work even after a server reboot?


  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience, where latency should be minimized. So we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - clients. We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs, but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to direct clients based on their location... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!
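
    Lacking a built-in geo load balancer, one approach is to make the middleware itself region-aware: resolve the client's IP to a region and hand back (or redirect to) the nearest EC2 endpoint. A rough Python sketch; the region lookup is a placeholder for a real GeoIP database, and the hostnames are invented for illustration:

        def region_for_ip(client_ip):
            # Placeholder: substitute a real GeoIP database lookup here.
            return None

        # Regional middleware endpoints (example hostnames, not real ones).
        ENDPOINTS = {
            "us-east": "east.middleware.example.com",
            "us-west": "west.middleware.example.com",
            "eu":      "eu.middleware.example.com",
        }

        def endpoint_for(client_ip, default="us-east"):
            """Pick the middleware host nearest the client; fall back to a default."""
            region = region_for_ip(client_ip) or default
            return ENDPOINTS.get(region, ENDPOINTS[default])

    The first request still lands wherever DNS sends it, but the client can then be redirected once to its regional node and stream from there.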


  • Problem hosting a server behind a personal router

    - by Venkatesh Hodavdekar
    I recently bought the domain name lucidcontraptions.com and want to host the website from home. I have a D-Link router in which I have set up my personal virtual server correctly. My application server is Apache 2.2. The server works perfectly with the following settings:

        External IP: 207.172.xx.xx, public port: 8888
        Internal IP: 192.168.xx.xx, private port: 80

    If I go to 207.172.xx.xx:8888/, the server works perfectly and my Apache page shows up without any issues, both from inside the intranet and outside. This setting will not work out for me, though, as I am not allowed port numbers in my DNS management. Now when I tweak the settings to the following:

        External IP: 207.172.xx.xx, public port: 80
        Internal IP: 192.168.xx.xx, private port: 80

    If I go to 207.172.xx.xx/, the server works perfectly and my Apache page shows up without any issues, BUT ONLY FROM INSIDE THE INTRANET. This page does not show up for people outside the intranet.


  • Sonicwall TZ210 - Set up public wifi on separate subnet & interface

    - by thomasjbarrett
    I want to set up a public wifi by connecting another router to the X6 interface, and put it on a separate subnet (192.168.10.0/24) and in the DMZ zone to keep it away from the regular LAN. I believe I have the network settings correct: the router has acquired the IP and DNS information from the TZ210, and the TZ210 shows it as an active DHCP lease. X6 is in the DMZ. I now have a routing/NAT/firewall problem, since I can't get any traffic to travel from the subnet to the internet. I can't get to any external websites and can't ping the TZ210 from the subnet. X0 is the regular LAN, and X1 is the WAN. Looking for any tips or tutorials on this. Here are my current relevant rules:

        Routing:
          1. Source: X6 Subnet | Destination: Any | Service: Any | Gateway: Default Gateway | Interface: X6
          2. Source: Any | Destination: X6 Subnet | Service: Any | Gateway: 0.0.0.0 | Interface: X6

        NAT policies:
          1. Source Original: Any, Translated: WAN IP | Destination Original: Any, Translated: Original | Inbound: X6, Outbound: X1
          2. Source Original: Any, Translated: U0 IP | Destination Original: Any, Translated: Original | Inbound: X6, Outbound: U0

        Firewall:
          DMZ -> LAN: Deny All
          DMZ -> WAN: Allow All
          LAN -> DMZ: Allow All
          WAN -> DMZ: Allow All


  • Static Network configuration with bridge in CentOS 6.2

    - by Kyle
    I have a server with CentOS 6.2 installed. I want to use it as a VM host to run some Windows installations for development purposes. I wanted to be able to RDP directly to these Windows server installations and serve websites from IIS on them, so I figured I would set up bridged networking. I have been struggling with this all morning, usually with the result that when I brought up the bridge interface, all network connectivity to the CentOS box would go away; however, I think I finally have that figured out. Here is what happens now. The eth0 and br0 interfaces are defined in /etc/sysconfig/network-scripts with ifcfg-eth0 and ifcfg-br0. I DO NOT have ifup, ifdown, or any other files for these interfaces; I have not found out whether they are needed. I can log in and use Firefox to browse the web; however, running ifconfig reveals that eth0 does not have an IP address, but br0 does. I can actually RDP into the Windows installation, and browse the internet from there as well, but I cannot directly connect (via PuTTY, VNC, or viewing web pages) to the CentOS box. Any idea what's up? ifcfg-eth0:

        DEVICE=eth0
        BOOTPROTO=none
        IPADDR=192.168.1.20
        GATEWAY=192.168.1.1
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        BRIDGE=br0

    ifcfg-br0:

        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        DNS1=192.168.1.1
        DNS2=8.8.8.8
        GATEWAY=192.168.1.1
        IPADDR=192.168.1.2
        NETMAS=255.255.255.0
        ONBOOT=yes

    I know some of the options are inconsistent (DNS and BOOTPROTO) because I tried changing those in the eth0 file to make it work, and the changes haven't adversely affected web browsing or the other functionality. Thank you!


  • Fresh Proxmox VE 2.1 installation with defaults can't be reached or pinged

    - by Damainman
    I am using the latest Proxmox VE 2.1. My server has two NICs, with an uplink connected only to eth0. My server is a co-located server using public IPv4 addresses. It is not behind a firewall or any system which monitors traffic. Via IPKVM I did a fresh install of Proxmox; I put in the correct IP, mask, gateway, and DNS information. The install went perfectly fine with no errors. Upon completion and rebooting the system: I am unable to reach the web GUI via the browser, it just times out. I am unable to ping the server. I am unable to ping out to the internet from within the server (tried pinging 4.2.2.2 and yahoo.com). I tried rebooting the server and restarting the network service. ifconfig shows my IP information under vmbr0, which also has the same MAC address as the eth0 device. eth0 only displays an IPv6 Scope:Link address, which I did not set up myself. This is my first time installing Proxmox, but after searching for a few hours it doesn't seem like anyone else is having the same issue from a fresh install with just the defaults. So far the only thing I did was install it. Also, I know the network cable is good and the IP is good because I was running a Xen XCP server with the same network settings prior to wiping it to install Proxmox. Some additional information, from pveversion -v (installed proxmox-ve_2.1-f9b0f63a-26.iso):

        pve-manager: 2.1-1 (pve-manager/2.1/f9b0f63a)
        running kernel: 2.6.32-11-pve
        proxmox-ve-2.6.32: 2.0-66

    netstat -nr (note: .136 is my network, and .137 is my gateway):

        Destination        Gateway            Genmask
        xxx.xxx.xxx.136    0.0.0.0            255.255.255.248
        0.0.0.0            xxx.xxx.xxx.137    0.0.0.0

    /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto vmbr0
        iface vmbr0 inet static
            address xxx.xxx.xxx.138
            netmask 255.255.255.248
            gateway xxx.xxx.xxx.137
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0


  • Iptables Forwarding problem

    - by ankit
    Hi all, I had initially asked a question about setting up my Linux box to do NAT for my home network, and was given suggestions in the thread here. I did not want to clutter the old question, so I am starting a new one. Based on the earlier suggestions, I have come up with the following rules:

        *nat
        :PREROUTING ACCEPT [1:48]
        :OUTPUT ACCEPT [12:860]
        :POSTROUTING ACCEPT [3:228]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT

        *filter
        :INPUT DROP [3:228]
        :FORWARD DROP [0:0]
        :OUTPUT DROP [0:0]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth0 -p icmp -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT
        -A FORWARD -i eth1 -p icmp -j ACCEPT
        -A FORWARD -i eth1 -p tcp -m tcp --dport 80 -j ACCEPT
        -A FORWARD -i eth1 -p tcp -m tcp --dport 443 -j ACCEPT
        -A OUTPUT -p icmp -j ACCEPT
        -A OUTPUT -j ACCEPT
        COMMIT

    If you notice, I do have the proper MASQUERADE rule and the proper FORWARD filter rules as well. However, I am facing two problems: (1) on the Linux box itself, DNS resolving is not working; (2) the LAN clients connected to the Linux box are still not able to get to the internet, and when I ping something from them, I see the DROP count in the iptables INPUT rule increasing. Now my question is: when I am pinging something from the LAN client, how come it is being matched by the INPUT chain? Should it not be in the FORWARD chain?

        Chain INPUT (policy DROP 20 packets, 2314 bytes)
         pkts bytes target prot opt in   out source   destination
           99  9891 ACCEPT all  --  lo   any anywhere anywhere
            0     0 ACCEPT icmp --  eth0 any anywhere anywhere
            0     0 ACCEPT tcp  --  eth0 any anywhere anywhere tcp dpt:http
            0     0 ACCEPT tcp  --  eth0 any anywhere anywhere tcp dpt:https
          122  9092 ACCEPT tcp  --  eth0 any anywhere anywhere tcp dpt:ssh

    Thanks, ankit


  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello, at the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and mail. I also have two other servers, exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no one can access their email, especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all three servers running the same services, doing the same thing, in some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about such a setup where three servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy


  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already tried to look for documentation on the web (Apache's site, Debian's site, here on Server Fault, etc.), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default):

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName mysite.dev
            ServerAlias mysite.dev
            DocumentRoot /var/www/mysite.dev/httpdocs/
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName livesite.com
            ServerAlias www.livesite.com
            DocumentRoot /var/www/livesite.com/httpdocs/
            <Directory /var/www/livesite.com/httpdocs/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the one set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I tried having different files in /etc/apache2/sites-enabled/ (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com), of course with the related sites-available files, but achieved the same results. I have tried to have a peek at error.log and access.log, but there's nothing I can see. My httpd.conf contains: AccessFileName .htaccess. And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated; if I did not provide enough info, please let me know and I will do my best to provide all necessary info. Thanks


  • How to configure multiple virtual hosts for multiple users on Linux/Apache2.2

    - by authentictech
    I want to set up a virtual hosting server on Linux/Apache2.2 that allows multiple users to set up multiple website domains, as would be appropriate for commercial shared hosting. I have seen examples (from my then perspective as a shared hosting customer) that allow users to store their web files in their user home directory, with directories that correspond to the virtual host domain, e.g.:

        /home/user1/www/example1.com
        /home/user2/www/example2.com

    instead of using /var/www. Questions: 1. How would you configure this in your Apache configuration files? (Don't worry about DNS.) 2. Is this the best way to manage multiple virtual hosts? Are there others? 3. What safety or security issues do you think I should be aware of in doing this? Many thanks, folks. Edit: If you want to answer only question 1, please feel free, as that is the most urgent to me at this moment, and I would consider that an answer to the question. I have done it for myself since posting, but I am not confident that it's the best solution and I would like to know how an experienced sysadmin would do it. Thanks.
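
    One way to handle question 1 without hand-editing a stanza per site is to generate the <VirtualHost> blocks from the directory layout itself. A minimal Python sketch, assuming the /home/<user>/www/<domain> layout above; the log paths and Options are assumptions to adapt:

        import os

        HOME = "/home"

        def vhost_stanzas():
            # Walk /home/<user>/www/<domain> and emit one stanza per domain dir.
            for user in sorted(os.listdir(HOME)):
                www = os.path.join(HOME, user, "www")
                if not os.path.isdir(www):
                    continue
                for domain in sorted(os.listdir(www)):
                    docroot = os.path.join(www, domain)
                    if not os.path.isdir(docroot):
                        continue
                    yield (
                        "<VirtualHost *:80>\n"
                        "    ServerName %s\n"
                        "    ServerAlias www.%s\n"
                        "    DocumentRoot %s\n"
                        "    <Directory %s>\n"
                        "        AllowOverride All\n"
                        "        Options FollowSymLinks\n"
                        "    </Directory>\n"
                        "    CustomLog /var/log/apache2/%s-access.log combined\n"
                        "</VirtualHost>\n"
                        % (domain, domain, docroot, docroot, domain)
                    )

        if __name__ == "__main__":
            print("NameVirtualHost *:80\n")
            for stanza in vhost_stanzas():
                print(stanza)

    Pipe the output into a file under sites-available (or an included conf file) and reload Apache; the per-user isolation concerns from question 3 (suexec, PHP open_basedir and the like) still need separate treatment.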


  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability: if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data, so that in the event that a server fails, a new server can take over where the original one left off. Two main purposes:

        1. Support failover scenarios contemplated by the websockets Network Working Group,
           http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1
           (most importantly, so that missed messages are sent when a client connects to a failover server).
        2. Support scenarios where new subscribers must receive all past messages that were published.

    Of course this can be handled at the application layer... but that is not what I am looking for.
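
    Whatever server ends up being chosen, the usual building block behind both purposes is a channel log with monotonically increasing sequence numbers: every published message is appended to a durable store, and a client (or failover server) that reconnects asks for everything after the last sequence number it saw. A minimal sketch of that idea in Python, using sqlite3 as the durable store; this is illustrative, not any particular product's API:

        import sqlite3

        class DurableChannel:
            """Append-only message log; survives process restarts via SQLite."""

            def __init__(self, path="channel.db"):
                self.db = sqlite3.connect(path)
                self.db.execute(
                    "CREATE TABLE IF NOT EXISTS messages ("
                    " seq INTEGER PRIMARY KEY AUTOINCREMENT,"
                    " channel TEXT NOT NULL,"
                    " payload TEXT NOT NULL)"
                )

            def publish(self, channel, payload):
                cur = self.db.execute(
                    "INSERT INTO messages (channel, payload) VALUES (?, ?)",
                    (channel, payload),
                )
                self.db.commit()
                return cur.lastrowid  # sequence number the client should remember

            def replay_after(self, channel, last_seq):
                """Everything a reconnecting client missed, in publish order."""
                return self.db.execute(
                    "SELECT seq, payload FROM messages"
                    " WHERE channel = ? AND seq > ? ORDER BY seq",
                    (channel, last_seq),
                ).fetchall()

    A failover server pointed at the same store can resume from any client's last_seq, which covers purpose 1; replay_after(channel, 0) gives a new subscriber the full history, which covers purpose 2.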


  • Host spreads wrong MAC address of router on the WiFi

    - by JavaIsMyIsland
    Strange things are going on in our network. Since yesterday, a host which is actually not on our subnet has been spreading wrong ARP replies on our network. To be precise, only on the WiFi: if I connect my laptop to wired ethernet, it gets the right MAC address for the router. My Android phone and my Ubuntu system also get the right MAC address. So I took a look at Wireshark. When I clear the ARP cache of the Windows machine, the first ARP response is correct and comes from the router, but about 10 ms later another ARP response comes from another host on the WiFi. That host changes its IP addresses from time to time, and they look like they are not on our subnet. As a result I cannot use the internet, because DNS is not working anymore. Sometimes the router wins the race condition and the MAC address is set correctly in the ARP cache. I first thought this was an ARP-poisoning man-in-the-middle attack, but that does not make sense if the packets do not get routed onwards correctly. I restarted the router but it didn't help. I have no access to the router, else I would change the shared key to make sure there is no intruder on the WiFi.
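
    While waiting for router access, you can at least watch the flapping from one of the affected machines. A small Python sketch that polls the neighbour table and flags gateway MAC changes; it assumes a Linux host with the ip tool and a gateway of 192.168.1.1, so adjust both (on Windows you would parse arp -a output instead):

        import re
        import subprocess
        import time

        GATEWAY_IP = "192.168.1.1"  # assumption: substitute your router's IP

        def gateway_mac():
            # 'ip neigh show <ip>' prints e.g.
            #   192.168.1.1 dev wlan0 lladdr aa:bb:cc:dd:ee:ff REACHABLE
            out = subprocess.run(
                ["ip", "neigh", "show", GATEWAY_IP],
                capture_output=True, text=True,
            ).stdout
            m = re.search(r"lladdr\s+([0-9a-fA-F:]{17})", out)
            return m.group(1).lower() if m else None

        known = gateway_mac()
        print("baseline gateway MAC:", known)
        while True:
            time.sleep(5)
            current = gateway_mac()
            if current and known and current != known:
                print("WARNING: gateway MAC changed from %s to %s" % (known, current))
                known = current

    Logging the timestamps of the flips at least tells you whether the rogue replies are constant or tied to a particular time of day or device joining the WiFi.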


  • network latency, TCP and UDP packets

    - by user115848
    Hello. Recently my network has started to cause me lots of problems. I have a cable modem connected to a TP-Link router (with some port forwarding). Everything was working fine, then I started to get lots of UDP (port 53) "UNREPLIED" logs in the router. Now there are TCP UNREPLIED logs too. This is causing lots of latency and failed connections when trying to connect to different internet sites. Also, we run an Openfire server for Spark connections, and I believe it is causing connectivity issues for some users who are trying to connect using Spark (some people connect fine, others don't). Please see the screenshot below for packet logs. It has to be something internal, as when I connected straight to the Comcast modem I was able to connect to the internet and various sites as normal. I tried to swap out the router with a different one and got the same issue. I scanned both my internal DNS servers for viruses and malware, and the scan came up empty. Another anomaly is that when I try to connect to www.cnn.com, I get redirected to a different site. I scanned my own machine for hijacks. Not sure if this is related to the networking issue. Please let me know if you have any ideas for troubleshooting.
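
    Since the UNREPLIED entries point at UDP port 53, one quick test is to separate "DNS is slow" from "everything is slow" by timing TCP connects to a plain IP versus a hostname. A small Python sketch; the target list is an assumption, so substitute hosts that make sense on your network:

        import socket
        import time

        # Compare a LAN target, a public IP literal (no DNS lookup involved),
        # and a hostname (which also exercises DNS resolution).
        TARGETS = [
            ("192.168.1.1", 80),      # assumption: your router's LAN address
            ("8.8.8.8", 53),          # public DNS server, reached by IP
            ("www.example.com", 80),  # a name, so this one includes a DNS lookup
        ]

        for host, port in TARGETS:
            start = time.time()
            try:
                with socket.create_connection((host, port), timeout=5):
                    elapsed = (time.time() - start) * 1000
                    print("%-20s connect in %7.1f ms" % (host, elapsed))
            except OSError as exc:
                elapsed = (time.time() - start) * 1000
                print("%-20s FAILED after %7.1f ms: %s" % (host, elapsed, exc))

    If the IP-literal targets are fast but the hostname target stalls, the problem is resolution (matching the port 53 logs); if everything stalls, look at the router/modem path itself.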


  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application is formed by, currently, two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet, and DNS round robin distributes traffic over both. In the future, the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy. My question is: how do I keep the state of both reverse balancers/proxies in sync? For example, for maintenance purposes, I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the Balancer-Manager web form on each proxy and changing the distribution rules, but I have to do that on each proxy manually and make sure I enter the same thing. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances and updates all of them with the same information? Update: The tool should retrieve the runtime status and make runtime changes, just like the existing Balancer-Manager, only for a number of proxies, not just for one. Modification of configuration files is not what I'm interested in (as there are plenty of tools for that).
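
    Since the Balancer-Manager is itself just a web form, one pragmatic stopgap is a script that replays the same form submission against every proxy. A rough Python sketch; the proxy URLs are placeholders, and the parameter names mirror the balancer-manager form but vary between Apache versions (newer builds also require a nonce), so capture a real submission from your own Balancer-Manager in a browser first and replicate it here:

        import urllib.parse
        import urllib.request

        # The proxies whose balancer-manager pages should receive the same change.
        PROXIES = ["http://proxy1.example.com", "http://proxy2.example.com"]  # placeholders

        def apply_to_all(params):
            """Submit identical balancer-manager form parameters to every proxy."""
            query = urllib.parse.urlencode(params)
            for proxy in PROXIES:
                url = "%s/balancer-manager?%s" % (proxy, query)
                with urllib.request.urlopen(url) as resp:
                    print(proxy, resp.status)

        # Example of the kind of submission the form makes when editing a worker;
        # every key here is an assumption to verify against your Apache version.
        apply_to_all({
            "b": "mycluster",                    # balancer name
            "w": "http://backend1.example.com",  # worker URL
            "w_lf": "1",                         # load factor
        })

    It is not true linking, since nothing reconciles drift if one submission fails, but it does keep a handful of proxies consistent from a single command.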


  • CentOS TCP/IP connection very slow

    - by yuli chika
    I have a VPS (CentOS 6.1, 64-bit) with 4 GB RAM. It has always run well, but in the last few days the server has become slow: opening a small CSS file (2 KB) takes 22 seconds. Tested from home/office/phone with IE, Chrome, Safari and Firefox. Firebug's network panel shows:

        DNS Lookup     4 ms
        Connecting     21.18 s
        Sending        1 ms
        Waiting        115 ms
        Receiving      9 ms

    so the connection alone costs 21.18 seconds. I have checked all the log files; there are no errors. The top command shows there is still free memory:

        top - 00:23:15 up 8 days, 3:57, 1 user, load average: 3.60, 3.42, 3.83
        Tasks: 221 total, 4 running, 217 sleeping, 0 stopped, 0 zombie
        Cpu(s): 19.3%us, 3.2%sy, 0.0%ni, 76.1%id, 1.4%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  4194304k total, 3247724k used, 946580k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

          PID USER    PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
        32357 mysql   15  0 3710m 835m 6268 S 34.5 20.4 39:14.40 mysqld
         9780 apache  15  0  442m  59m  12m S 33.2  1.4  0:05.69 httpd
         9842 apache  15  0  403m  26m  10m S 16.9  0.7  0:01.23 httpd
         9847 apache  15  0  412m  45m  22m R 15.3  1.1  0:01.00 httpd
         9834 apache  15  0  426m  46m  11m R 13.0  1.1  0:02.22 httpd
         9891 apache  15  0  407m  43m  19m S  8.0  1.1  0:00.33 httpd
         9845 apache  15  0  414m  51m  24m S  6.0  1.3  0:01.53 httpd
         9827 apache  15  0  402m  28m  11m S  3.3  0.7  0:02.69 httpd
         9768 apache  16  0  414m  51m  24m S  3.0  1.3  0:06.51 httpd
         9889 root    15  0  211m  12m 8160 S  2.7  0.3  0:00.32 php
         9702 apache  15  0  415m  55m  26m S  1.7  1.4  0:10.67 httpd
         9844 apache  15  0  413m  47m  21m S  1.7  1.2  0:01.21 httpd
         9697 apache  15  0  414m  51m  24m S  1.3  1.3  0:11.05 httpd
         9778 apache  15  0  414m  53m  25m S  1.3  1.3  0:05.38 httpd
         9772 apache  15  0  414m  51m  23m R  0.7  1.3  0:05.04 httpd
         9823 apache  15  0  415m  50m  23m S  0.7  1.2  0:03.97 httpd
         9837 apache  15  0  402m  27m  11m S  0.3  0.7  0:01.04 httpd

    Then, how do I find where the problem is and fix it? I haven't changed any config files in these days. Thanks.


  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (network-attached storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure, so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and it eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via an ethernet cable. The USB drive is a Seagate 1 TB. It can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an ethernet cable can be used during backup to speed up the process.
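
    Given the folder-name-length trouble with xcopy, robocopy (built into Vista and Windows 7) copes better with long paths, and its /MIR mode gives exactly the "restore = plain copy back" property wanted here: after the first full pass, weekly runs only transfer what changed. A sketch driving it from Python; the share and drive paths are placeholders:

        import subprocess

        SRC = r"\\DNS-313\Volume_1"  # placeholder UNC path to the NAS share
        DST = r"E:\nas-backup"       # placeholder path on the USB drive

        # /MIR        mirror the tree (copy changes, delete files removed from the NAS)
        # /R:2 /W:5   retry twice with a 5 s wait, instead of the very patient defaults
        # /LOG+ /TEE  append to a log file and echo to the console
        # /NP         no per-file percentage spam in the log
        result = subprocess.run([
            "robocopy", SRC, DST,
            "/MIR", "/R:2", "/W:5",
            "/LOG+:nas-backup.log", "/NP", "/TEE",
        ])

        # Robocopy exit codes below 8 indicate success (possibly with files copied).
        print("OK" if result.returncode < 8 else "FAILED, code %d" % result.returncode)

    Note that /MIR deletes from the destination anything deleted from the NAS, so it mirrors the current state rather than keeping historical versions; run it over the ethernet cable, as planned, for the first full pass.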


  • Sluggish Windows SBS 2003

    - by TomWilsonFL
    One of my customers has a Windows 2003 Small Business Server which at this point is basically the DC, DNS, Fileserver and Symantec Protection Manager. I have disabled Exchange because I moved their mail to Google Apps. The server is extremely sluggish when doing anything. It is most noticeable when a dialog box is open (say the System properties), and you try to change tabs. This is usually instant, but on this machine can take 3-5 seconds. What additional services / packages can I uninstall from this machine knowing that it is only performing the above roles? Will removing the "Small Business Server" package in Add / Remove Programs get rid of a few unnecessary things? Any other thoughts? P.S. I know Symantec Endpoint and the Protection Manager are hogs, but I have nothing to replace the solution with at the moment. Thanks, Tom UPDATE: I looked over the different performance metrics, but nothing stood out as a problem. One of my friends mentioned Symantec's log and temp files can get quite huge and slow things down, so I ran CCleaner on the machine and found close to 3 GB of Symantec "stuff." Removed that and now the machine is MUCH better. I am still unsure why the data just sitting there would cause such a slowdown. The drive is not even near full. The only thing I can imagine is that Symantec must have to run through this stuff now and then.


  • Two DHCP servers on the same network

    - by CesarGon
    We are setting up a routing link between the Windows Server 2008 networks of two different buildings in my organisation. Each network uses a different IP addressing scheme (one uses public addresses, the other one uses private), but the goal is having a single Windows Server domain across the gap between the buildings. The link is provided by a 100-Mbps point-to-point line. I have always understood that you should not have more than one DHCP server on a network. However, we are planning to put a domain controller on each building, and each domain controller will be a DNS server and a DHCP server as well. The intention is that a machine booting up in building A gets its IP address from the DHCP server closer to it, in building A, while a machine booting up in building B gets an address from the DHCP server in building B. Since the two buildings will be linked and the network will be only one, will this work? How can I avoid that a machine booting up in building A gets an address from the DHCP server in building B (or vice versa)? Thanks.



  • Finding ALL currently used IP addresses of Website

    - by Patrick R
    What steps would you take to discover all (or close to all) IP addresses that are currently used by a website? How would you be as exhaustive as possible without calling the website's admin and asking for the list of IP addresses? ;) nslookup works, but results will vary based on the DNS server queried. whois is another good tool. dig: not bad. Let's use Facebook for example. I'm blocking that site for the majority of our company's users, but some are approved for "research". I cannot easily use OpenDNS because we all appear to come from the same request IP address. I could change that, but I don't want to add more VLANs than I already have. I could also block something like regex facebook1 "facebook\.com" (I'm running a Cisco firewall), but that's pretty easy to sidestep. All that being said, I'm asking specifically about finding the IP addresses for a domain, and not about other methods by which I can block a domain name.
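
    One way to get beyond a single nslookup is to resolve the name repeatedly over time and accumulate every address seen: large sites rotate answers with low TTLs, so one lookup only shows a slice of the pool. A Python sketch using only the stdlib resolver (so it sees what your configured DNS server sees; running it from several networks widens the net further):

        import socket
        import time

        def collect_ips(hostnames, rounds=20, pause=2.0):
            """Resolve each name repeatedly and accumulate every address seen.

            Never provably exhaustive, but repeated lookups against rotating,
            low-TTL answers get much closer to the full set than one query.
            """
            seen = {}
            for _ in range(rounds):
                for name in hostnames:
                    try:
                        for info in socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP):
                            ip = info[4][0]
                            seen.setdefault(ip, name)
                    except socket.gaierror:
                        pass  # transient resolution failure; try again next round
                time.sleep(pause)
            return seen

        # Cover the obvious hostname variants; CDN subdomains need other sources.
        for ip, name in sorted(collect_ips(["facebook.com", "www.facebook.com"]).items()):
            print("%-16s %s" % (ip, name))

    Combining this with whois on the collected addresses often reveals the owning netblocks, which are usually a better unit to block than individual IPs.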


  • Unable to Access Certain Websites

    - by codejoust
    Through a local network, all computers except one Ubuntu machine can access (1) Adobe.com, (2) Icann.org, (3) Apache.org, (4) Example.com. The Ubuntu machine returns (in Firefox): "Though the site seems valid, the browser was unable to establish a connection." Furthermore, when I traceroute those websites from the Ubuntu machine, they all return ubuntu.local, and it ends there:

        traceroute to icann.org (192.0.32.7), 30 hops max, 40 byte packets
         1  ubuntu.local (192.168.1.105)  3000.791 ms !H  3000.808 ms !H  3000.814 ms !H

    I've checked the hosts file, and there isn't anything in there, and I have an Apache server there, so if it was redirected to localhost I'd probably see the localhost webroot page. Thanks in advance!

        user@ubuntu:~$ netstat -nr
        Kernel IP routing table
        Destination   Gateway       Genmask        Flags  MSS Window  irtt Iface
        169.254.0.0   0.0.0.0       255.255.0.0    U        0 0          0 eth1
        192.0.0.0     0.0.0.0       255.0.0.0      U        0 0          0 eth1
        0.0.0.0       192.168.1.1   0.0.0.0        UG       0 0          0 eth1

    The Ubuntu machine is one of six on the network. I'm using OpenDNS for DNS, so I don't think that should be a problem.
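
    A quick way to sanity-check a routing table like the one above is to test which route actually matches a failing destination. A small Python sketch using the stdlib ipaddress module, with the routes copied from the netstat output in the question:

        import ipaddress

        # (destination, genmask, gateway) rows from the netstat -nr output above.
        ROUTES = [
            ("169.254.0.0", "255.255.0.0", None),           # link-local
            ("192.0.0.0",   "255.0.0.0",   None),           # on-link, no gateway!
            ("0.0.0.0",     "0.0.0.0",     "192.168.1.1"),  # default route
        ]

        def matching_routes(dest):
            ip = ipaddress.ip_address(dest)
            for net, mask, gw in ROUTES:
                network = ipaddress.ip_network("%s/%s" % (net, mask))
                if ip in network:
                    yield network, gw

        # icann.org resolves to 192.0.32.7 in the traceroute above; the kernel
        # always uses the most specific (longest-prefix) match that prints here.
        for network, gw in matching_routes("192.0.32.7"):
            print(network, "->", gw or "directly on-link (no gateway)")

    Here the 192.0.0.0/8 entry is more specific than the default route, so it claims 192.0.32.7 as directly attached: the kernel ARPs for it on eth1 instead of sending it to 192.168.1.1, which would produce exactly the !H (host unreachable) responses in the traceroute. Adobe, ICANN, Apache and example.com all have addresses in that range, which fits the observed pattern.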

