Search Results

Search found 87891 results on 3516 pages for 'server migration'.


  • gmail app 504 server timeout

    - by Hui
    This is the part of the code I use to get info from Gmail. It works fine on my localhost, but when I deploy it online I get a 504 Gateway Timeout error. Did I miss something in my code? Can someone give me some advice? Thanks a lot.

        public class GetGmail {
            static String last = null;

            public static ArrayList run(String username, String password, String lastloggin) throws Exception {
                ArrayList result = null;
                System.out.println("Getting Gmail......");
                last = lastloggin;
                Properties props = System.getProperties();
                props.setProperty("mail.store.protocol", "imaps");
                try {
                    Session session = Session.getDefaultInstance(props, null);
                    Store store = session.getStore("imaps");
                    store.connect("imap.googlemail.com", username, password);
                    result = readMessage(store);
                    store.close();
                } catch (NoSuchProviderException e) {
                    e.printStackTrace();
                    return null;
                } catch (MessagingException e) {
                    e.printStackTrace();
                    return null;
                }
                return result;
            }
        }
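
    One detail often worth checking when an IMAP fetch works locally but times out behind a gateway is that no explicit timeouts are set, so a slow connection to imap.googlemail.com can outlive the proxy's 504 window. A minimal sketch of setting JavaMail's documented mail.imaps.* timeout properties (the 20-second values are arbitrary, not from the original post):

        import java.util.Properties;
        import javax.mail.Session;
        import javax.mail.Store;

        public class GmailTimeoutSketch {
            public static Store connect(String username, String password) throws Exception {
                Properties props = System.getProperties();
                props.setProperty("mail.store.protocol", "imaps");
                // Fail fast instead of letting the gateway answer 504 first (values in milliseconds).
                props.setProperty("mail.imaps.connectiontimeout", "20000");
                props.setProperty("mail.imaps.timeout", "20000");
                Session session = Session.getInstance(props, null);
                Store store = session.getStore("imaps");
                store.connect("imap.googlemail.com", username, password);
                return store;
            }
        }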

    Read the article

  • server not sending custom header values

    - by egza
    I'm using PHP 5.2.17 to fetch a remote page. The HTTP request should contain some cookie values, but the cookies are not delivered to the destination page.

        $url = 'http://somesite.com/';
        $opts = array(
            'http' => array(
                'header' => array("Cookie: field1=value1; field2=value2\r\n")
            )
        );
        $context = stream_context_create($opts);
        echo file_get_contents($url, false, $context);

    Can you help me find the problem? Note: I can't use cURL. Thanks.
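
    One thing worth checking on PHP 5.2 is the array form of the 'header' context option; older releases expect a single CRLF-separated string rather than an array. A sketch of the same request with the header passed as a string, assuming that is the culprit here:

        <?php
        $url = 'http://somesite.com/';
        $opts = array(
            'http' => array(
                'method' => 'GET',
                // Pass all request headers as one CRLF-separated string.
                'header' => "Cookie: field1=value1; field2=value2\r\n"
            )
        );
        $context = stream_context_create($opts);
        echo file_get_contents($url, false, $context);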

    Read the article

  • [Know How] Oracle WebLogic Server 11g: JRuby and JMX | WebLogic Channel

    - by ???02
    [Japanese-language article; most of the text was garbled in extraction. Recoverable details: it discusses using JRuby with the JMX API (part of Java SE) to monitor and manage Oracle WebLogic Server 11g Release 1, and lists an environment including Oracle Database 11g Release 2, JDK 1.6, JRuby 1.5.6 and Oracle WebLogic Server 11g Release 1.]

    Read the article

  • W-1 contest report from a WebLogic Server meetup [Japanese; original title garbled]

    - by Masa.Sasaki
    [Japanese-language post; most of the text was garbled in extraction. Recoverable details: the 14th WebLogic Server meetup, held on February 16, featured the "W-1" contest; sessions touched on JVM and JDBC topics, and one entry used Twitter4J (written "Tiwitter4J" in the original) on WebLogic Server 11g. Further WebLogic Server meetups were announced for March 9 and March 16.]

    Read the article

  • What tells initramfs or the Ubuntu Server boot process how to assemble RAID arrays?

    - by Brad
    The simple question: how does initramfs know how to assemble mdadm RAID arrays at startup?

    My problem: I boot my server and get:

        Gave up waiting for root device.
        ALERT! /dev/disk/by-uuid/[UUID] does not exist. Dropping to a shell!

    This happens because /dev/md0 (which is /boot, RAID 1) and /dev/md1 (which is /, RAID 5) are not being assembled correctly. What I get is that /dev/md0 isn't assembled at all. /dev/md1 is assembled, but instead of using /dev/sda2, /dev/sdb2, /dev/sdc2, and /dev/sdd2, it uses /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd. To fix this and boot my server I do:

        $(initramfs) mdadm --stop /dev/md1
        $(initramfs) mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
        $(initramfs) mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
        $(initramfs) exit

    and it boots properly and everything works. Now I just need the RAID arrays to assemble properly at boot so I don't have to assemble them manually. I've checked /etc/mdadm/mdadm.conf, and the UUIDs of the two arrays listed in that file match the UUIDs from mdadm --detail /dev/md[0,1].

    Other details: Ubuntu 10.10, GRUB2, mdadm 2.6.7.1.

    UPDATE: I have a feeling it has to do with superblocks. mdadm --examine /dev/sda outputs the same thing as mdadm --examine /dev/sda2. mdadm --examine /dev/sda1 seems to be fine because it outputs information about /dev/md0. I don't know if this is the problem or not, but it seems to fit with /dev/md1 getting assembled with /dev/sd[abcd] instead of /dev/sd[abcd]2. I tried zeroing the superblock on /dev/sd[abcd]. This removed the superblock from /dev/sd[abcd]2 as well and prevented me from being able to assemble /dev/md1 at all. I had to run mdadm --create to get it back, which also put the superblocks back the way they were.
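
    For reference, the assembly rules the initramfs uses come from the copy of /etc/mdadm/mdadm.conf that was baked into the initramfs image when it was built, so a correct file on the root filesystem is not enough by itself. A rough sketch of refreshing that embedded copy on Ubuntu (device names are the ones from the question; run as root):

        # Capture the arrays as mdadm currently sees them (ARRAY lines with UUIDs).
        mdadm --examine --scan

        # If those lines look right and match /etc/mdadm/mdadm.conf, rebuild the
        # initramfs so the boot-time copy of the config is updated too.
        update-initramfs -u

        # Optionally verify the embedded copy afterwards (lsinitramfs, if available
        # on this release, lists the image contents).
        lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm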

    Read the article

  • How to verify PostgreSQL 9 has installed correctly on a CentOS server?

    - by A4J
    I'm trying to install the pg (Postgres) gem on a CentOS server, but it keeps saying Postgres is too old, even though I have upgraded it to 9.1.3 (as per the instructions here: http://www.davidghedini.com/pg/entry/install_postgresql_9_on_centos). I am using CentOS 5.8 and Ruby 1.9.3. Here is the error message:

        Building native extensions.  This could take a while...
        ERROR:  Error installing pg:
            ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for pg_config_manual.h... yes
        checking for PQconnectdb() in -lpq... yes
        checking for PQconnectionUsedPassword()... no
        Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers.  Check the mkmf.log file for more details.  You may
        need configuration options.

    psql --version confirms my version:

        psql (PostgreSQL) 9.1.3

    I can confirm the packages installed:

        Setting up Install Process
        Package postgresql91-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-devel-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-server-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-libs-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-contrib-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Nothing to do

    Any ideas on how to troubleshoot this? Thanks in advance.
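
    Worth noting from the build log: extconf.rb is reading /usr/bin/pg_config, which on CentOS 5 normally belongs to the stock 8.x packages rather than the PGDG 9.1 build. A sketch of pointing the gem at the 9.1 pg_config instead (the /usr/pgsql-9.1 path is the usual PGDG location, so verify it on this box first):

        # Find where the 9.1 packages actually put pg_config.
        rpm -ql postgresql91-devel | grep pg_config

        # Tell the gem's extconf.rb to use that binary instead of /usr/bin/pg_config.
        gem install pg -- --with-pg-config=/usr/pgsql-9.1/bin/pg_config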

    Read the article

  • Cisco "copy startup-config tftp" results in a 0 byte file on the server?

    - by Geoffrey
    I'm pulling my hair out figuring this out. My startup-config is good; I can view it with a show command. I'm trying to copy it to a TFTP server:

        asa5505# copy startup-config tftp
        Address or name of remote host []? ipaddress
        Destination filename [startup-config]? t
        !!
        %Error writing tftp://ipaddress/t (Timed out attempting to connect)

    On my TFTP server (SolarWinds), I get the following:

        binary, PUT. Started file name: C:\TFTP-Root\t
        binary, PUT. File Exists, C:\TFTP-Root\t
        binary, PUT. Deleting Existing File.
        binary, PUT. Interrupted by client, cause: The process cannot access the file 'C:\TFTP-Root\t' because it is being used by another process

    I've used tftpd32 with the same results. I've tried different servers, even one on the same network as the ASA ... same results. It'll create a 0-byte file and never do the dump. What's going on? Everything is working normally except for this.

    Read the article

  • DNS configuration issues. Clients inside network unable to resolve DNS server's name

    - by hydroparadise
    I set up the DNS service on Ubuntu 12.04 64-bit and all appears to be well, except that my DHCP clients do not recognize my DNS server's hostname. When doing an nslookup on one of my Windows clients, I get:

        C:\Users\chad>nslookup
        Default Server:  UnKnown
        Address:  192.168.1.2

    where I would expect the FQDN in the spot where UnKnown is shown. The DNS server knows itself pretty well, but I think only because I have an entry in the /etc/hosts file to resolve it. There are so many places to look I don't even know where to begin. Are there any logs I can look at? Something. Places I've looked at and configured:

        /etc/bind/zones/domain.com.db
        /etc/bind/zones/rev.1.168.192.in-addr.arpa
        /etc/bind/named.conf.local

    EDIT: '/etc/bind/zones/rev.1.168.192.in-addr.arpa'

        @ IN SOA dns-serv1.mydomain.com [email protected]. (
            2006081401;
            28800;
            604800;
            604800;
            86400 )
        IN NS dns-serv1.mydomain.com.
        2 IN PTR dns-serv1
        2 IN PTR mydomain.com

    EDIT 2: '/etc/bind/named.conf.local'

        zone "mydomain.com" {
            type master;
            file "/etc/bind/zones/mydomain.com.db";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
        };
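
    For context, nslookup's "Default Server: UnKnown" comes from the reverse (PTR) lookup of 192.168.1.2, and in BIND a PTR target without a trailing dot has the zone origin appended to it, so "dns-serv1" expands to dns-serv1.1.168.192.in-addr.arpa. A sketch of the conventional form of that reverse zone, reusing the names from the question (the admin mailbox "admin.mydomain.com." is a stand-in, since the original address is redacted):

        $TTL 86400
        @   IN  SOA dns-serv1.mydomain.com. admin.mydomain.com. (
                    2006081402 ; serial (bumped after editing)
                    28800      ; refresh
                    604800     ; retry
                    604800     ; expire
                    86400 )    ; minimum
            IN  NS  dns-serv1.mydomain.com.
        2   IN  PTR dns-serv1.mydomain.com.   ; 192.168.1.2 -> FQDN, note the trailing dot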

    Read the article

  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different IPs, but all requesting similar DNS records:

        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
        May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied

    I've read about Dan Kaminsky's DNS cache-poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for .com and some for .org, most looking for an MX record. There are requests from multiple IPs, but each IP will make hundreds of requests per day. So this smells to me like an attack. What is the defense against this?

    Read the article

  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory on the same server?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements:

        - I do not want to have to create Active Directory accounts for each external contact.
        - If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for, so that management can handle adding their own external contacts.
        - Each client site must be sandboxed from the others and from my main company SharePoint site.
        - I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname.

    If anybody has ever done this, I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle

    Read the article

  • How do I apply multiple subnets to a server with one NIC?

    - by Cosban
    I am trying to route multiple IPs through one physical NIC on my dedicated server, for use with Proxmox KVM VMs. The dedicated server is currently running Debian 4.4.5-8 with 3 IP addresses available for use, which I will write as 176.xxx.xxx.196 (main), 176.xxx.xxx.198 (on the same subnet as main) and 5.xxx.xxx.166 (a different subnet). I am currently trying to route the third IP address on the dedi for use with a VPS that I have set up using Proxmox v2.x, but am having a really, really hard time doing so. Virtual interfaces binding the additional IP addresses work as expected, ruling out external routing problems. The provider has given the following information for the IP addresses on the main subnet:

        gateway:   176.xxx.xxx.193
        netmask:   255.255.255.224
        broadcast: 176.xxx.xxx.223

    as well as the following information for the IP address on the second subnet:

        gateway:   5.xxx.xxx.161
        netmask:   255.255.255.248
        broadcast: 5.xxx.xxx.167

    Everything I've tried with /etc/network/interfaces has either not worked or has rendered the network completely useless. This is the current state of the file, which has the secondary IP address working on the same subnet, as well as IPv6 working, but not the second subnet:

        # Native IPv6 interface
        iface eth0 inet6 manual

        # Bridge IPv4 interface (176.xxx.xxx.193/27)
        auto vmbr0
        iface vmbr0 inet static
            address 176.xxx.xxx.196
            netmask 255.255.255.224
            gateway 176.xxx.xxx.193
            broadcast 176.xxx.xxx.223
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 176.xxx.xxx.198/27 dev vmbr0

        auto vmbr1
        iface vmbr1 inet static
            address 5.xxx.xxx.166
            netmask 255.255.255.248
            gateway 5.xxx.xxx.161
            broadcast 5.xxx.xxx.167
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0
            post-up ip addr add 5.xxx.xxx.166/27 dev vmbr1

        # Bridge IPv6 interface (range: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx::/64)
        iface vmbr0 inet6 static
            address xxxx:xxxx:xxxx:xxxx:xxxx:xxxx
            netmask 64
            up ip -6 route add xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            up ip -6 route add default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0
            down ip -6 route del default via xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev vmbr0

    Read the article

  • Do you need to advertise an AFP service via Avahi for an Ubuntu Server to show up in OSX Finder?

    - by James
    I am only advertising an NFS share plus the "model", and I don't want to install extra services on the server unless I have to (i.e. netatalk), as it is used solely for NFS exports. Currently there is no entry in Finder under "Shared" with the Avahi config below.

        serveradmin@FILESERVER:/etc/avahi/services$ cat nfs.service
        <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_nfs._tcp</type>
            <port>2049</port>
            <txt-record>path=/Volumes/StoragePool</txt-record>
          </service>
          <service>
            <type>_device-info._tcp</type>
            <port>0</port>
            <txt-record>model=Xserve</txt-record>
          </service>
        </service-group>

    Server: Ubuntu 12.04.01 x64
    Clients: OSX 10.6.8, 10.7.5, 10.8.2

    The goal is to advertise that NFS share, then assign a really old model code of Mac like a PowerMac and switch out the icon for a more "LinuxServer-y" one, plus allow users to connect to NFS in a manner they are familiar with, like our other Xserve servers. I think Avahi is working in general, because if I do:

        nfs://FILESERVER.local/Volumes/StoragePool

    it will connect fine. Any ideas?
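
    For what it's worth, Finder's "Shared" sidebar is generally populated from AFP and SMB advertisements rather than _nfs._tcp records, so an NFS-only service file tends not to appear there. A sketch of an extra Avahi service file (the filename /etc/avahi/services/afpd.service is just a suggestion) that advertises an AFP record purely for discovery; since nothing actually listens on 548 without netatalk, double-clicking the entry would fail and users may still need Go > Connect to Server for the nfs:// URL:

        <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <!-- Advertise an AFP record only so Finder lists the host under "Shared". -->
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_afpovertcp._tcp</type>
            <port>548</port>
          </service>
        </service-group>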

    Read the article

  • How do I set up DNS with nic.io to point to an AWS EC2 server?

    - by Chad Johnson
    I purchased a domain one week ago via nic.io. I have elected to provide my own DNS [because they provided no other option]. I'm trying to point my .io domain at my EC2 server instance. I've allocated an Elastic IP and associated it with the instance. I can SSH into the instance and access port 80 via the IP address just fine. The IP is 54.235.201.241. nic.io support said the following: "You have selected to provide your own DNS and therefore if there is an issue with the set-up of the name servers you will need to contact your DNS provider." So, I created a Hosted Zone via Route 53 in AWS. This created NS and SOA records. I then set the primary and secondary servers at nic.io's domain admin page to the SOA record domains. Additionally, I set the optional servers to the NS domains. I did this two days ago, and I can't access the server via the domain. I ran a DNS check here and am still not sure what I need to do: http://mydnscheck.com/?domain=chadjohnson.io&ns1=&ns2=&ns3=&ns4=&ns5=&ns6=. I have no idea what I'm supposed to do. Does anyone have any ideas?
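
    As background, the four name servers to hand to the registrar are the ones in the hosted zone's NS record set (the SOA only names one of them), and the zone also needs an A record pointing the bare domain at the Elastic IP. A sketch of checking both from a shell, using the domain and IP from the question (the awsdns server name below is a placeholder for one of the NS hosts shown in the Route 53 console):

        # Walk the delegation from the root to see which name servers .io hands out
        # for the domain (they should match the four NS hosts in the Route 53 zone).
        dig +trace NS chadjohnson.io

        # Ask one of the Route 53 servers directly for the A record.
        dig +short A chadjohnson.io @ns-123.awsdns-45.com

        # Expected answer once delegation and an A record exist: 54.235.201.241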

    Read the article

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some Unix boxes set up in a way that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. There is no direct ssh connection allowed to 'dev' from anywhere but 'jumpoff'. Furthermore, only key exchange is allowed on both servers, and you always log in to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a tunnel or something... I have set up a FreeBSD VM and I can get things to work perfectly using the Unix ssh tools. Basically all I do is make sure my VM's ~/.ssh/id_rsa.pub key is on both jumpoff and dev and use this ~/.ssh/config file:

        # Development Server
        # this must be a resolvable name for "dev" from Jumpoff
        Host ext-dev
            Hostname 1.2.3.4
            User build
            IdentityFile ~/.ssh/id_rsa

        # The Jumpoff Server
        Host ext
            Hostname 1.1.1.1
            User daevid
            Port 22
            IdentityFile ~/.ssh/id_rsa

        # This must come below all of the above
        Host ext-*
            ProxyCommand ssh ext nc $(echo '%h'|cut -d- -f2-) 22

    Then I just simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get this same thing to work using either PuTTY or SecureCRT -- and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but it doesn't seem to be the same concept. I've been messing with various trial and error most of the day and nothing has worked (obviously), and I'm at the end of my ssh knowledge and Google searching. I found this link, which seemed to be perfect, but it doesn't work for me: the "Master" connects fine, but the "client" portion doesn't connect. It tells me the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight-up example of how to do this in Windows?
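
    On the PuTTY side, the rough equivalent of that ProxyCommand is a "Local" proxy that runs plink through the jumpoff host; plink's -nc (netcat) mode plays the role nc plays above and requires SSH-2. A sketch of the settings, reusing the hosts and users from the question (exact panel labels may differ slightly between PuTTY versions):

        Session > Host Name:                 1.2.3.4   (dev), Port 22
        Connection > Data > Auto-login user: build
        Connection > Proxy > Proxy type:     Local
        Connection > Proxy > local proxy command:
            plink.exe daevid@1.1.1.1 -agent -nc %host:%port

        # %host and %port are substituted by PuTTY with the target (dev) values;
        # the key for daevid@jumpoff should be loaded in Pageant for -agent to work.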

    Read the article

  • What web-based tool allows a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy to use for non-technical users.

    The developers are logging in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, looking at the currently installed Webmin, it is not as easy to create the authorized keys as it is to create user accounts and passwords (it's possible, but it's not easy; some functionality is in the Usermin module, or would require some tedious steps).

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to the user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.

    Read the article

  • How to create RPM for 32-bit arch from a 64-bit arch server?

    - by Gnanam
    Our production server is running CentOS 5, 64-bit arch. Because there are currently no RPMs available for the latest SQLite version (v3.7.3), I created an RPM using rpmbuild for the very first time, following the instructions given here. I was able to successfully create an RPM for the 64-bit (x86_64) architecture, but I am not able to create an RPM for the 32-bit (i386) architecture. It failed with the following errors:

        ...
        + ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=i386-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --enable-threadsafe
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        checking for style of include used by make... GNU
        checking for x86_64-redhat-linux-gnu-gcc... no
        checking for gcc... gcc
        checking for C compiler default output file name...
        configure: error: C compiler cannot create executables
        See `config.log' for more details.
        error: Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

        RPM build errors:
            Bad exit status from /var/tmp/rpm-tmp.73141 (%build)

    This is the command I called:

        rpmbuild --target i386 -ba sqlite.spec

    My question is: how do I create an RPM for the 32-bit arch from a 64-bit arch server?
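
    One common reason configure reports "C compiler cannot create executables" in an i386-targeted build on a pure x86_64 host is missing 32-bit glibc/compiler support, and rpmbuild alone does not change the reported architecture. A sketch of both adjustments (the package names are the usual CentOS 5 .i386 ones and should be verified against config.log):

        # Install 32-bit build support (exact package set may vary on CentOS 5).
        yum install glibc-devel.i386 libgcc.i386

        # Run the build under a 32-bit personality so configure detects i386 properly.
        setarch i386 rpmbuild --target i386 -ba sqlite.spec

        # If it still fails, inspect config.log from the failed %build directory.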

    Read the article

  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin securely, via my server's IP address. Will the below configuration work? I have only one shot to get this working in production. Are there any redundant settings?

        ---apache ssl.conf file---
        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

        ---apache httpd.conf file----
        ...
        DocumentRoot "/var/www/html"   #currently exists
        ...
        NameVirtualHost *:443          #new - is this really needed if "Listen 443" is in ssl.conf???
        ...

        #below vhost currently exists; the domain I wish to enable SSL on
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        #below vhost currently exists
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        #new - I plan on adding this vhost block to enable ssl for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "/var/www/html/hiddenfolder".
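
    One point to keep in mind: a request for the bare IP on port 443 is answered by whichever *:443 vhost matches first, so with only the domain1 SSL vhost defined, https://173.XXX.XXX.XXX/ would serve domain1's DocumentRoot rather than /var/www/html. A sketch of a separate catch-all SSL vhost for the phpMyAdmin case (this is an illustration, not part of the original plan; the certificate still names domain1.com, so browsers will warn when the raw IP is used):

        <VirtualHost _default_:443>
            SSLEngine on
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt
            # Keep serving /var/www/html so /hiddenfolder/phpmyadmin stays reachable by IP.
            DocumentRoot /var/www/html
        </VirtualHost>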

    Read the article

  • Why is the wrong name server information at crsnic.net & gtld-servers.net?

    - by danorton
    Did I screw this up? I don't even know how this might have happened, so I'd like to learn. I'm trying out HostGator's reseller service and I bought a domain name through it, but I didn't want the default name servers, so I changed them during the registration. After registration, the domain name record is correct everywhere except at whois-servers.net and whois.crsnic.net, and it looks like the DNS network is using that same information.

        $ whois -h whois.enom.com. example.com
        ...
        Name Servers:
        dns1.name-services.com
        dns2.name-services.com
        dns3.name-services.com
        dns4.name-services.com
        dns5.name-services.com
        ...

        $ whois -h whois.crsnic.net. example.com
        Domain Name: EXAMPLE.COM
        Registrar: ENOM, INC.
        Whois Server: whois.enom.com
        Referral URL: http://www.enom.com
        Name Server: NS1.HOSTGATOR.COM
        Name Server: NS2.HOSTGATOR.COM
        Status: clientTransferProhibited
        Updated Date: 01-jun-2010
        Creation Date: 31-may-2010
        Expiration Date: 31-may-2011
        >>> Last update of whois database: Tue, 01 Jun 2010 19:20:47 UTC <<<
        ...

        $ dig +norecurse @b.gtld-servers.net. example.com. NS
        ...
        ;; AUTHORITY SECTION:
        example.com.    172763    IN    NS    ns2.hostgator.com.
        example.com.    172763    IN    NS    ns1.hostgator.com.
        ...

    My next step is to let HostGator have a look, but first I want to better understand how this happened. Thanks.

    Read the article

  • Backup solution to backup terabytes and lots of static files on linux server?

    - by user28679
    Which backup tool or solution would you use to back up terabytes and lots of files on a production Linux server? Note that the files are all different and almost never modified, and usage is mostly adding files, so the data volume is today 3TB, growing all the time at around +15GB/day.

    Please do not reply rsync. Basic Unix tools are not enough; rsync does not keep history, and rdiff-backup miserably fails from time to time and screws up the history. Moreover these are all file-based backups, which generate a lot of IOwait just to browse directories and query stat(). But I guess, except for R1Soft CDP, there is no way around that.

    We tried R1Soft CDP backup, which is block-level backup, and it proved good and efficient for all our other servers, but it systematically fails on the server with 3 terabytes and gazillions of files. For more than 2 months now the engineers of R1Soft and the datacenter have been playing a hot-ball game... and still no backup except regular rsync.

    We never tried big commercial solutions, except R1Soft CDP, since it was provided as an optional service by the datacenter hosting our servers.

    Read the article

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 super user, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:

        - centralize and share 90GB of files using a Dropbox (this way we will have LAN sync of local working directories + internet backup + access to our files wherever we are)
        - centralize our plotter, printers and fax machine
        - back up all the workstations
        - share Outlook calendar and tasks
        - run 24x7 while saving some energy

    Of course this setup is just the first step toward a more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€. We are not tech gurus, but at least one of us is a power user close to becoming a geek. I've read some articles on macminicolo about a Mac mini, either standard or with Snow Leopard Server. I heard about Windows Home Server too, on the Lifehacker website, but I'm in a sort of analysis paralysis. Can you help me?

    Read the article

  • sbs-server with 2 nics and 2 connections to the internet with different providers not working as it

    - by erik-van-gorp
    We have the following configuration: an SBS 2003 server in a domain (mydomain.com) with 2 network cards, each connected to a different network (provider), with different gateways: one for web and one for mail and clients. (We do this because the bandwidth we get from our providers is too small to handle all the mail (+spam) traffic and web services, so we took 2 providers.)

    DNS is as follows:

        www.mydomain.com    1.2.3.4
        mail.mydomain.com   5.6.7.8

    NIC 1 (192.168.1.3) is connected to the internet through a firewall at 192.168.1.1, with WAN address 1.2.3.4. NIC 2 (10.0.0.3) is connected to the internet through a firewall at 10.0.0.1, with WAN address 5.6.7.8. Both NICs have their default gateway set to their corresponding router, and the metrics are set equal. (I know this isn't a supported config, but it works, more or less.)

    In this configuration I can use RDP on both WAN addresses, and telnet to port 25 works on both as well. The issue is that for a few weeks now we have been getting regular disconnections and website hiccups (timeouts), several per hour. If we set one router to a higher metric, that route no longer works. In short, I want the mail to route through NIC 2 and the web through NIC 1. Any better configuration (without installing a second mail server)?

    Read the article

  • College network - can I point non-domain student computers to our SUS server?

    - by Joel Coel
    Since I started here 3 months ago, one of the things that's really bothered me about the way this network is set up is something that shows up on the daily bandwidth consumption report. I get a list of top-visited sites by hits and by size, and invariably the top site (to the point that it's bigger than all the other top sites combined) is au.download.windowsupdate.com. We're pulling in ~30GB/day in Windows updates. This is every day, not just after a patch Tuesday; after a patch day, it jumps closer to 40GB for a couple of days.

    The key here is that almost none of it is from machines that I'm responsible for. My machines are for the most part fully patched, and when they're not, they'll pull from a SUS server, so new updates are downloaded only once. It used to be closer to 50GB/day because most of the machines in our computer labs use DeepFreeze and weren't applying updates correctly, but that's fixed now. So the problem is definitely student-owned machines in the dorms, some of which are re-downloading the same updates in the background each day, over and over.

    I'd love to have these machines start pulling from our SUS server. Then, even if they don't ever actually install the updates, at least they're not leeching bandwidth from our public internet connection. Any ideas on how to resolve the situation?
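
    For machines that are not domain members, the Automatic Updates client can still be pointed at an internal SUS/WSUS server through local policy rather than Group Policy. A sketch of a .reg file students could be asked to apply, using the documented WindowsUpdate policy values (the server URL is a placeholder for the campus SUS host, and this only works if the server accepts those clients):

        Windows Registry Editor Version 5.00

        ; Point the Automatic Updates client at the campus SUS/WSUS server
        ; (http://sus.example.edu is a placeholder for the real hostname).
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"="http://sus.example.edu"
        "WUStatusServer"="http://sus.example.edu"

        ; Tell Automatic Updates to use the server above; AU itself must still be enabled.
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "UseWUServer"=dword:00000001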

    Read the article

  • Is there any way for ME to improve routing to an overseas server? [migrated]

    - by Simon Hartcher
    I am trying to connect to a gaming server in Asia from Australia, but my ISP routes my connection through the US.

        Tracing route to worldoftanks-sea.com [116.51.25.54]
        over a maximum of 30 hops:
          1    <1 ms    <1 ms    <1 ms  192.168.1.1
          2    34 ms    42 ms    45 ms  10.20.21.123
          3    40 ms    40 ms    43 ms  202.7.173.145
          4    51 ms    42 ms    36 ms  syd-sot-ken-crt1-ge-6-0-0.tpgi.com.au [202.7.171.121]
          5   175 ms   200 ms   195 ms  ge5-0-5d0.cir1.seattle7-wa.us.xo.net [216.156.100.37]
          6   212 ms   228 ms   229 ms  vb2002.rar3.sanjose-ca.us.xo.net [207.88.13.150]
          7   205 ms   204 ms   206 ms  207.88.14.226.ptr.us.xo.net [207.88.14.226]
          8   207 ms   215 ms   220 ms  xe-0.equinix.snjsca04.us.bb.gin.ntt.net [206.223.116.12]
          9   198 ms   201 ms   199 ms  ae-7.r20.snjsca04.us.bb.gin.ntt.net [129.250.5.52]
         10   396 ms   391 ms   395 ms  as-6.r20.sngpsi02.sg.bb.gin.ntt.net [129.250.3.89]
         11   383 ms   384 ms   383 ms  ae-3.r02.sngpsi02.sg.bb.gin.ntt.net [129.250.4.178]
         12   364 ms   381 ms   359 ms  wotsg1-slave-54.worldoftanks.sg [116.51.25.54]
        Trace complete.

    Since I think it is unlikely that my ISP will do anything, are there any ways to improve my routing to the server without them having to intervene? N.B. The game runs predominantly over UDP, so I believe most low-ping services are out of the question, as they rely on TCP traffic.

    Read the article
