Search Results

Search found 31242 results on 1250 pages for 'looking for hosting'.


  • Why would e-mail from our own domain not be forwarded to Gmail?

    - by netboffin
    To solve a problem with spam on our server, we tried forwarding e-mail from our dedicated server's mail server (Matrix SMTP service) to Gmail, but while most e-mails got through, e-mail from our own domain all went missing. It wasn't in the inbox or spam or anywhere else. We've had to go back to using the old system, which means my boss gets a huge amount of spam. We have a Windows 2003 server with IIS 6 and the Matrix SMTP service installed. I've toyed with the idea of installing a mail proxy like ASSP, but it looks pretty complicated. We're hosting 20 domains on the server as well as our own, which has an online shop whose payment system depends on e-mail. I can't start playing around with complicated solutions when they could have disastrous consequences and I don't know enough to implement them safely. So my question has two parts. Part One: Why can't we forward e-mails from people using the same domain? If our domain was foobar.com, then [email protected] can't receive from [email protected], but he can receive from everyone else. Part Two: Is there a really simple server-side solution to spam that would work with Matrix? For instance, POPFile?
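
    A likely culprit for same-domain mail vanishing is Gmail's anti-spoofing checks: forwarded mail claiming to be from foobar.com fails SPF when the forwarding server is not a permitted sender for the domain. A quick way to inspect the domain's published SPF policy (a sketch; dig is assumed available and foobar.com stands in for the real domain):

        # Show the domain's TXT records and look for a v=spf1 policy
        dig +short TXT foobar.com
        # A strict policy ending in -all, combined with forwarding through a
        # server not listed in it, can make Gmail discard or reject the mail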

    Read the article

  • Running KVM/XEN/Hyper-V VMs from a RAM disk, is this possible? Practical?

    - by Ausmith1
    Currently I'm using ESX (v3 and v4) to test a scripted OS (Windows 2003) and application install DVD. The DVD ISO (8 GB) is mounted on a 1 Gbps NFS datastore and the VMDKs (20 GB) are on an SSD mounted via NFS over a 10 Gbps link. It still takes a lot longer than I'd really like to run through a test iteration, and I'm wondering if mounting the virtual disks and ISO on a RAM disk on the same server the hypervisor is running on would be worth my while. I can dedicate a server to this VM, and 32 GB of RAM in the system should be adequate to do the trick, I'd guess (1 GB hypervisor OS + 28 GB RAM disk + 2 GB for the VM is less than the 32 GB available to me). Since hosting a RAM disk within ESX does not seem possible, I'm open to trying KVM/Xen/Hyper-V; KVM would probably be my first choice of the three. Has anyone out there tried this? Bear in mind this is purely for a test run of the installer; the VM will be discarded as soon as the test is completed, so I'm not worried about losing data from the remote possibility of a power failure.
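
    On Linux-based KVM, a tmpfs mount is the usual way to put VM storage in RAM; a minimal sketch, assuming the 28 GB sizing from the question and placeholder paths:

        # Create a 28 GB RAM-backed filesystem and stage the ISO and disk image on it
        mkdir -p /mnt/ramdisk
        mount -t tmpfs -o size=28g tmpfs /mnt/ramdisk
        cp /srv/images/install.iso /srv/images/test-vm.img /mnt/ramdisk/
        # Then point the VM definition's disk and CD-ROM paths at /mnt/ramdisk;
        # everything there vanishes on reboot, which is fine for a throwaway test VM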

    Read the article

  • Apache Probes -- what are they after?

    - by Chris_K
    The past few weeks I've been seeing more and more of these probes each day. I'd like to figure out what vulnerability they're looking for, but haven't been able to turn anything up with a web search. Here's a sample of what I get in my morning Logwatch emails:

        A total of XX possible successful probes were detected (the following URLs contain strings that match one or more of a listing of strings that indicate a possible exploit):

        /MyBlog/?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 301
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        //index2.php?option=com_myblog&Itemid=1&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200

    This is coming from a current CentOS 5.4 / Apache 2 box with all updates. I've manually tried entering a few in to see what they get, but those all appear to just return the site's home page. This server is just hosting a few Joomla! sites... but this doesn't seem to be targeting Joomla (as far as I can tell). Anyone know what they're probing for? I just want to make sure whatever it is I've got it covered (or not installed). The escalation of these entries has me a bit concerned.
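
    These URLs appear to match a known local-file-inclusion probe against the Joomla com_myblog component; even with the component absent, requests reaching for /proc/self/environ can simply be refused. A hedged mod_rewrite sketch (assumes mod_rewrite is enabled):

        # Return 403 for any query string that tries to traverse to /proc/self/environ
        RewriteEngine On
        RewriteCond %{QUERY_STRING} proc/self/environ [NC]
        RewriteRule .* - [F,L]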

    Read the article

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools (RightScale, Scalr, custom scripts) for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc.). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to build all of my webservers at the same time, as that will almost certainly result in some downtime or 500s for my customers. I'd rather have one or two servers build at a time, while the other servers are still available to handle requests. The other obvious alternative is to launch new servers that automatically update themselves on boot, but this isn't as cost-effective and most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of it. I'm not talking about a simple ssh loop, clusterssh, or even MCollective - I'd prefer something with a nice simple interface that lets me specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and builds them sequentially (the pattern sketched below). Does anyone know of easy ways to get this done, or is this a candidate for a new open source project?
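
    For reference, the sequential pattern such a tool would automate looks roughly like this sketch (hostnames, the deploy command, and the health-check URL are all placeholders):

        # Deploy to one server at a time, waiting for a health check before moving on
        for host in web1 web2 web3; do
            ssh "$host" 'cd /srv/app && ./deploy.sh'
            until curl -sf "http://$host/healthz" > /dev/null; do
                sleep 5   # keep the rest of the pool serving until this node is back
            done
        done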

    Read the article

  • Are there any tests I can run on a network to simulate 100 heavy network users?

    - by marc.gayle
    I will be hosting a Ruby on Rails workshop at a small hotel in the near future, and while they have 'Wifi' everywhere on the property, and the property normally hosts 150-300 people, I am not 100% confident that they have ever hosted 150 tech people with heavy web surfing habits/needs. Their tech department is also only 1 or 2 guys. Are there any automated tests I can download and run from my laptop, on the network, that would simulate 100 'heavy users' on the network at the same time? Their broadband pipe is a 15 Mbps cable connection. Would that suffice for the general surfing needs of 100-150 techies? I know all it takes is 1 or 2 BitTorrent users to kill the entire network, but assuming we can at the very least block those ports or encourage the attendees not to file-share on the network, would that speed suffice for general surfing needs? What are good resources online that would let me quickly get up to speed on the IT issues involved, so that I can ask their sysadmins the right questions? Edit: Note that I am fairly technical, so assume I can get up to speed quickly even with technical manuals, etc.
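
    ApacheBench can approximate concurrent load from a single laptop; a minimal sketch (the URL is a placeholder, and 100 concurrent keep-alive connections is only a rough stand-in for 100 surfing humans):

        # 10,000 requests at a concurrency of 100, against a page reachable through the hotel pipe
        ab -n 10000 -c 100 -k http://example.com/
        # siege, if installed, gets closer to real browsing with a list of URLs:
        # siege -c 100 -t 5M -f urls.txt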

    Read the article

  • Error during SSL installation cPanel/WHM

    - by baswoni
    I have a dedicated server and I am using the install wizard via WHM to install an SSL certificate. I have the following keys: certificate key, RSA private key, and CA certificate. I paste these three elements into the wizard along with the domain, IP address, and username, but I get this error:

        SSL install aborted due to error: Unable to save certificate key. Certificate verification passed

    Have I missed a step? I have given it another go to make sure I am copying and pasting the info correctly, and I am now getting the following error:

        SSL install aborted due to error: Sorry, you must have a dedicated ip to use this feature for the user: username! If you are intending to install a shared certificate you must use the username "nobody" for security and bandwidth reporting reasons.

    Even though I am using a dedicated IP address, I am getting this problem. I thought I would also add that this SSL certificate was previously installed in a shared hosting environment with my previous hosting provider. The account with them is still active; however, the domain and its contents now reside on the dedicated server - could this cause problems?
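
    "Unable to save certificate key" is often a certificate/key mismatch; openssl can confirm whether the pasted pair actually belong together before retrying the wizard (file names are placeholders):

        # The two digests must be identical if the private key matches the certificate
        openssl x509 -noout -modulus -in domain.crt | openssl md5
        openssl rsa  -noout -modulus -in domain.key | openssl md5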

    Read the article

  • MX records set up

    - by andrei.troll
    I just want to know how other people set up their MX entries for mail accounts used with Google Apps. I work at a local web-hosting firm and we get a lot of tickets from clients who want these settings set up. I usually set them up something like this:

        example.com. 14400 IN MX 10 ALT1.ASPMX.L.GOOGLE.COM.
        example.com. 14400 IN MX 10 ASPMX4.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 15 ASPMX5.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 15 ASPMX2.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 30 ASPMX3.GOOGLEMAIL.COM.

    I see another (rival) firm sets up way more MX records - roughly 10-15 entries. Am I doing something wrong? Is more better in this case? Is there a secret that I'm not in on?
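
    For comparison, the record set Google has historically recommended for Google Apps is five entries with staggered priorities, so piling on extra records beyond something like this adds little (a sketch in the same zone-file style):

        example.com. 14400 IN MX 1  ASPMX.L.GOOGLE.COM.
        example.com. 14400 IN MX 5  ALT1.ASPMX.L.GOOGLE.COM.
        example.com. 14400 IN MX 5  ALT2.ASPMX.L.GOOGLE.COM.
        example.com. 14400 IN MX 10 ASPMX2.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 10 ASPMX3.GOOGLEMAIL.COM.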

    Read the article

  • How should I deploy my JVM-based web application on ubuntu?

    - by Pieter Breed
    I've developed a web application using Clojure/Compojure (JVM-based), and while developing I tested it using embedded Jetty running on 0.0.0.0:8080. I would now like to deploy it to run on port 80 on Ubuntu. I do dynamic virtual hosting, so any request for any host that arrives on port 80 should be handled by my application. The issues that worry me are: (1) I can still run it embedded, but I'm worried about running my app as root (needed for binding to port 80), and I'm not sure I can 'give up root' from within the JVM - do I need to be concerned about this? (2) Serving web applications is a known problem and I should be using known solutions (Jetty or Tomcat), but Tomcat especially seems very heavyweight. Besides, I only have one application, which listens on /* and does routing internally (with Compojure/Ring); what I'm trying to say is that Tomcat by default assigns WARs to subfolders, which I don't want. So basically what I need is some very safe way of binding to port 80 on Ubuntu that can, with minimal interference, send all requests to my app. Any ideas?
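
    One common pattern here is to keep Jetty embedded on an unprivileged port and let iptables forward port 80 to it, so the JVM never runs as root; a minimal sketch:

        # Run once as root: redirect inbound port 80 to the JVM listening on 8080
        iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
        # Optionally cover connections made from the box itself
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8080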

    Read the article

  • While using an NTFS SMB share for Mac users, do symbolic links and extended attributes work?

    - by scape
    We have a majority of Mac users, but we'd rather support their file sharing using a Windows server with an NTFS drive, or at least a Linux server with ext3. We've had trouble, much trouble, with the OS X Server software and after all these years are now looking to abandon it. What's mostly holding us back is the fact that the Mac users very often rely on symbolic links and other special features of an HFS+ partition. The shared locations are mostly primary storage, not just archive storage. While there is an option to create symbolic links under NTFS, I'm curious whether there is anything I need to look out for if I were to move the files from the HFS+ partition over to a new partition hosted on a Windows server, and in addition, how well creating a symbolic link from a Mac might work. I am also worried that Windows backup software might ruin these special symlinks, and about how placing permissions on sub-folders will work. Alternatively I could remotely back up the files using a Mac and BRU; nonetheless I still want to get away from the Mac server for hosting the shares.
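
    A quick way to answer the symlink and extended-attribute questions empirically is to test from a Mac against the candidate share before migrating; a sketch, with the mount point and file names as placeholders:

        # With the SMB share mounted at /Volumes/share, from a Mac client:
        ln -s target.txt /Volumes/share/link.txt                      # does symlink creation succeed?
        xattr -w com.example.test hello /Volumes/share/target.txt    # write an extended attribute
        xattr -l /Volumes/share/target.txt                           # ...and read it back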

    Read the article

  • Disk usage on IIS with PHP5: performance problems

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about the speed of loading a Joomla site. The setup is IIS 6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I've tested the loading time and indeed, he was right: 7 or 8 seconds to load, when usually this can be accomplished in 2. Seeing these results, I started by checking CPU and RAM. Everything normal: 2 GB of RAM free, 3%-8% CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the 'PhysicalDisk' counters quite high (above 10), especially the read queues. I've used Process Explorer to see which processes have the highest deltas, but everything seemed normal. As the problem is specifically related to PHP pages, I've checked specific IIS counters - actual connections, number of CGI requests, and number of ISAPI requests:

        CGI         -> 3 to 7
        ISAPI       -> 5 to 9
        Connections -> 90 to 120 (which appears at the top of the graph)

    More than a solution (I know this is hard to find), I would like to know if you have a specific methodology for facing this kind of problem. Thanks a lot, as always.
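
    The disk counters can also be watched from the command line while reloading the slow Joomla page, which makes the correlation easier to see; a sketch using the built-in typeperf (the counter name is assumed to be present on Windows 2003):

        :: Sample the read queue every 5 seconds from cmd.exe on the server
        typeperf "\PhysicalDisk(_Total)\Avg. Disk Read Queue Length" -si 5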

    Read the article

  • Servers / RAM for a social network - how many?

    - by Marty
    I am launching my social network soon and looking into hosting. The question I am stuck on is: do I need separate servers for web vs. database vs. image handling, since there is photo sharing? Or does one server handle it all? Also, is more RAM better? If I get 50 GB of RAM, is that better than having 8 GB? EDIT: It is PHP (CodeIgniter) and MySQL for now (switching to a NoSQL DB later if demand calls for it). I will be using memcache as well. Concept-wise it is similar to Yelp, so geographically based with lots of user content and image sharing, plus live feeds and privacy levels. The user plan is an open question; without testing the demand for this I can't give a number. But the concept is unique - no one out there has the set of features I am releasing - so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes beyond that, then I will upgrade.

    Read the article

  • IIS not responding with SSL Server Hello

    - by Damien_The_Unbeliever
    I'm having difficulty getting a particular IIS machine to "do" SSL. This is a test environment (one of many) which we've set up "the same" as we have many times previously, but it just doesn't seem to be working. The server is Windows Server 2003 (Version 5.2 (Build 3790.srv03_sp2_gdr.100216-1301 : Service Pack 2)). IIS is hosting 4 sites (including the default site), but only one site is configured for SSL. We're using the same SSL certificate we use on all of our other servers (it's a wildcard cert). (Don't think this is relevant, but including anyway:) we've configured the site to require SSL; it has a subdirectory that doesn't require SSL but has an asp page that redirects into SSL, and the 403;4 error page for the site is hooked up to this asp page (this is how we do our non-HTTPS into HTTPS transition). Using Microsoft Network Monitor (3.3), I've just run a session against a server where SSL is working. It can pull apart the SSL exchange as the following messages:

        SSL: Client Hello
        SSL: Server Hello. Certificate. Server Hello Done
        SSL: Client Key Exchange. Change Cipher Spec. Encrypted Handshake Message.
        SSL: Change Cipher Spec. Encrypted Handshake Message

    However, on our problem server, I only see:

        SSL: Client Hello.

    The next packet from the server (from port 443, so it's listening and responding there) contains only 60 bytes, and just seems to have the TCP headers and not much else (but I'm by no means an expert at deciphering these things). So, where do I look next? Or what additional information do I need to add to this question, and where do I find it?
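
    openssl can reproduce the handshake outside the browser and show exactly where it stops, which complements the Network Monitor trace; a sketch (the hostname is a placeholder):

        # Attempt a full TLS/SSL handshake against the problem server
        openssl s_client -connect problem-server:443
        # A working server prints its certificate chain; output that stalls right
        # after connecting mirrors the missing Server Hello seen in the capture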

    Read the article

  • iptables blocking access to most hosts, but some accesses are still being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various log entries in access_log, e.g.:

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match-state rule is there. So, 2 questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these 3 examples slipping through?
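
    The per-rule packet counters answer the "how are these slipping through" part directly, since they show which rule each connection actually matched; a sketch:

        # List both chains with packet/byte counters and rule numbers
        iptables -L INPUT -v -n --line-numbers
        iptables -L webtest -v -n --line-numbers
        # A climbing counter on the INPUT ESTABLISHED,RELATED ACCEPT (or on an
        # unexpected ACCEPT in webtest) points at the path these requests take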

    Read the article

  • Problems forwarding port 3306 on iptables with CentOS

    - by BoDiE2003
    I'm trying to add a forward to the MySQL server at 200.58.126.52 to allow access from 200.58.125.39, and I'm using the following rules (it's the whole iptables config of the VPS from my hosting). I can connect locally, on the server that holds the MySQL service, as localhost, but not from outside. Can someone check whether the following rules are fine? Thank you.

        # Firewall configuration written by system-config-securitylevel
        # Manual customization of this file is not recommended.
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :RH-Firewall-1-INPUT - [0:0]
        -A INPUT -j RH-Firewall-1-INPUT
        -A FORWARD -j RH-Firewall-1-INPUT
        -A RH-Firewall-1-INPUT -i lo -j ACCEPT
        -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
        -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
        -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
        -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
        -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp -s 200.58.125.39 --dport 3306 -j ACCEPT
        -A INPUT -p tcp -s 200.58.125.39 --sport 1024:65535 -d localhost --dport 3306 -m state --state NEW,ESTABLISHED -j ACCEPT
        -A OUTPUT -p tcp -s localhost --sport 3306 -d 200.58.125.39 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        COMMIT

    And this is the output of the connection attempt:

        [root@qwhosti /home/qwhosti/public_html/admin/config] # mysql -u user_db -p -h 200.58.126.52
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '200.58.126.52' (113)
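
    One detail stands out: iptables evaluates rules in order, and in this listing the port-3306 ACCEPT sits after the chain's final REJECT, so it can never match (the (113) error is consistent with hitting icmp-host-prohibited). A hedged fix is to insert it above the REJECT instead; position 1 is blunt but safely ahead of it:

        # Insert the MySQL ACCEPT at the top of the chain rather than after the REJECT
        iptables -I RH-Firewall-1-INPUT 1 -m state --state NEW -m tcp -p tcp \
            -s 200.58.125.39 --dport 3306 -j ACCEPT
        service iptables save   # persist on CentOS, assuming the stock init scripts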

    Read the article

  • Why did I lose access to the mailboxes on my old web/mail host after changing to a new one, despite keeping the old MX values?

    - by LaserBeak
    So I changed the NS records at the registrar to point at the new webhost's DNS servers and edited the SOA record there, deleting the new host's default MX records and instead putting in the ones for the old web/mail host. The website A record is, however, pointing at the new webhost's servers, and the site comes up fine. But none of this should cause me to lose access to mailboxes on the old host's mail server, right? I log into the control panel on the old host: all the mailboxes are there, all the passwords are fine, but I can't log in using either webmail or POP3 - it says incorrect login/password. I even created a new mailbox and password for it, but it would not let me log in. For what it's worth, I did not change or delete the 'A' records in the old webhost's zone file, since I am not hosting the site with them anymore, and the NS records are pointing to the other host's DNS servers/zone file, so that shouldn't matter, right? The old host's mail server is also not simply down: through the control panel I set up a mail forward for one of the existing inboxes, and when sending mail to it, it receives and forwards it fine. From this I can deduce that I have correctly entered the old host's MX records into the zone file hosted on the new host's DNS, and mail is being sent to the old host's mail server(s) and successfully forwarded by it. But why can't I log into those accounts/inboxes anymore?
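
    Comparing what each provider's DNS hands out helps narrow down whether the mail client is now resolving the mail host differently; a sketch with placeholder names (dig is assumed available):

        # What the public/new DNS returns for mail
        dig +short MX yourdomain.com
        dig +short A mail.yourdomain.com
        # What the old host's nameserver would have returned
        dig +short A mail.yourdomain.com @ns1.oldhost.example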

    Read the article

  • Write permissions on uploaded files - Linux, Apache, PHP

    - by letseatfood
    I am working on a PHP script that transfers files using FTP functions. It has always worked on my production server (which is a hosting service). The development server I have just set up (I am a novice at servers) is Debian Lenny with Apache2, PHP5, and MySQL5. The file transfer works correctly, but once the file has been written to the server, it has permissions of 600. This makes it impossible for me to view the file (a JPEG) in the web browser, as permission is denied. I have scoured the internet and even broken my server installation and reinstalled it trying to figure this out (which has been fun, nonetheless!). I know it is unwise to set 777 permissions on publicly accessible files, but even that will not solve the problem. The only thing that works is to chmod 777 thefile.jpg after it has been transferred, which is not a working solution. I tried changing the owner of my site files to www-data per this post, but that also does not work. My user is mike, and it still does not work whether the owner of the files is mike or root. Would somebody point me in the right direction? Thanks! And, of course, let me know if I can clarify anything.
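
    Since the files arrive with mode 600, one server-side workaround is to normalize ownership and modes after upload (PHP's ftp_chmod() right after the transfer is the in-script equivalent); a sketch with an assumed upload path:

        # Make Apache's user the owner and the files world-readable (dirs need +x)
        chown -R www-data:www-data /var/www/uploads
        find /var/www/uploads -type f -exec chmod 644 {} \;
        find /var/www/uploads -type d -exec chmod 755 {} \;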

    Read the article

  • Solution to easily share large files with non-tech-savvy users?

    - by Tim
    Hey all, we've got a server set up at work which we'd like to use to exchange large files with known clients easily. We're looking into software to facilitate this, but somehow typing "large file hosting" into Google gives questionable results.. ;) We've come up with the following requirements, and I hope any of you can point us in the direction of a solution that offers this functionality, or is malleable to our needs. Synchronization/revision management is of no concern; it's mostly single large (up to 1+ GB) file uploads and downloads we'll need. We'd like the downloads to expire and be removed after a certain number of days/downloads, to limit the amount of cleanup we'd have to do. The data files exchanged sometimes hold confidential information, so the URLs generated should be random and not publicly visible. Our users are of the less technically savvy variety, so a simple web form would be best over a desktop client (because we also have to support a mix of operating systems). As for use of the system, we'd either like to send out generated random URLs for them to upload their files, or have an easy way to manage and expire users. It must work on a Linux (Ubuntu) server (so nothing .Net-related, please). Does anyone know of software that fits the above criteria? We've already seen a few instances of this within the scientific community, but nothing we could use directly.. Best regards, Tim

    Read the article

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Hi everyone. I am learning to set up a Dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove it from my ISP's servers, while making it accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with Dovecot version 1 and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself. I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style 'new' folders on the virtual machine, and Dovecot then successfully sees these messages. I have two questions: 1) I had to create the new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get Dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my Dovecot password file), or is it expected that I write a bash script to automate that task? 2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations for this approach are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administer a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
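
    For question 1, Dovecot 2.x can generally create the Maildir structure itself on a user's first login, provided mail_location points somewhere it may write; a hedged dovecot.conf sketch (the path and the per-user home layout are assumptions):

        # /etc/dovecot/conf.d/10-mail.conf
        mail_location = maildir:~/Maildir
        # With a writable home per virtual user, new/cur/tmp are created on the
        # first IMAP login, so no per-account mkdir script should be needed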

    Read the article

  • HTTP request hangs for exactly 150 seconds, then gives incomplete response. How do I find out why?

    - by Nathan
    I am hosting a WordPress blog and having a strange problem. When I connect to the server (http://71.65.199.125/ at the time of this writing), it displays the title correctly and half of a download bar, indicating it has received some of the page; then it hangs for exactly 150 seconds (timed it twice), then it sends the rest of the page, but without the stylesheet. After that it hangs indefinitely, continuing to say "connecting..." without making any progress. If you have any clues as to what might be happening, or how I could print debug logs from PHP or something to see what it is looking for during that hang time, that would probably help. Recent changes I have made: switched WordPress themes (however, I did see it work once with the new theme); moved the server to another building, with an identical ISP and Linksys router forwarding setup; added a favicon.gif file to /var/www, but without linking to it from any of the WordPress pages. I have also had an unanticipated power interruption. System info: Ubuntu 9.04, Apache2, PHP 5, WordPress 2.9.2. Thank you
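
    curl's timing variables split the delay into DNS, connect, and first-byte phases, which helps pin down where a fixed 150-second stall sits; a sketch against the IP from the question:

        curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://71.65.199.125/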

    Read the article

  • Git Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git projects (the same server running Apache); two subsites exist, test.FQDN.com and live.FQDN.com. What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits done to the new feature branch would deploy to test.FQDN.com. Once the features have been tested and then merged into the master branch, it would deploy to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the "git checkout -f" command to deploy to the test.FQDN.com site; however, that only pulls the master branch and not the new feature branch (a sketch of a branch-aware hook follows below). I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
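
    A single post-receive hook can dispatch on the ref being pushed, which covers the feature-branch case the plain "git checkout -f" misses; a sketch, with the two work-tree paths as assumptions:

        #!/bin/sh
        # hooks/post-receive in the bare repo: deploy master to live, anything else to test
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            if [ "$branch" = "master" ]; then
                GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master
            else
                GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f "$branch"
            fi
        done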

    Read the article

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization. For a start, we plan to build and start the following VMs, all with Windows Server 2008 R2 x64:

        - Active Directory server
        - MS SQL Server 2008 R2
        - Automated build server
        - SharePoint 2010 server for hosting our public web site and our internal intranet for a few people (the load on this server is going to be quite insignificant)
        - 2x SharePoint 2007 development servers
        - 2x SharePoint 2010 development servers

    Beyond that we will need to build several SharePoint farms for testing purposes; these VMs will only be started when needed. The specs of the new rig are:

        - Dell R610 rack server
        - 2x Intel Xeon E5620
        - 48 GB RAM
        - 6x 146 GB SAS drives
        - Dell H700 RAID controller

    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16 GB RAM, 2x 500 GB SATA in RAID 1). But we are not sure about the RAID level for the new rig. Should we put the six 146 GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of RAID failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?
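
    On the capacity side the arithmetic is simple: with 6 x 146 GB drives, RAID 10 keeps half, (6 / 2) x 146 GB = 438 GB usable, while RAID 5 loses one drive's worth, (6 - 1) x 146 GB = 730 GB usable - roughly 292 GB more space in exchange for the weaker write performance and longer rebuilds.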

    Read the article

  • Setting up my domain at Whois.com with nameservers for a dedicated server with Kloxo

    - by BoDiE2003
    Hello, this is my first time on Server Fault; I'm a Stack Overflow user. I'm facing the following problem, and maybe it's me, but I can't find proper guides for setting up a domain and nameservers for a dedicated box. I have a domain at whois.com and a dedicated server at Reliable Hosting Services; the server has 5 IPs, and I know that I need 2 of them for the nameservers. Right now, my domain at whois.com is using the nsX.whois.com nameservers and has 2 child nameservers: ns1.mydomain.com and ns2.mydomain.com, pointing to 2 of the IPs from my server. What's next? I still cannot set that domain as my main server domain, since it says:

        To map an IP to a domain, the domain must ping to the same IP, otherwise, the domain will stop working. The domain you are trying to map this IP to, doesn't resolve back to the IP, and so it cannot be set as the default domain for the IP.

    Well, I'm stuck on those steps. What's next to get my nameservers working and my main domain assigned to my server? Thank you very much and happy new year!!
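
    dig can verify both halves of the setup: that the child nameservers (the glue) resolve to the server's IPs, and that the server actually answers for the domain; a sketch with mydomain.com standing in for the real name:

        # Do the registered child nameservers resolve to the dedicated server's IPs?
        dig +short A ns1.mydomain.com
        dig +short A ns2.mydomain.com
        # Does the box answer authoritatively for the domain itself?
        dig @ns1.mydomain.com mydomain.com A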

    Read the article

  • CentOS install / partitioning

    - by ServerSideX
    I'm using NOC-PS to remotely install CentOS 6.2 via KVM/IPMI. I'm going to install cPanel as well, and they recommend this layout:

        /boot (99MB)
        swap (2x server RAM)
        / (remainder)

    In the OS install profile within the NOC-PS software, it shows as this:

        part /boot --fstype ext2 --size 250
        part pv.01 --size 1 --grow
        volgroup vg pv.01
        logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root
        logvol /tmp --vgname=vg --size=1024 --fstype ext4 --fsoptions=discard,noatime --name=tmp
        logvol swap --vgname=vg --recommended --name=swap

    By the time the default partition setup was done installing CentOS, I got this:

        [root@server005 ~]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/vg-root   532G  907M  504G   1% /
        tmpfs                 7.8G     0  7.8G   0% /dev/shm
        /dev/sda1             243M   28M  202M  13% /boot
        /dev/mapper/vg-tmp   1008M   34M  924M   4% /tmp

        [root@server005 ~]# cat /etc/fstab
        #
        # /etc/fstab
        # Created by anaconda on Fri Dec 7 18:47:24 2012
        #
        # Accessible filesystems, by reference, are maintained under '/dev/disk'
        # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
        #
        /dev/mapper/vg-root   /          ext4    discard,noatime  1 1
        UUID=58b31aaf-5072-4fb1-a858-33bc316fa793  /boot  ext2  defaults  1 2
        /dev/mapper/vg-tmp    /tmp       ext4    discard,noatime  1 2
        /dev/mapper/vg-swap   swap       swap    defaults         0 0
        tmpfs                 /dev/shm   tmpfs   defaults         0 0
        devpts                /dev/pts   devpts  gid=5,mode=620   0 0
        sysfs                 /sys       sysfs   defaults         0 0
        proc                  /proc      proc    defaults         0 0

    My question is: how should the NOC-PS install profile look to get the recommended cPanel partitioning? The server has 16 GB RAM and dual 600 GB SAS drives, and will be used for cPanel shared hosting.
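
    A hedged sketch of what the profile could look like to match cPanel's recommendation (plain partitions instead of LVM; whether NOC-PS accepts this kickstart fragment unchanged is an assumption):

        part /boot --fstype ext2 --size 99
        part swap --size 32768                 # 2x the 16 GB of RAM
        part / --fstype ext4 --size 1 --grow   # remainder of the disk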

    Read the article

  • Wildcard redirect issue giving the error "This webpage has a redirect loop"

    - by david
    On my website I renamed the directory "vehicles-cars" to "vehicles-cars-for-sale", and set up a wildcard redirect from the old directory name to the new one in my web hosting cPanel account. Now every time I open pages from that directory, I get the error:

        This webpage has a redirect loop.

    The website is PHP. The problem is that lots of pages from the old directory are indexed in Google, and they are now duplicate content. If I redirect a single page it works perfectly, but there are lots of pages, so I need a wildcard redirect for the whole directory. I really need some advice on what to do about this problem. Here is the .htaccess code for the redirect (thanks):

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^example\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$
        RewriteRule ^vehicles\-cars\/?(.*)$ "http\:\/\/example\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    I have another wildcard redirect of a whole directory with the same code and it works perfectly. Here is that code from the same .htaccess file, which is the same as the above and works for its directory:

        RewriteCond %{HTTP_HOST} ^adsbuz\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.adsbuz\.com$
        RewriteRule ^autos\/?(.*)$ "http\:\/\/adsbuz\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    So I don't understand what's wrong with the code above. I really need some expert advice, thanks again.
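
    The loop most likely comes from the old name being a prefix of the new one: the redirected URL vehicles-cars-for-sale/... still matches ^vehicles\-cars\/?(.*)$ and is redirected again (the working adsbuz rule has no such overlap between "autos" and its target). A hedged fix is to exclude the new directory before the rule fires:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
        RewriteCond %{REQUEST_URI} !^/vehicles-cars-for-sale
        RewriteRule ^vehicles\-cars\/?(.*)$ http://example.com/vehicles-cars-for-sale/$1 [R=301,L]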

    Read the article
