Search Results

Search found 14168 results on 567 pages for 'virtual hosting'.


  • The concept of virtual host and DNS [migrated]

    - by Subhransu
    I have a dedicated server and a domain, mydomain.com (bought from a hosting company). I want to host a website from my dedicated server with the domain mydomain.com, i.e. when I enter mydomain.com in a browser it should point to the IP of the dedicated server (let's say X.X.X.X) and a particular folder inside it. I have the following queries. On the server: I know I need to edit some files (like the hosts or hostname file), but I do not know exactly which files to edit, and how do I add a site to sites-available and enable it in apache2? In the hosting company's control panel: which records do I add (A, CNAME, or another)? Where should I add the DNS records (in the dedicated-server section or the domain-name section)? How is this going to affect the behaviour of the domain? In short, the question is: how does a virtual host work, and how do I add the DNS records?
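
    A minimal sketch of the two pieces being asked about (paths and names are illustrative, Debian-style layout assumed): an A record in the hosting company's DNS panel pointing the name at the server's IP, and an Apache virtual host mapping that name to a folder.

        # In the hosting company's DNS panel (A records):
        #   mydomain.com.      A   X.X.X.X
        #   www.mydomain.com.  A   X.X.X.X   (or a CNAME to mydomain.com)

        # On the server: /etc/apache2/sites-available/mydomain.com
        # (Apache 2.2 also needs "NameVirtualHost *:80" once, globally)
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            DocumentRoot /var/www/mydomain.com/public
        </VirtualHost>

        # Enable the site and reload Apache:
        #   a2ensite mydomain.com && service apache2 reload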

    Read the article

  • Can you install Ubuntu Server in a Windows Virtual PC VM on Windows 7?

    - by Lance Fisher
    I am running Windows 7 64-bit. I've installed Windows Virtual PC and Windows XP Mode successfully. Next, I downloaded Ubuntu Server 9.04 32-bit. I created a new virtual machine with a dynamically expanding .vhd, loaded the Ubuntu .iso, and booted the machine. I successfully made it through the install, but when the machine reboots, I get a segmentation fault. Here is a screenshot. Has anyone successfully installed Ubuntu on Windows Virtual PC?
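
    One avenue worth checking, offered as an assumption based on similar reports rather than a verified fix for this exact setup: Ubuntu kernels of that era shipped paravirt-ops code known to crash under Virtual PC, and the commonly cited workaround is booting with the noreplace-paravirt kernel option. The kernel version below is illustrative.

        # At the GRUB menu, edit the kernel line and append the option:
        kernel /boot/vmlinuz-2.6.28-11-server root=/dev/sda1 ro noreplace-paravirt
        # To make it permanent, add it to the kernel line in /boot/grub/menu.lst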

    Read the article

  • Plesk hosting on MediaTemple DV

    - by David
    Hi there, we have a MediaTemple dedicated virtual server running Plesk. The problem we're having is that changing the permissions of files on the server to be writable by the server owner (apache) conflicts with the ability to upload and overwrite files via the FTP user. Here's an example: I upload a file as the user "serverftp", and that user owns the new file in the httpdocs folder. I then change the permissions of an image-upload folder to the apache user so that I can upload images via a PHP script. The serverftp user is then locked out of uploading to or changing that folder. Speaking to tech support didn't get very far, because there are some strange group permissions going on, and it would involve me adding every single domain FTP user to the pcantl group or something similar. I'm wondering how I can easily change things so that I don't have this problem anymore.
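
    The usual pattern for this class of problem, as a sketch only (the user and group names are assumptions; Plesk's actual groups may differ): put the FTP user and the web server in a shared group, make the upload folder group-writable, and set the setgid bit so new files inherit the group.

        # Assumed names: serverftp = FTP user, apache = web server user/group
        usermod -aG apache serverftp          # add FTP user to the web server's group
        chgrp -R apache httpdocs/uploads      # give the group ownership of the upload dir
        chmod -R g+w httpdocs/uploads         # make it group-writable
        chmod g+s httpdocs/uploads            # new files inherit the group (setgid)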

    Read the article

  • Pushing DNSSEC updates with offline keys

    - by eggyal
    In a non-professional capacity, I look after the DNS of some 18 domains: mostly personal/vanity domains for immediate family. I outsource the whole shebang to an inexpensive managed hosting provider with a web interface through which I manage the zones; since the provider also offers DNSSEC, I have successfully deployed that too.

    These domains are so unimportant that an attack targeted against them seems much less likely than a general compromise of my provider's systems, at which point the records of all their customers might be changed to misdirect traffic (perhaps with extremely long TTLs). DNSSEC could protect against such an attack, but only if the zone's private keys are not held by the hosting provider. So, I wonder: how can one keep DNSSEC private keys offline yet still transfer signed zones to an outsourced DNS host?

    The most obvious answer (to me, at least) is to run one's own shadow/hidden master (from which the provider can slave) and then copy offline-signed zonefiles to the master as required. The problem is that the only machine I (want to*) control is my personal laptop, which usually connects from a typical home ADSL (behind NAT, over a dynamically-assigned IP address). Having them slave from that (e.g. with a very long Expiry time on the zone for periods when my laptop is offline/unavailable) would not only require a Dynamic DNS record from which they can slave (if indeed they can slave from a named host rather than a static IP address), but would also involve me running a DNS server on my laptop and opening both it and my home network up to incoming zone-transfer requests: not ideal.

    I would prefer a much more push-oriented design, whereby my laptop initiates transfer of offline-signed zonefiles/updates to the provider's servers. I looked into whether nsupdate could fit the bill: documentation is a little sketchy, but my testing (with BIND 9.7) suggests it can indeed update DNSSEC zones, but only where the server holds the keys to perform the zone signing; I have not found a way to have it take an update including the relevant RRSIG/NSEC/etc. records and have the server accept them. Is this a supported use-case? If not, I suspect the only solutions which could fit the bill will involve non-DNS-based transfer of the zone updates, and I would welcome recommendations that are supported by (hopefully inexpensive) hosting providers: SFTP/SCP? rsync? RDBMS replication? Proprietary API?

    Finally, what would be the practical implications of such a setup? Key rotation is jumping out at me as an obvious difficulty, especially if my laptop is offline for extended periods. But the zones are extremely stable, so perhaps I could get away with long-lived ZSKs**...?

    * Whilst I could run a shadow/hidden master on e.g. an outsourced VPS, I dislike the overhead of having to secure / manage / monitor / maintain yet another system; not to mention the additional financial costs of doing so.

    ** Okay, this would enable a concerted attacker to replay outdated records, but the risk and impact of such are both tolerable in the case of these domains.
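
    For the offline-signing half of this, BIND's tools can sign a zone on a disconnected machine; the open question in the entry is only the push. A sketch under assumed names (key filenames and hosts are illustrative):

        # On the offline laptop, with KSK/ZSK key pairs kept locally:
        dnssec-signzone -o example.org -k Kexample.org.+008+11111.key \
            db.example.org Kexample.org.+008+22222.key
        # Produces db.example.org.signed, which then has to reach the
        # provider somehow -- e.g. over SCP, if they accept raw zonefiles:
        scp db.example.org.signed user@provider.example:zones/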

    Read the article

  • Invoke host's workspace switcher from inside VM

    - by Paul
    When I start a virtual machine (like VMware or VirtualBox) and set it full screen then, from the host OS (in this case Ubuntu), I can beautifully switch to it with the workspace switcher. So I switch to the VM like I switch to a virtual screen. But switching back, from the VM to the host's virtual screens, seems to be impossible, because by entering the VM I lose the host's workspace switcher. Is there a nifty workspace-switcher program that runs inside the VM and is able to switch workspaces of the host machine? Edit: in light of Frank Thomas' answer, can we configure VirtualBox (or VMware) to not send certain key combinations to the VM, but keep them on the host? Like Super+S. In that approach I would sadly have to miss the nice workspace-switcher icon in the guest OS, but that's OK if at least the keyboard trick works.
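
    On the VirtualBox side, one lever worth checking (presented as an assumption; the extradata key may vary between versions): disabling automatic keyboard capture, so host shortcuts such as the workspace-switch combination stay with the host unless the guest window explicitly captures the keyboard.

        # Disable automatic keyboard capture globally (extradata key assumed):
        VBoxManage setextradata global GUI/Input/AutoCapture false
        # Or per VM:
        VBoxManage setextradata "MyGuest" GUI/Input/AutoCapture false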

    Read the article

  • Does Windows notice when a VM is moved around?

    - by Martin
    I'm thinking of migrating a desktop machine (Windows XP) to a VM solution (VirtualBox or MS Virtual PC). The reason is that I need new hardware anyway and I don't want to (cannot properly) reinstall all the "business" apps on there. So my plan goes as follows: I'll pull an image of the machine and restore it to a virtual machine using Acronis Universal Restore or some other tool that can restore to dissimilar hardware. (The process is largely irrelevant to this question, I think.) Once I have this virtual machine properly running, I'll move it to a new PC. So the question now is: are there any caveats with respect to Windows (XP?) being installed in a VM and the VM being moved around between different host computers? Can anything break in the OS inside the VM? Will there be trouble with Windows activation?
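
    One practical mitigation, sketched under the assumption that XP activation is sensitive to hardware fingerprints such as the NIC's MAC address and the disk identity: pin those values in the VM configuration so they travel with the machine. The VM name and MAC below are illustrative.

        # VirtualBox: fix the MAC address so it survives moves between hosts
        VBoxManage modifyvm "WinXP-Desktop" --macaddress1 080027AABBCC
        # Keep the same .vhd/.vdi file rather than re-cloning it, so the
        # disk identity the guest sees does not change between hosts.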

    Read the article

  • Help looking before I leap! I need expert guidance...

    - by Ellen Reddick
    27" iMac running win7 under bootcamp (slick! ). I have Access 2003 program with files linked through ODBC used by 4 installations (all with Access 2003 installed). I want to buy Access 2010 and try it under virtual PC (under Bootcamp). Will it work (since I have to install the ODBC drivers)? If I decide after this trial that I like what it does, can I then install it under the Windows 7 bootcamp partition (with or without uninstalling the virtual PC) without using up the 2nd allowed installation? Also, I see that MS allows an Office Pro 2010 trial download good for 60 days. Would this work in Windows 7 Virtual PC and would it be a better way to go, followed by a legitimate purchase of Access 2010 for the Windows 7? This is not an Access programming question--I realize there may be some tweaks necessary in the program to run it under 2010 and I can handle that part.

    Read the article

  • Using m0n0wall in a VM for testing.

    - by tombull89
    I'd like to use m0n0wall inside a (VirtualBox) VM to play around with and see what it can do. Ultimately the goal is to have a number of virtual machines connected to an internal virtual network which goes through the m0n0wall VM, with the m0n0wall box connected to the internet through NAT or a bridged adapter on my host machine. I can find out how to set the LAN and WAN addresses, but this seems to cover only using m0n0wall as a router instead of attached to another router. Let's see if I can diagram this: [Virtual Machine] --- Internal (VM-only) Network --- [m0n0wall VM] --- Bridged/NAT Adapter --- ["real" router] --- Internet. Can anybody suggest how I should do this, or is m0n0wall not meant to be used like this?
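
    The topology in the diagram maps directly onto VirtualBox NIC modes; a sketch (VM names, the internal-network name, and the host interface are illustrative):

        # m0n0wall VM: NIC1 = LAN on a VM-only internal network, NIC2 = WAN via the host
        VBoxManage modifyvm "m0n0wall" --nic1 intnet --intnet1 "labnet"
        VBoxManage modifyvm "m0n0wall" --nic2 bridged --bridgeadapter2 eth0   # or: --nic2 nat
        # Client VMs: a single NIC on the same internal network
        VBoxManage modifyvm "client1" --nic1 intnet --intnet1 "labnet"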

    Read the article

  • Hoster not fulfilling contract: how to get money back?

    - by plua
    For several years we, a small web-design company, have rented a dedicated server at a large hosting provider. They had several support levels. When we signed up, we had very limited in-house knowledge about server maintenance and were very worried about the security of our server. We therefore took one of the more expensive support packages. An important aspect of this were these claims: [PROVIDER] verifies the availability of the latest security updates and sends you a notification to see if you are interested to have them installed; [PROVIDER] verifies the availability of the latest supported software updates and sends you a notification to see if you are interested to have them installed. These items were clearly stated on their website as being part of the advantage of this package. With not enough knowledge about installing and updating such software on a Linux server, we decided to go for this package. We paid a premium of $50 per month over the maintenance package next in line ($100 vs $50). Over the years, we paid several thousand dollars for this service.

    Then came the moment that I learned more and more about server management, and I found out, step by step, that our server was horrendously outdated! We had an OS that was hardly updated, our anti-virus was not working because it needed certain more recent packages on the OS, and in general there were a whole bunch of security vulnerabilities and missing fixes. Shocked, I wrote to the provider. It turns out they had decided unilaterally that they would not send out any notifications to clients because clients would get too many e-mails. This is a quote from their explanation: [...] We have decided not to spam its clients with OS and security updates and only install them whenever asked by the client.

    I was shocked! They had never mentioned that they would drop this service, and in fact the claims about updating their clients through e-mail were still on their website, after they apparently stopped doing this years ago! Upon finding this out, I requested they refund all that we had paid as a premium over the other package, and make it available as future credit with their own company. I thought this was a very reasonable request. However, they said they would only go back one year and provide credit for that one year. Mails went back and forth, but they were not willing to give credit for the whole period, which I felt I was entitled to. So ultimately I left the hosting company, and filed a complaint with the BBB a while ago.

    Now, I am not the kind of person who runs to a lawyer over any minor thing, but in this case I am really considering taking action. I have been paying for years for a service I did not receive (the premium package had a few other pluses, but we took it primarily for these two points, and I can prove that we did not use the other benefits). For our small company the hosting costs were a very large part of our budget, and I feel it is very unfair how this large provider just does not care about not fulfilling its obligations. So my question is: what action should I take? Is a lawyer the only next step, or are there other suggestions? And am I right to claim this money, or are they right that there is some sort of statute of limitations on such claims? Any feedback is appreciated.

    Read the article

  • Hyper-V share a folder between host and instance

    - by Fly_Trap
    I have a Hyper-V server and several VMs (virtual machines). All the VMs are connected to an external network. I have tried to share a folder on the host and connect via a VM; I can do this, but I'm prompted for a user name and password (as you would expect). I do not want to enable "Everyone" group permissions, as the physical host server is on a network with other servers. I have created a new virtual internal network in Hyper-V and given its adapter a static IP of 33.0.0.100. I have added the virtual adapter to one of the VMs and set its IP to 33.0.0.2 (as advised here). Again this seems to work, but I'm still prompted for a user name and password. Am I on the right lines here? I just want to share a directory from the host to the VMs without exposing the share to other servers on the network.
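
    A sketch of the usual pattern (account, password, and share names are illustrative): create a dedicated local account on the host, grant only it access to the share, and cache its credentials inside the guest so the prompt disappears without opening the share to Everyone.

        REM On the host: local user + restricted share
        net user vmshare S0mePassw0rd /add
        net share VMData=C:\VMData /GRANT:vmshare,FULL
        icacls C:\VMData /grant vmshare:(OI)(CI)M

        REM In the guest: cache the credential, then map the share
        cmdkey /add:33.0.0.100 /user:vmshare /pass:S0mePassw0rd
        net use Z: \\33.0.0.100\VMData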

    Read the article

  • Hosting ESXi (free edition) [closed]

    - by Peter Adss
    We currently have one physical server running the free version of VMware ESXi that virtualizes a Windows SBS 2003 server and a Citrix server. We need to colocate the server and are investigating our options. Are there places that will host our virtual servers and save us the expense of shipping the physical server out for colocation? In my mind, we'd copy the VMs to disk and ship them out. Does the fact that we're using the free version of ESXi create a barrier to this idea? Thanks for the help; I realize this may be a stupid question.
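
    On the "copy the VMs to disk" part: VMware's free ovftool can export a VM from one ESXi host into a portable OVA and import it on another, which a hosting provider's ESXi host could consume. A sketch (hostnames and VM name assumed):

        # Export a VM from the current host into a single portable .ova file
        ovftool vi://root@current-esxi-host/SBS2003 ./SBS2003.ova
        # Import it on the provider's ESXi host
        ovftool ./SBS2003.ova vi://root@new-esxi-host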

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++, and how they are resolved (or found, or matched, I don't know what the terminology is; we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>
        using namespace std;

        class A {
        public:
            void f() {
                // do nothing
            }
        };

        class B: public A {
        public:
            void f() {
                // do nothing
            }
        };

        int main() {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real 0m25.834s   user 0m25.742s   sys 0m0.000s
        [felix@the-machine C]$ time ./virtual
        real 0m24.630s   user 0m24.472s   sys 0m0.003s
        [felix@the-machine C]$ time ./normal
        real 0m25.860s   user 0m25.735s   sys 0m0.007s
        [felix@the-machine C]$ time ./virtual
        real 0m24.514s   user 0m24.475s   sys 0m0.000s
        [felix@the-machine C]$ time ./normal
        real 0m26.022s   user 0m25.795s   sys 0m0.013s
        [felix@the-machine C]$ time ./virtual
        real 0m24.503s   user 0m24.468s   sys 0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests. Arch Linux with gcc 4.5.0. Compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings, either.
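
    A note for readers comparing numbers like these: both binaries above are built without optimisation, so even the non-virtual call pays full call overhead. A quick sanity check, using only the commands already shown plus an optimisation flag:

        # Rebuild both variants with optimisation and compare again
        g++ -O2 test.cpp -o normal     # non-virtual f(): the call is typically inlined
        g++ -O2 test.cpp -o virtual    # virtual f(): dispatch may or may not be devirtualised
        time ./normal && time ./virtual
        # With -O2 the empty non-virtual call usually optimises to nothing,
        # at which point the loop itself may be removed entirely.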

    Read the article

  • PHP-FPM and APC for shared hosting?

    - by Tiffany Walker
    We are looking into finding a way to get APC to create only one cache per account/site. This can be done with FastCGI (last updated 2006…), but with FCGID, APC has to create multiple caches for the multiple processes run by the same account. To get around this problem, we have been looking into PHP-FPM. The PHP process manager allows multiple PHP processes to share a single APC cache. But from what I have read (I hope I'm wrong), even if you create a pool per process, all sites across all pools will share the same APC cache. This brings us back to the same problem as with shared Memcached: it's not secure! On PHP-FPM's site I read that you can chroot PHP-FPM pools and define a specific UID and GID per pool… if this is the case, then shouldn't APC have to use this user and not have access to other pools' caches? An article here (from 2011) suggests that you would need to run one process per pool, creating multiple launchers on different ports and different config files with one pool per config file: http://groups.drupal.org/node/198168 Is this still necessary? If so, what would be the impact of running, say, 800 PHP-FPM processes? Would it be mainly memory? If so, how can I work out what the memory impact would be? I guess it would be better to run PHP-FPM 800 times than to have accounts creating multiple APC caches for a single site? If on average an account creates a 50MB cache and creates 3 caches per account, that makes 150MB per account, which across 800 accounts makes 120GB… However, if each account uses on average only 50MB, that would make 40GB. We will have at least 128GB of RAM on our next server, so 40GB is acceptable if running 800 × PHP-FPM does not create an overhead of more than 20GB! What do you think: is PHP-FPM the best way to provide a secure APC cache on shared hosting with a server that has a decent amount of memory? Or should I be looking at another system? Thanks!
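
    For reference, a minimal sketch of one isolated pool per account (the file path, names, and APC sizing are assumptions): each pool gets its own UID/GID and socket, so PHP code runs as the account's user rather than a shared one.

        ; /etc/php-fpm.d/account1.conf (path assumed)
        [account1]
        user = account1
        group = account1
        listen = /var/run/php-fpm-account1.sock
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        ; per-pool PHP settings, e.g. capping the APC segment (illustrative)
        php_admin_value[apc.shm_size] = 64M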

    Read the article

  • Local msmtp and OVH hosting

    - by klez
    I have my personal email hosted on OVH (personal hosting plan) and I'm not able to send mail using msmtp. Here's a typical session:

        ignoring system configuration file /etc/msmtprc: File o directory non esistente [No such file or directory]
        loaded user configuration file /home/klez/.msmtprc
        using account default from /home/klez/.msmtprc
        host = ssl0.ovh.net
        port = 465
        timeout = off
        protocol = smtp
        domain = localhost
        auth = choose
        user = federicoculloca%xxxxxxx
        password = *
        ntlmdomain = (not set)
        tls = on
        tls_starttls = off
        tls_trust_file = (not set)
        tls_crl_file = (not set)
        tls_fingerprint = (not set)
        tls_key_file = (not set)
        tls_cert_file = (not set)
        tls_certcheck = off
        tls_force_sslv3 = off
        tls_min_dh_prime_bits = (not set)
        tls_priorities = (not set)
        auto_from = off
        maildomain = (not set)
        from = federicoculloca@xxxxxxxx
        dsn_notify = (not set)
        dsn_return = (not set)
        keepbcc = off
        logfile = (not set)
        syslog = (not set)
        reading recipients from the command line
        TLS certificate information:
            Owner: Common Name: ssl0.ovh.net
                   Organizational unit: Domain Control Validated
            Issuer: Common Name: OVH Secure Certification Authority
                    Organization: OVH SAS
                    Organizational unit: Low Assurance
                    Country: FR
            Validity: Activation time: lun 31 gen 2011 01:00:00 CET [Mon 31 Jan 2011]
                      Expiration time: mer 15 feb 2012 00:59:59 CET [Wed 15 Feb 2012]
            Fingerprints: SHA1: F9:DC:41:F9:A2:38:51:9B:56:E4:98:E6:CD:81:31:42:E6:0E:26:6D
                          MD5: FC:EC:F3:8F:28:E4:7E:28:99:89:E6:BB:C9:DF:71:CE
        <-- 220 ns0.ovh.net ssl0.ovh.net. You connect to mail427.ha.ovh.net ESMTP
        --> EHLO localhost
        <-- 250-ssl0.ovh.net. You connect to mail427.ha.ovh.net
        <-- 250-AUTH LOGIN PLAIN
        <-- 250-AUTH=LOGIN PLAIN
        <-- 250-PIPELINING
        <-- 250-8BITMIME
        <-- 250 SIZE 109000000
        --> AUTH PLAIN xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
        <-- 235 ok, go ahead (#2.0.0)
        --> MAIL FROM:<federicoculloca@xxxxx>
        --> RCPT TO:<[email protected]>
        --> DATA
        <-- 250 ok
        <-- 250 ok
        <-- 354 go ahead
        --> hello world
        --> .
        <-- 554 mail server permanently rejected message (#5.3.0)

    And my configuration:

        # ~/.msmtp
        # Mostly from Peter Garrett's examples
        # https://lists.ubuntu.com/archives/ubuntu-users/2007-September/122698.html
        # Accounts from Scott Robbins' `A Quick Guide to Mutt'
        # http://home.nyc.rr.com/computertaijutsu/mutt.html
        account xxxxx
        host ssl0.ovh.net
        from federicoculloca@xxxxxx
        auth on
        user federicoculloca%xxxxxx
        password xxxxxx
        tls on
        tls_certcheck off
        tls_starttls off

    Any idea?

    Read the article

  • Best Firewall product for hosting/housing environment?

    - by Raffael Luthiger
    I am searching for a firewall product (appliance or software) for a hosting/housing environment. The biggest problem is that the rules get very complex as more customers sit behind the firewall. Some have only one server, others have a whole subnet. Some need NAT, some a VPN endpoint. Some customers want to allow only HTTP, others SSH as well. So the device needs to support VLANs, and it should be possible to group the rules per customer. Speed is another important point, as is being able to manage redundant devices easily. I am searching for something that doesn't have all the extras like spam filtering etc. I searched a lot on the net, but the products either had all those extras as well (and with them an overloaded configuration interface) or they missed some of the features I need (e.g. VLANs). The VPN endpoint is not an important criterion; we were thinking about a separate machine for that.

    Read the article

  • How do I effectively use WinSCP on my GoDaddy Dedicated Hosting

    - by Scott
    After being told that a Virtual Private Server would not fit the scope of my project, I have timidly entered the world of dedicated hosting, which is forcing me to learn the basics of being a Linux server admin. GoDaddy has a master account for the server. When you use SSH, they want you to use "su" to switch to the root user. So far I have been able to do everything I needed via the command line as this root user. However, now I need to upload files to my server. I'm used to using WinSCP to upload files. I can use my general server account to view the files, but when I try to drag or create files it says that I cannot because I do not have permission to do so. I have researched the WinSCP documentation and it seems that this "su" function is beyond the scope of the program. How am I to grant myself access to upload these files using SSH? Should I create a user with the proper permissions? I'm happy to do this, but thus far I have not been able to make sense of what I have found online. I'm going to try to move forward, but any help and/or insight is appreciated.
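
    One approach that fits WinSCP, hedged (the sftp-server binary path differs by distro, and the trick requires sudo to work without a password or TTY): keep logging in as the unprivileged user, but tell WinSCP to start the remote SFTP server through sudo.

        # On the server, allow the login user to run the SFTP server via sudo
        # without a password (add via visudo; binary path assumed):
        youruser ALL=(ALL) NOPASSWD: /usr/libexec/openssh/sftp-server
        # Some distros also need: Defaults:youruser !requiretty

        # In WinSCP: use the SFTP protocol and set
        #   Advanced > Environment > SFTP > SFTP server:
        #   sudo /usr/libexec/openssh/sftp-server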

    Read the article

  • Moving Farm to co-location hosting - network settings requirements

    - by Saariko
    I am moving my farm (2 Dell R620s) to a co-location hosting service, and I am trying to figure out a secure way to set up my network. The requirements are: VM1 is the working host and includes ESXi 5.1, vSphere, and 4 clients (all Windows 2008 R2). VM2 has ESXi 5.1 installed, and a single machine with Veeam Backup and Copy 6.5, keeping a copy of VM1's clients on VM2's internal storage (this solution is due to a very small budget: in case of failure on host 1, I can redirect the IPs). Only 2 VM clients require a network address and access from the WAN; the ISP provides an IP range for them (with gateway and DNS). I need connection to the iDRACs from my office (with the option to create a VPN-SSL tunnel), connection to the vSphere appliances, and I want to be able to RDP to the VM clients. The current configuration is that each host has the dedicated iDRAC NIC connected, and another (NIC #1) connected with a static IP on 192.168.3.x. The iDRACs have static IPs from the same network range (192.168.3.x). It will look something like this: My thoughts: on NIC #2 of both hosts I will connect a crossed cable, and I will give each VM client that needs internet access a secondary VM network with the assigned IP from the ISP, open only to the web. My questions: Should I give (external) IPs to the machines that do NOT require WAN access? I can't see a way to RDP to them directly if not. Should I use the crossed cable, or just plug NIC #2 into the switch? Will this setup even work? What do I need to verify? What virtual NICs and/or switches should I create on the hosts?
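
    On the last question, a sketch of the vSwitch layout in ESXi 5.x terms (switch, port-group, and uplink names are illustrative): one standard vSwitch per role, with the WAN-facing port group attached only to the VMs holding ISP addresses.

        # management/LAN side (uplink = NIC #1, the 192.168.3.x network)
        esxcli network vswitch standard add --vswitch-name=vSwitchLAN
        esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitchLAN
        # WAN side for the two public-facing VMs (uplink = the ISP-facing NIC)
        esxcli network vswitch standard add --vswitch-name=vSwitchWAN
        esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchWAN
        esxcli network vswitch standard portgroup add --portgroup-name=Public --vswitch-name=vSwitchWAN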

    Read the article

  • Ruby Passenger + nginx or lighttpd + FastCGI for shared hosting

    - by devnull
    I have set up Passenger + nginx and I plan to offer free non-commercial hosting (or, in fact, on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger: for each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad as, in theory, I could allow a user to run just one app on his webspace, but even in this case I need to create a new server entry in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behaviour I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or Apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance. Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server. Wished behaviour with the new setup: the user can add a new folder with a new app and it runs on lighttpd + FastCGI without any extra configuration of the web server.

    Read the article

  • Apache2 - Hosting two sites on the same domain with different ports

    - by user1026361
    I am hosting a staging site (test.mydomain.com) which currently works well on port 80 for two sites (test.mydomain.com and test.FRmydomain.com). I am working on a new backend and I would like to deploy a third site on this server for testing. My hope is that it will live at test.mydomain.com:4204. I've got some experience with Apache and quickly added the statements:

        Listen 4204
        NameVirtualHost *:4204

    and created a new config for my site. What I imagine are the relevant parts of my config:

        <VirtualHost *:4204>
            ServerAdmin [email protected]
            ServerName test.mydomain.com:4204

    However, the site is not publicly available, by name or IP. If I curl localhost:4204 from the server, I get the expected page content. At this point, I'm a bit at a loss on how to go forward. It seems like my config is correct but not available to be served. Am I better off defining a proxy definition so that, for instance, test.mydomain.com/4204 proxies to my localhost server, or is there a way to make the site available via the internet? EDIT: I have added an iptables rule after further Googling with the command:

        iptables -I INPUT -p tcp --dport 4204 -j ACCEPT

    I can see Apache listening on 4204 and the rule is definitely in place, but I still can't reach the site.
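
    A quick way to narrow a case like this down (commands only; nothing site-specific assumed): confirm Apache is bound to the port on all interfaces, then test from outside the box, which separates an Apache problem from upstream filtering.

        netstat -tlnp | grep :4204        # expect apache2 bound to 0.0.0.0:4204, not 127.0.0.1
        curl -I http://localhost:4204/    # works locally (already confirmed above)
        # from another machine:
        curl -I http://test.mydomain.com:4204/
        # if the local checks pass but the remote one times out, the block is
        # upstream of the server (provider firewall, security group, etc.)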

    Read the article

  • Hosting django backend for iPhone / Android app

    - by Ashok Fernandez
    I am looking to make an iPhone/Android app for my university using the Appcelerator Titanium framework. The app will rely heavily on a server backend which will pull information from other sites, figure out what is relevant to the user, and then deliver the content. Some of the information is individual to the user (calendar data), other bits update frequently but are shared (bus timetables), and others are static and the same for everyone (magazine articles). I was going to use Django as I am fairly proficient in Python, so I thought it would save time. My question is: which hosting services do you recommend for the server backend? I am expecting about 9000 people to use the app, with very random spikes in traffic, but unfortunately I have very little to go on at this stage. I have heard a lot about WebFaction; is it suitable for something like this, or am I likely to need something bigger? I don't really want to fork out for a VPS at this stage. What about Amazon's EC2? Would that be more suitable than WebFaction? Sorry for the fairly open-ended question; I'm sort of new to this, so I'm open to all suggestions.

    Read the article

  • Installing sphinx on a web hosting server

    - by fiftyeight
    I want to install Sphinx search on a web-hosting server. I'm on a Linux VPS with HostGator, but I have never actually installed anything on a remote server, so this will be a first for me. If there's anyone here who has installed Sphinx, they could really help me: I had some problems when using Sphinx on my PC with the permissions and the MySQL files, though eventually I got it working there. Anyway, I'd be really grateful if anyone can help me with some questions. Do I need root access to install Sphinx? I have root access to the server, but I'd connect to it as a normal user, since doing stuff as root is always less secure. Can anyone tell me as which user I need to execute the indexer and the search daemon? Should I use root access for this? When I did it as a normal user on my PC, it gave me some trouble with the PID file and the log files. The last time I executed the search daemon, I executed it as a normal user and it gave me some trouble: I created the folder /var/log/ for log files and did chmod 777 on it, but still, when I executed the search daemon, it created the PID file "searchd.pid" but with no permissions for some reason. Any idea why? Thanks in advance. fiftyeight
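
    On the permissions questions, a common arrangement as a sketch (the user name and paths are assumptions): run both indexer and searchd as an unprivileged user and point every writable path at directories that user owns, so nothing needs root or chmod 777.

        # as root, once:
        useradd -r sphinx
        mkdir -p /var/lib/sphinx /var/log/sphinx /var/run/sphinx
        chown -R sphinx:sphinx /var/lib/sphinx /var/log/sphinx /var/run/sphinx

        # sphinx.conf, searchd section (paths match the dirs above):
        searchd
        {
            listen     = 127.0.0.1:9312
            pid_file   = /var/run/sphinx/searchd.pid
            log        = /var/log/sphinx/searchd.log
            query_log  = /var/log/sphinx/query.log
        }

        # then run both as that user, never as root:
        su -s /bin/sh sphinx -c "indexer --all && searchd"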

    Read the article

  • Server Hosting + AWS

    - by ledy
    Since my dedicated servers are hosted at a "normal" hosting service, I wonder if there is a really cheap way to extend the server farm with AWS instances. E.g. it seems to be an efficient and flexible solution for data storage and resources for occasional data processing, too. However, it might be very inefficient to mix two data centres and transfer data from the current web hoster to Amazon and vice versa. In my case, the traffic for this continuous data exchange seems expensive, and the delay in moving the data back to the hoster leads to lag. What are best practices for mixing non-AWS and AWS systems? E.g.: how to move the hoster's data to AWS as log-file storage to run Urchin analysis, and/or port the log-file data into a BigTable for exhaustive analysis there. And after working with the data: how to bring it back to the hoster and use it with the web servers there? I am not going to move the whole server farm to Amazon, only "separate" parts or tasks, if the transfer/exchange does not lead to increased cost.
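
    On the mechanics of the log-file leg, a small sketch (the bucket name is assumed; s3cmd is one client among several): push rotated logs to S3 from the current hoster, process them in EC2, and ship only the condensed results back, avoiding a continuous two-way exchange.

        # on the dedicated server at the current hoster (cron job):
        s3cmd sync /var/log/apache2/ s3://my-log-bucket/www1/
        # on an EC2 instance, fetch and analyse:
        s3cmd sync s3://my-log-bucket/www1/ /data/logs/
        # ship only the aggregated output back:
        scp /data/reports/summary.csv hoster-server:/var/www/reports/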

    Read the article

  • Free Hosting control panel

    - by John Maxim
    Hello all, I'm in the middle of researching some of the best hosting control panels. The server I run is Ubuntu and I have some experience with ISPConfig 2 & 3. Since I haven't explored the others available, what are the recommended ones for an Ubuntu server? I ask because I find that there seems to be some disabling and modification required on an Ubuntu server if I need to use ISPConfig, which causes the server to change its actual way of running. It's quite good, though. Any more recommended ones? Something more organic, which doesn't require much breaking and changing? I'm not asking for the simplest one; I don't mind going the extra mile to install a powerful one, but something that sticks with most of Ubuntu's conventions would be ideal for me. And of course, if there happens to be something that meets the "Ubuntu conventions" requirement and is also simple to install, that would be a bonus. Thanks in advance.

    Read the article

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got about 20 domains hosted in my account, none of them are high volume, and they work just well enough: until today. This isn't meant to be a condemnation of ixwebhosting, though. Their prices are very low for what they offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites; they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, and multiple domains: not much. Who should I look at, and who should I look out for?

    Read the article

  • Growing a small hosting company [closed]

    - by user2353007
    We currently have a few servers: 1 WHM VPS (2GB), 1 MS SQL VPS (2GB), and 1 IIS VPS (2GB). The VPS servers are doing fine as far as uptime and response times, but we would like to add the following features: 1) monitoring with load statistics, and 2) failover. For the monitoring solution, I have looked at Zabbix, Zenoss, Nagios, and a couple of cloud offerings like monitor.us and Watchdog from Zerigo. Our current hosting company suggested we get a dedicated server or VPS and install load-balancing software (not sure I like that idea). I've looked into Rackspace and Amazon load balancers, which seem like the most feasible solutions for load balancing. Does anybody have any input on the monitoring and load-balancing products I'm looking into? Monitoring should track uptime as well as give reports on memory usage, disk usage, processor usage, and which processes/websites/users are responsible for the load. It would be ideal if the load balancer worked with any IP; I'm not sure if either the Rackspace or Amazon load balancers would allow load balancing with servers outside their datacenter. Thank you.

    Read the article
