Search Results

Search found 6826 results on 274 pages for 'dedicated hosting'.


  • Minimal Linux distribution with sshd and apt

    - by Sergey Mikhanov
    When I signed up for my Debian Linux VPS hosting, first logged on and invoked ps, there was only one user process running: sshd. As far as I can see, this was a minimal Linux with only two things installed and configured: sshd and apt (plus all dependencies, of course). I want to build (or use an existing) similar Linux distro; any advice on how to build (or pick) one? Googling "minimum linux" or "linux with sshd only" usually brings up Debian's netinstall, which is not what I want. Thanks in advance.
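
    For illustration, one way to assemble a comparably minimal Debian-style image is debootstrap; a rough sketch, where the target path, suite and mirror are placeholders rather than anything from the original post:

        # build a minimal root filesystem containing little more than apt
        debootstrap --variant=minbase stable /mnt/minimal http://deb.debian.org/debian
        # then add the SSH daemon inside the new system
        chroot /mnt/minimal apt-get install --no-install-recommends -y openssh-server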

    Read the article

  • Multiple FTP sites on a single IIS server with one IP

    - by kacalapy
    Can I have multiple IIS FTP sites using something similar to web sites' unassigned host headers? I have a dedicated server in a hosting facility and want to make a web site for each of my clients. To add/remove files and content I want FTP access to each site's root folder. Let's say I have 10 sites set up using unassigned host headers... how can I set up 10 analogous FTP sites on the same server, without using a defined IP address for each FTP site? Thanks all.

    Read the article

  • What is the best way to run ClamAV on Windows Server 2008 R2

    - by gabbsmo
    I'm hosting a WordPress site on Windows Server 2008 R2 and want to scan all files uploaded by users for viruses using this plugin: http://wordpress.org/extend/plugins/upload-scanner/. I'm on a really tight budget (no profit) so ClamAV seems like a good choice. What is the best way to run ClamAV under these circumstances? I'm considering the following options: just running the raw Windows build from http://sourceforge.net/projects/clamav/ and setting up definition updates with Task Scheduler (any way to automate updates of the scanner binaries?), or using a "distro" like ClamWin or Immunet (advertised on clamav.net). Any suggestions are welcome.
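
    For illustration, definition updates for the raw Windows build are normally handled by freshclam, which could be scheduled roughly like this (install path and time are placeholders; updating the scanner binaries themselves would still need a separate download step):

        REM hypothetical Task Scheduler entry: refresh virus definitions daily at 03:00
        schtasks /Create /SC DAILY /ST 03:00 /TN "ClamAV definition update" /TR "C:\ClamAV\freshclam.exe"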

    Read the article

  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6GB RAM hosting a couple of websites. Recently I have noticed that my cached memory size is quite high - see below: Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 99.1%id, 0.0%wa, 0.2%hi, 0.4%si, 0.0%st Mem: 6113256k total, 5949620k used, 163636k free, 398584k buffers Swap: 1048564k total, 104k used, 1048460k free, 3586468k cached After investigating whether there is some method to have this flushed or cleared, I stumbled upon this command: sync; echo 3 > /proc/sys/vm/drop_caches I read it could be useful to add this as a cron job. Is this method recommended, or could it lead to potential problems? The only concern I have is that I run one Magento installation on memcached - could this have any negative effects on it? I am certainly not a pro, therefore I would very much appreciate some expert advice. PS: My VPS runs CentOS 5 x64 and I have WHM + NGINX installed.
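
    For reference, a root cron entry running the quoted command might look like the following (the schedule is a placeholder; whether doing this at all is advisable is exactly the question being asked):

        # hypothetical root crontab line: drop page cache, dentries and inodes nightly at 03:30
        30 3 * * * /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches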

    Read the article

  • Cheap way to scale a Rails application

    - by VP
    I have an application that is getting big but is not yet giving me good revenue, which means there is little money to re-invest in it. In this scenario, I found a way to make a "cheap distributed Rails" deployment. I've got 4 VPSes, all of them on the same physical server. I added a load-balancing server running HAProxy on one dedicated VPS; that is where I pointed the virtual IP address my domain name is associated with. Behind this HAProxy I have two more VPSes running my Rails app, Passenger and memcached. Both app servers point to the same database server, my 4th VPS. So for $44/month I set up a distributed environment. It won't be my final choice, but now that the budget is short, is this a good way to deploy a Rails application? Any pros or cons? Is it worth my $44/month?

    Read the article

  • Best approach to design a service oriented system

    - by Gustavo Paulillo
    Thinking about service orientation, our team is involved in designing a new application. We are a group of 4 developers and a manager (who knows something about programming and distributed systems), each with his own opinion on service design. It is a distributed system: a user interface (web app) accesses services on a dedicated server (inside the firewall) to obtain the business logic operations. We have 2 main approaches, listed below: Modular services - many modules, each one consisting of a service (WCF); example: namespaces SystemX.DebtService, SystemX.CreditService, SystemX.SimulatorService. Unique service - all the business logic is centralized in a single service; example: SystemX.OperationService, where the web app calls the same service for all operations. In your opinion, which is best? Or is another approach better for this scenario?

    Read the article

  • Questions about Domains and DNS

    - by ShoX
    Hi, I am totally new to the DNS and server hosting world and not quite sure what I need. I want to get a domain and forward it to my own server, so that the user sees example.com in the URL bar and example.com/foo/bar will work. Depending on the subdomain, it should do different things (another base directory on the web server, FTP, etc). My email should also be able to be sent to and received by that server. What puzzles me is that in the A record I can only list IP addresses, not ports. So do I have to set up a nameserver on my own server? Or do I accomplish this via vhosts on my web server? I would appreciate any help or a link to a tutorial. I know how DNS works, know some basic Apache stuff, etc., so no need to explain that. Thanks

    Read the article

  • Xen HVM guest has severe clock drift

    - by ipartola
    I am seeing a very severe clock drift on my Xen HVM VPS, rented from a hosting provider, so I don't have access to the dom0 system. I continuously run ntpd, but the clock drifts by as much as 30 seconds in 5 minutes and NTP cannot keep up. Has anyone experienced this? Here are some details: $ dmesg | grep clock [ 0.160000] Measured 347 cycles TSC warp between CPUs, turning off TSC clock. [ 0.396000] * this clock source is slow. Consider trying other clock sources [ 0.550448] Switching to clocksource acpi_pm [ 0.653135] rtc_cmos 00:05: setting system clock to 2011-03-09 02:45:40 UTC (1299638740) $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource acpi_pm $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource acpi_pm

    Read the article

  • MySQL mistake with grant option

    - by John Tate
    After reading the MySQL documentation I am unsure whether creating a user WITH GRANT OPTION gives them the power to create users and grant privileges, or to change the privileges on other users' databases. I have been creating databases for users like this: CREATE DATABASE user; USE user; GRANT ALL PRIVILEGES ON *.* TO 'user'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION; Is this the best way of doing it, or have I just given my users too much control? They are people I am hosting sites for; thankfully at this point they are trustworthy. I use quotas. Edit: I have realized I have been granting users access to all databases. This is obviously stupid; I should be using this: GRANT ALL PRIVILEGES ON database.* TO 'user'@'localhost' IDENTIFIED BY 'password' What is the simplest way to revoke privileges for every user except root so I can quickly end this catastrophic rookie mistake?
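
    As a hedged sketch of the cleanup (account and database names are placeholders), the global privileges and GRANT OPTION can be revoked per account and replaced with a grant scoped to that user's own database, run through the mysql client as the MySQL root user:

        # strip the global privileges and GRANT OPTION, then re-grant only the user's own database
        mysql -u root -p -e "REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'user'@'localhost';"
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON user.* TO 'user'@'localhost'; FLUSH PRIVILEGES;"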

    Read the article

  • Challenges w.r.t. proximity between an application hosted outside Amazon and Amazon persistence services

    - by Kabeer
    Hello. This is about hosting a web portal. Earlier my topology was entirely based on Amazon AWS, but the price factor (especially for EC2) now makes me re-think. I'll quickly come to what I have finally arrived at. I'll launch the portal hosted on GoDaddy (unlimited plan on Windows). The portal uses SimpleDB for storing metadata and S3 for blobs. A locally available MySQL will be used for the ASP.NET provider services. Once the portal is profitable, I intend to move to Amazon in totality. Now, considering the proximity between GoDaddy and Amazon, would I face 'substantial' performance problems? Are there any suggestions to improve my topology?

    Read the article

  • Deliver email to Gmail AND Office 365?

    - by gbegley
    We moved our Office app hosting from Google Apps to Office 365. Many of us miss Google Apps, especially its superior search functionality. The pressure to use Office 365 has disappeared; many (but not all) of us would like to go back to Google Apps. Is it possible to configure our domain's mail delivery so that messages are delivered to both Google Apps' Gmail and Office 365, allowing users to choose which platform they prefer? If so, what are the options? The Google Apps documentation describes the ability to deliver messages to a secondary mail server using routing configuration. Currently our MX records point to Office 365. If I change the MX records to point to the Google Apps mail servers, is the "Office 365 MX record address" the address I would want to use as a Google Apps routing target?

    Read the article

  • How can I handle a .org domain on my own nameserver without paying for unwanted services?

    - by etuardu
    I have a dot-org domain that I use to run a website. Until now, I had an account with a hosting+domain provider. Recently I thought about running the website on my own web server and handling the domain on my own nameserver. What do I need to do in order to handle my .org domain myself? Do I still need a registrar? Is there a more direct way, provided by pir.org, to just fill in a nameserver to be bound to a domain name?

    Read the article

  • Backing up data in an encrypted way

    - by Eli Bendersky
    I have the following use case: there's some data on my PC I want to periodically back up online; I own some hosting, so I want to use that for the backups and don't want to pay for another backup service; and I want to encrypt my data locally prior to moving it to the server. I have no problem writing scripts to automate the process (say, periodically generate the backup and upload it by FTP to my server), but my main question is about step 3, the encryption: what is the recommended way to encrypt my files (say, collected into a .ZIP) prior to uploading them to the server? P.S. TrueCrypt seems popular but it's not quite what I'm looking for, since I don't want the files to be constantly encrypted here on my PC.
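
    One common approach to the encryption step is symmetric GnuPG encryption of the archive before upload; a minimal sketch, assuming the files are already collected into a ZIP (file and directory names are placeholders):

        # archive, encrypt with a passphrase, and upload only the encrypted file
        zip -r backup.zip ~/important-data
        gpg --symmetric --cipher-algo AES256 -o backup.zip.gpg backup.zip
        # push backup.zip.gpg to the server by FTP; the passphrase never leaves the PC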

    Read the article

  • Do you find using a VPS worthwhile?

    - by Grant Palin
    I am currently on shared hosting, and have been recently looking at the idea of switching to a VPS instead. From what I have gathered, a VPS allows you more control over your server setup. But at the same time you have to set it up yourself, and maintain it. This is the bit I am asking about... Despite the power and flexibility you get from using a VPS, you have to take care of it yourself. Is it worth it? Some context: I am primarily a Windows user, but have been tinkering with various Linux distros off and on for several years. I know enough about Linux to get by, or to be dangerous - take your pick. I've also done some tinkering on my current host, but have no serious sysadmin experience. There's always a first time!

    Read the article

  • Passenger, Apache and avoiding page caching

    - by user38382
    I'm hosting a Rack application with Passenger and Apache. The application is set up to cache the content of each request to the public directory, which allows Apache to serve the content directly as a static page for future requests. I would like to tell Apache, presumably through some rewrite rules, that any request with query parameters should not be cached but instead passed down to the Rack application. With a mongrel setup I would just redirect it to the balancer if it meets my rewrite conditions. How do you do the same with Passenger?

    Read the article

  • Backing up mail accounts without full access to mailserver

    - by Agos
    Hi everybody. I'm in the process of migrating some stuff off a (crappy) host. Files were easy with SSH access, but mail is giving me some thoughts. This is the situation: qmail server, no SSH access; I own the postmaster account; accounts are accessible via a web interface or POP3. I'm interested in transferring emails, but if whole accounts can be transferred it'd be even better. Being POP3, I'm fairly confident every message has already been downloaded, but of course I'd like to download the whole thing to be safer. Right now I have this in mind: enter the web admin, change each account's password (it's only a dozen or so accounts, so still feasible), send the new password to each user telling them please not to change it, fetch everything with getmail or something like that, and put it on the new IMAP server in some way (which I still haven't planned). But I feel there should be a better way to do this. Is there? Thanks in advance!

    Read the article

  • Windows Server 2008 R2 DNS Server not working?

    - by wolfvilleian
    I have a server running Windows Server 2008 R2 that hosts a DNS server and Exchange 2010 and is a domain controller. One computer on the network (and domain) can ping the server only 25% of the time, and when it tries to ping its own hostname that does not work either. However, another computer that is on the domain can ping the server fine, and a computer on the network but not on the domain can ping fine as well. The computer that cannot ping the server is set up to use only the DNS server running on the server (the secondary DNS points to nothing), and it resolves the server's hostname to the external IP rather than the internal one, while the other two computers correctly resolve the internal address. All 3 computers and the server are connected directly to the same switch. Does anyone have any ideas on how to fix this? Thanks

    Read the article

  • What's required to enable communication between two IP ranges located behind one switch?

    - by Eric3
    Within our co-located networking closet, we have control over two ranges of 254 addresses, e.g. 64.123.45.0/24 and 65.234.56.0/24. The problem is, if a host has only one IP address, or a block of addresses in only one range, it can't contact any of the addresses in the other subnet. All of our hosts use our hosting provider's respective gateway, e.g. 64.123.45.1 or 65.234.56.1. A host on the 64.123.45.0/24 range can contact the 65.234.56.1 gateway and vice versa. Everything in our closet is connected to an HP ProCurve 2810 (a Layer 2-only switch), which connects through a Juniper NetScreen-25 firewall to the outside world. What can I do to enable communication between the two ranges? Are there some settings I can change, or do I need better networking equipment?
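
    Since both ranges sit on the same Layer 2 segment, one possible illustration (not necessarily what the provider intends) is simply to give a host a secondary address in the other range so it can reach that subnet directly; the address and interface name below are placeholders:

        # add an alias address from the second range to the same interface
        ip addr add 65.234.56.20/24 dev eth0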

    Read the article

  • SQL 2008 R2 3rd Party Peer-to-Peer Replication, Global Site Distribution

    - by gombala
    We are looking at hosting 3 globally distributed SQL Server installations at different data centers. The intent is that Site A will serve web traffic and data for a specific region, and the same for Sites B and C. In the case that the Site A data center goes down, loses connectivity, etc., the users of Site A will fail over to Site B or C (depending on which is up). Also, if a user from Site A travels to Site C they should be able to access their data as it was on Site A. My question is: what SQL replication technology (SQL Server replication or 3rd party) can support this scenario? We are using SQL 2008 R2 Enterprise at each site, and each site runs on top of VMware with a NetApp filer. Would something like distributed caching help in this scenario as well? We have looked at and tested peer-to-peer replication but have encountered conflicts during our testing. I imagine there are other global data centers that have encountered and solved this issue.

    Read the article

  • UDP Reverse Proxy

    - by user180195
    I have found a way to make a reverse proxy to an external IP. Here is how a request gets passed: the client sends a request; the request reaches the first datacenter; that datacenter, acting as a reverse proxy, redirects the exact same request to another datacenter; that second datacenter then processes the request. However, this only works with TCP/HTTP (I'm currently looking at HAProxy). I am hosting game servers at the other datacenter (where the proxy is not) that use the UDP protocol. Do you know how I can set up a reverse proxy using the UDP protocol?
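
    Since HAProxy-style proxying covers TCP/HTTP only, the usual stand-in for UDP is not a proxy at all but packet-level forwarding, e.g. an iptables DNAT on the front datacenter's box; a rough sketch with placeholder addresses and ports:

        # relay incoming UDP game traffic to the backend datacenter and masquerade the replies
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A PREROUTING  -p udp --dport 27015 -j DNAT --to-destination 203.0.113.10:27015
        iptables -t nat -A POSTROUTING -p udp -d 203.0.113.10 --dport 27015 -j MASQUERADE

    Note that with this kind of relay the game servers see the relay's address as the client IP, which may or may not be acceptable.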

    Read the article

  • htaccess - Redirects more than 1 level deep not working

    - by barfoon
    Hey everyone, I just moved to shared hosting on GoDaddy and I'm trying to get my .htaccess rules working. Here's what I have: ErrorDocument 404 /error.php Options FollowSymLinks RewriteEngine On RewriteBase / RewriteCond %{HTTP_HOST} ^www\.mydomain\.org$ RewriteRule ^(.*)$ http://mydomain.org/$1 [R=301,L] RewriteRule ^view/(\w+)$ viewitem.php?itemid=$1 [R=301,L] RewriteRule ^category/(\w+)$ viewcategory.php?tag=$1 [R=301,L] RewriteRule ^faq$ faq.php RewriteRule ^about$ about.php RewriteRule ^contact$ contact.php RewriteRule ^submit$ submit.php RewriteRule ^contactmsg$ handler-contact.php All the pages at the root of the domain seem to be working, i.e. mydomain.org/faq and mydomain.org/about work. But whenever I try mydomain.org/category/somecategory, I get a 404. How can I fix my .htaccess to obey these rules that are more than 1 level deep? Thanks,

    Read the article

  • What are the requirements for getting Django translations to work?

    - by Espen Christensen
    Hi, I am hosting several Django sites on a CentOS 5 box, but I'm having difficulties with translations. First I had to upgrade the gettext package from 0.14 to 0.16, but that didn't help. Now I can make and compile translation files with the management commands, but the translations do not show. I am sure the translation files are located in the right place, since they work with the same setup on a local installation, and Django's own translation files do not work either (e.g. the admin is not translated). What could I be missing in my server setup that makes this happen?

    Read the article

  • Windows 2008 VPS always crashes when out of disk space

    - by Pickels
    Hello, I am renting a Windows Server 2008 DC SP2 VPS for hosting my ASP.NET projects. Now, for the second time this month, my VPS has run out of disk space. The first time it was a log file that got too big, and yesterday it was my mistake for uploading a website without noticing the lack of space on my VPS. The side effect is that my VPS corrupts some files when trying to write them: last time it was Plesk that stopped working, yesterday it was IIS. So I was wondering, is this normal behavior? I called my service provider to ask if they could restore a back-up and to ask if this is normal, and they assured me it was. I am not trying to blame them, and I know it's mostly my fault for not monitoring my VPS better or for not setting better defaults.

    Read the article

  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The attacker changed some template files in the project and one core file of the web framework (it's one of the famous PHP frameworks). We found all the corrupted files with git and reverted them. So now I need to find the weak point. With high probability we can say that it was not FTP or SSH password theft. The hosting provider's support specialist (after analyzing the logs) said that it was a security hole in our code. My questions: 1) What tools should I use to review the access and error logs of Apache? (Our server distro is Debian.) 2) Can you give tips for detecting suspicious lines in the logs? Maybe tutorials or examples of useful regexps or techniques? 3) How do I separate "normal user behavior" from suspicious behavior in the logs? 4) Is there any way to prevent attacks in Apache? Thanks for your help.
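
    For question 1, plain command-line tools go a long way; a few illustrative starting points, assuming default Debian Apache log locations (the patterns are examples, not a complete list of indicators):

        # unusual POST targets and classic injection strings in the access log
        grep " POST " /var/log/apache2/access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head
        grep -Ei "base64_decode|eval\(|/etc/passwd|\.\./\.\." /var/log/apache2/access.log
        # most active client IPs, to spot automated probing
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20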

    Read the article

  • iPhone and Vertex Buffer Objects

    - by dancer
    I've just started playing around with OpenGL ES on the iPhone over the past couple of weeks, and I'm looking at refactoring some of my code to use Vertex Buffer Objects (VBOs). Before I do, though, I would like to make sure it'll be worth it. The problem is that, as far as I know, the only reason you create VBOs is to shift a chunk of data onto the graphics card so that it doesn't need to be retrieved from system RAM when it's used. The iPhone, however, does not have any dedicated graphics RAM that I'm aware of, so I'm struggling to see why I would benefit at all from using VBOs. I have seen talk around the internet with conflicting opinions, and Apple certainly wants devs to use them, so there's probably still a reason to use them, but I just wanted to see if anyone on SO had an opinion to add.

    Read the article
