Search Results

Search found 4851 results on 195 pages for 'hosting'.


  • Repositories for CentOS that don't suck?

    - by Keyo
    I'm used to using Ubuntu/Debian repositories and they are great: I can apt-get just about any package and it'll be there. I have not found this on CentOS. I called my hosting company and they suggested I install Atomic Turtle since it's compatible with cPanel. This didn't work when I tried to install git:

        yum install git
        ...
        No package git available

    The same thing happens for just about any package; the default repositories are pathetic. So perhaps there are other repositories I can use. Can anyone suggest any? Edit: The problem was cPanel excluding some git dependencies in yum.conf. See http://www.cmdln.org/2010/05/07/install-git-on-centos-cpanel-server/
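    For anyone hitting the same thing, the fix boils down to cPanel's exclude list in /etc/yum.conf. A quick way to confirm and work around it (a sketch; --disableexcludes is a standard yum option):

        grep ^exclude /etc/yum.conf
        # bypass the cPanel excludes for this one install
        yum --disableexcludes=main install git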


  • FTP "PUT" fails from Virtual Machine, but not host PC: 504 Command not implemented for that parameter

    - by BrianH
    I have an FTP script I'm using to automate a file transfer. The transfer works fine on my PC (XP SP2), but when I try to run it on a VM on my PC (also XP SP2), the put command returns: 504 Command not implemented for that parameter. FTP file:

        open [ftp site]
        [username]
        [password]
        cd [directory on FTP server]
        binary
        hash
        put ..\[subfolder1]\[Subfolder2]\[subfolder3]\[filename]
        bye

    The FTP site/server is around the world and not under my control. From what I understand, a 504 means the command should NEVER work, but since the same script DOES work on my PC (hosting the VM), that eliminates syntax, file naming, etc. The put command, when triggered from the VM, actually creates a 0-length file on the target FTP server, but doesn't populate it.
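    One thing I plan to try (a sketch; -d and -s: are standard switches of the Windows command-line ftp client): run the script with debugging enabled in both places, so the client prints every raw protocol command, and compare the output from the host and the VM:

        ftp -d -s:transfer.txt
        rem -d echoes each command as it is sent; the first rejected one
        rem is the parameter the server is actually complaining about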


  • Clear / Flush cached memory

    - by TheDave
    I have a small VPS with 6GB RAM hosting a couple of websites. Recently I have noticed that my cached memory size is quite high; see below:

        Cpu(s):  0.1%us, 0.1%sy, 0.0%ni, 99.1%id, 0.0%wa, 0.2%hi, 0.4%si, 0.0%st
        Mem:   6113256k total, 5949620k used,  163636k free,  398584k buffers
        Swap:  1048564k total,     104k used, 1048460k free, 3586468k cached

    While investigating whether there is some method to have this flushed or cleared, I stumbled upon this command:

        sync; echo 3 > /proc/sys/vm/drop_caches

    I read it could be useful to add this as a cron job. Is this method recommended, or could it lead to potential problems? The only concern I have is that I run one Magento installation on memcached; could this have any negative effects on it? I am certainly not a pro, therefore I would very much appreciate some expert advice. PS: My VPS runs on CentOS 5 x64 and I have WHM + NGINX installed.
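    For reference, the cron job I had in mind would look like this (a sketch; from what I've read, the kernel reclaims page cache automatically under memory pressure, which is exactly why I'm unsure the job is needed at all):

        # /etc/cron.d/drop-caches (hypothetical file; cron.d format includes a user field)
        0 4 * * * root /bin/sync && /bin/echo 3 > /proc/sys/vm/drop_caches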


  • Self-hosted browser-based remote desktop script?

    - by rlsaj
    I need a self-hosted, browser-based remote desktop script that will connect me from any PC to my work PC. I need to host this script either within my own dedicated hosting or on my work PC. The PC that I need to remote into is always the same one (Win7), the IP never changes, and I have access to the router/firewall within. I have tried many remote desktop services and applications - LogMeIn, TeamViewer, (Ultra/Tight) VNC, GoToMyPC, iTeleport Connect and even Windows Remote Desktop - and the web services (or ports) are blocked at whatever free wi-fi/hotel/coffee shop I am at. Note that I will need to be able to access this from any PC, so I won't be able to install any applications (or use any portable software) - hence my thinking that it will need to be browser-based on a standard (not blocked) port. If I can set up a web-based remote desktop application - really a homebrew LogMeIn - then I should solve my problem. What is the best option here?


  • Minimal Linux distribution with sshd and apt

    - by Sergey Mikhanov
    When I signed up for my Debian Linux VPS hosting, first logged on and invoked ps, there was only one user process running: sshd. As far as I can see, this was a minimal Linux with only two things installed and configured: sshd and apt (plus all dependencies, of course). I want to build (or use an existing) similar Linux distro; any advice on how to build (or pick) one? Googling "minimal linux" or "linux with sshd only" usually brings up Debian's netinstall, which is not what I want. Thanks in advance.
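    For what it's worth, the closest I've found to building one myself is debootstrap, which installs a Debian base system into a directory (a sketch; the suite, target path and mirror are placeholders):

        debootstrap --variant=minbase stable /mnt/target http://ftp.debian.org/debian
        chroot /mnt/target apt-get install openssh-server
        # still needed before it boots on its own: a kernel, a bootloader,
        # fstab and network configuration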


  • AD account locks out when using Outlook 2007?

    - by Down Town
    Hi, we have a problem with our Windows Server 2008 forest and Exchange. We are buying Exchange hosting from another firm, and the Exchange server is in their Windows Server 2008 forest. So we have two forests and there are no trusts between them. Our own forest logon name is [email protected] and we also use the same email address to log on to the Exchange mailbox. Everything works fine if both our AD account and the Exchange mailbox account have the same password, but if the passwords don't match, our AD account gets locked out. I have tried to figure out why Outlook sends failed logon attempts to our AD. If someone can help, please do.
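    What I've tried so far on the domain controller, to see which machine is submitting the stale credentials (4740 is the standard account-lockout event ID on Server 2008; the query count is arbitrary):

        wevtutil qe Security /q:"*[System[(EventID=4740)]]" /f:text /c:5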


  • Windows Server 2008 R2 DNS Server not working?

    - by wolfvilleian
    I have a server running Windows Server 2008 R2 that hosts a DNS server and Exchange 2010 and is a domain controller. One computer on the network (and domain) can ping the server only about 25% of the time, and pinging its own hostname also does not work. However, another computer that is on the domain can ping it fine, and another computer on the network but not on the domain can ping fine as well. The computer that cannot ping the server is set up to use only the DNS server running on the server (the secondary DNS points to nothing), and it resolves the hostname of the server to the external IP, not the internal one, while the other two computers correctly resolve the internal address. All three computers and the server are connected directly into the same switch. Does anyone have any ideas on how to fix this? Thanks
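    From the failing machine I've been testing like this, to separate the DNS answer from the resolver cache (the server name and internal IP are placeholders):

        rem ask the DC's DNS service directly, bypassing the local cache
        nslookup myserver.mydomain.local 192.168.1.10
        rem then clear the cache and retry a normal lookup
        ipconfig /flushdns
        nslookup myserver.mydomain.local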


  • Windows 2008 VPS always crashes when out of disk space

    - by Pickels
    Hello, I am renting a Windows Server 2008 DC SP2 VPS for hosting my ASP.NET projects. Now, for the second time this month, my VPS has run out of disk space. The first time it was a log file that grew too big, and yesterday it was my mistake for uploading a website without noticing the lack of space on my VPS. The side effect this has is that my VPS corrupts some files when trying to write them. Last time it was Plesk that stopped working; yesterday it was IIS. So I was wondering: is this normal behavior? I called my service provider to ask if they could restore a back-up and to ask if this is normal, and they assured me it was. I am not trying to blame them, and I know it's mostly my fault for not monitoring my VPS better or for not setting better defaults.
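    To at least catch this earlier next time, I'm thinking of scheduling something like this via Task Scheduler (a sketch; the 2GB threshold and drive letter are placeholders):

        powershell -command "$d = Get-PSDrive C; if ($d.Free -lt 2GB) { 'LOW DISK: {0:N1} GB free' -f ($d.Free/1GB) }"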


  • SQL 2008 R2 3rd Party Peer-to-Peer Replication, Global Site Distribution

    - by gombala
    We are looking at hosting three globally distributed SQL Server installations at different data centers. The intent is that Site A will serve web traffic and data for a specific region, and the same goes for Sites B and C. If the Site A data center goes down, loses connectivity, etc., the users of Site A will fail over to Site B or C (whichever is up). Also, if a user from Site A travels to Site C, they should be able to access their data as it was on Site A. My question is: what SQL replication technology (SQL Server replication or third-party) can support this scenario? We are using SQL 2008 R2 Enterprise at each site; each site runs on top of VMware with a NetApp filer. Would something like distributed caching help in this scenario as well? We have looked at and tested peer-to-peer replication but have encountered issues with conflicts during our testing. I imagine there are other global data centers that have encountered and solved this issue.


  • postfix, webmin installed. What's next?

    - by Johnny Craig
    I'm trying to get IMAP running and don't know the problem. I'm a developer, not a network guy (our network guy left). We had Postfix installed already for outgoing mail on 8 domains. We only had incoming mail on 1 domain, but that mail server is located on a different IP. Now we want incoming mail on another domain, but we don't want it on another IP; we want it on the same IP as the website itself. I installed Dovecot today because my hosting company said I needed it, and it seems to run fine. Do I need Dovecot AND Postfix, or are they the same thing? Dovecot does not show up anywhere in Webmin. What I can't seem to figure out is how to add a user email so I can try to telnet in on port 143. I think I have everything installed, just need the next step... sorry for the newb question
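    From what I've read so far, Postfix moves the mail (SMTP) while Dovecot serves mailboxes to clients (IMAP/POP3), so both are needed. Assuming the default Dovecot setup where system users own their mailboxes, I believe the test would look something like this (a sketch; the username is a placeholder):

        useradd -m maryjane
        passwd maryjane
        telnet localhost 143
        # then at the IMAP prompt:
        #   a1 LOGIN maryjane thepassword
        #   a2 LIST "" "*"
        #   a3 LOGOUT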


  • Running multiple sites with multiple domains in Apache

    - by PsychoData
    I am having a rough time running Apache with multiple domain names; here is a snippet of my config file. I keep getting an error saying that NameVirtualHost has no VirtualHosts. I want them both running on the same IP and I'm not sure why this doesn't work. I've been digging through the documentation for VirtualHost, NameVirtualHost, and Apache's page about name-based virtual hosting. The example on the name-based page is almost exactly my config! What am I doing wrong?

        Listen *:80
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.sample1.net
            DocumentRoot /var/www/sample1-net
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.example2.net
            DocumentRoot /var/www/example2-net
        </VirtualHost>
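    For the record, this is how I've been poking at it (both are standard apachectl switches). I've read the warning often means the NameVirtualHost directive is duplicated in another included file, or its address doesn't match the vhost blocks, which -S should make visible:

        apachectl configtest   # syntax check
        apachectl -S           # dump how Apache actually grouped the vhosts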


  • Backing up mail accounts without full access to mailserver

    - by Agos
    Hi everybody. I'm in the process of migrating some stuff from a (crappy) hosting provider. Files were easy with SSH access, but mail is giving me some thoughts. This is the situation:

      - qmail server, no SSH access
      - I own the postmaster account
      - accounts are accessible via web interface or POP3

    I'm interested in transferring emails, but if whole accounts can be transferred, even better. Being POP3, I'm fairly confident every message has been downloaded, but of course I'd like to download the whole thing to be safer. Right now I have this in mind:

      1. Enter the web admin
      2. Change each account's password (it's only a dozen or so accounts, so still feasible)
      3. Send the new password to each user, telling them please not to change it
      4. getmail or something like that
      5. Put everything on the new IMAP server in some way (which I still haven't planned)

    But I feel there should be a better way to do this. Is there? Thanks in advance!
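    For step 4, the per-account getmail rc I've drafted looks like this (SimplePOP3Retriever and the Maildir destination are standard getmail features; the host, credentials and paths are placeholders):

        [retriever]
        type = SimplePOP3Retriever
        server = pop.oldhost.example
        username = someone@example.com
        password = the-new-password

        [destination]
        type = Maildir
        path = ~/mailbackup/someone/

        [options]
        delete = false
        read_all = true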


  • Passenger, Apache and avoiding page caching

    - by user38382
    I'm hosting a Rack application with Passenger and Apache. The application is set up to cache the content of each request to the public directory, which allows Apache to serve the content directly as a static page for future requests. I would like to tell Apache, presumably through some rewrite rules, that any request with query parameters should not be cached but instead passed down to the Rack application. With a Mongrel setup I would just redirect it to the balancer if it meets my rewrite conditions. How do you do the same with Passenger?
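    My current guess, based on Passenger serving existing static files itself and passing everything else to the app, is to invert the Mongrel rule: only map to the cache file when the query string is empty (a sketch; it assumes the cache layout mirrors the URL space with .html files):

        RewriteEngine On
        # serve the cached copy only when there is no query string
        RewriteCond %{QUERY_STRING} ^$
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}.html -f
        RewriteRule ^(.*)$ $1.html [L]
        # anything else falls through to the Rack app via Passenger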


  • Questions about Domains and DNS

    - by ShoX
    Hi, I am totally new to the DNS and server hosting world and not quite sure what I need. I want to get a domain and point it to my own server, so that the user sees example.com in the URL bar and example.com/foo/bar will work. Depending on the subdomain, it should do different things (another base directory on the web server, FTP, etc.). Also, my email should be able to be sent to and received by that server. What confuses me is the fact that in the A record I can only list IP addresses and no ports. So do I have to set up a nameserver on my own server? Or do I accomplish this via vhosts on my web server? I would appreciate any help or a link to a tutorial. I know how DNS works, know some basic Apache stuff, etc., so no need to explain that. Thanks
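    To make the question concrete, here is the zone I have in mind (names and IP are placeholders). My understanding is that the port is implied by the protocol the client speaks (80 for HTTP, mail via the MX record), so the per-subdomain split would have to happen in the web server's vhosts:

        example.com.       IN  A    203.0.113.10
        www.example.com.   IN  A    203.0.113.10
        files.example.com. IN  A    203.0.113.10
        example.com.       IN  MX   10 mail.example.com.
        mail.example.com.  IN  A    203.0.113.10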


  • htaccess - Redirects more than 1 level deep not working

    - by barfoon
    Hey everyone, I just moved to shared hosting on GoDaddy and I'm trying to get my .htaccess rules working. Here's what I have:

        ErrorDocument 404 /error.php
        Options FollowSymLinks
        RewriteEngine On
        RewriteBase /

        RewriteCond %{HTTP_HOST} ^www\.mydomain\.org$
        RewriteRule ^(.*)$ http://mydomain.org/$1 [R=301,L]

        RewriteRule ^view/(\w+)$ viewitem.php?itemid=$1 [R=301,L]
        RewriteRule ^category/(\w+)$ viewcategory.php?tag=$1 [R=301,L]
        RewriteRule ^faq$ faq.php
        RewriteRule ^about$ about.php
        RewriteRule ^contact$ contact.php
        RewriteRule ^submit$ submit.php
        RewriteRule ^contactmsg$ handler-contact.php

    All the pages at the root of the domain seem to be working, i.e. mydomain.org/faq and mydomain.org/about work. But whenever I try mydomain.org/category/somecategory, I get a 404. How can I fix my .htaccess to obey these rules that are more than one level deep? Thanks,
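    Update: two hedged guesses I'm about to test. \w matches only [A-Za-z0-9_], so a slug like some-category would never match and falls through to the 404; and [R=301] on these internal mappings forces an external redirect to the query-string URL instead of a silent rewrite:

        RewriteRule ^view/([\w-]+)$ viewitem.php?itemid=$1 [L]
        RewriteRule ^category/([\w-]+)$ viewcategory.php?tag=$1 [L]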


  • PHP include path problem: same code works on Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to add an extra prefix of .:/usr/share/pear:/usr/share/php: to the include path. I tried setting the include path to different things but it just doesn't work. The file is in a directory called language in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file.

        Fatal error: require_once() [function.require]: Failed opening required
        '/home/neo/public_html/migration/include/class/core/storage.inc'
        (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration')
        in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
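    Update: one workaround I'm testing is pinning the include path at the top of the entry script, so the distro's php.ini default stops mattering (set_include_path and PATH_SEPARATOR are standard PHP):

        <?php
        // prepend this file's directory so relative includes resolve the
        // same way on both boxes, regardless of the php.ini default
        set_include_path(dirname(__FILE__) . PATH_SEPARATOR . get_include_path());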


  • Trouble with resolving hostnames on CentOS using Bind

    - by cabaret
    I'm taking a course on server administration at school and have managed to set up virtual hosting in Apache and a DNS server on a virtual machine. However, I have now set up an old PC to run CentOS and I'm trying the same on that box. The problem I ran into is that I can't resolve hostnames from the Linux box. I have set the nameserver in /etc/resolv.conf to the IP of the CentOS machine, but when I try, for example, ping google.com I get:

        ping: unknown host google.com

    However, when I do ping 66.102.13.105 (which is a Google IP; I figured that out by pinging on my Mac) I get:

        PING 66.102.13.105 (66.102.13.105) 56(84) bytes of data.
        64 bytes from 66.102.13.105: icmp_seq=1 ttl=52 time=15.5 ms

    I'm slightly confused about why this is happening. Could it be because of my router sitting between the Linux machine and the cable modem? It's a D-Link something-something. Thanks in advance
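    Things I've been told to check, to separate "named isn't answering" from "named can't recurse" (dig comes with the bind-utils package; the forwarder IP is just an example):

        dig @127.0.0.1 google.com        # is named answering locally at all?
        netstat -lnup | grep :53         # is it listening, and on which address?
        # if named answers but can't recurse past the router, a forwarder
        # in named.conf's options block is a common workaround:
        #   forwarders { 8.8.8.8; };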


  • How to analyse logs after the site was hacked

    - by Vasiliy Toporov
    One of our web projects was hacked. The malefactor changed some template files in the project and one core file of the web framework (it's one of the famous PHP frameworks). We found all corrupted files with git and reverted them. So now I need to find the weak point. With high probability we can say that it's not an FTP or SSH password abduction. The support specialist at the hosting provider (after log analysis) said that it was a security hole in our code. My questions:

      1. What tools should I use to review the access and error logs of Apache? (Our server distro is Debian.)
      2. Can you give tips for detecting suspicious lines in logs? Maybe tutorials or examples of some useful regexps or techniques?
      3. How do I separate "normal user behavior" from suspicious behavior in logs?
      4. Is there any way of preventing such attacks in Apache?

    Thanks for your help.
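    To show where I'm starting from on question 2, these are the only heuristics I have so far over the Debian Apache layout (the patterns are guesses, not a complete audit):

        # requests smuggling code or fetching remote payloads
        grep -iE 'base64_|eval\(|wget%20|curl%20' /var/log/apache2/access.log*
        # which URLs receive POSTs, ranked; an odd script at the top is a lead
        awk '$6 ~ /POST/ {print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn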


  • Remote access not working without connected monitor

    - by winSharp93
    I am trying to configure Windows Server 2008 as a home server for my personal use (mainly for storing documents, hosting source control, etc.). The "server" consists of an Intel Atom 2700DC board and an Intel SSD. Configuring remote access to the server, I am confronted with a very strange problem: as long as a monitor is connected to the server, remote access works without any problems. However, when no monitor is connected at boot time, remote access simply won't work (I keep getting errors when trying to connect, saying that the remote server was not found or that remote access is disabled). Windows definitely boots when no monitor is connected, as I receive a message asking whether to enter safe mode when booting after powering the server down by pulling the power cord. When I plug in a monitor after boot, it stays turned off and remote desktop connections still fail. Do you have any ideas about what I could try?


  • Apache does not serve non-locally

    - by yodaj007
    I have a freshly installed instance of Fedora 16 inside VirtualBox using bridged networking. On it, as root, I typed:

        yum -y install httpd
        service httpd start
        ifconfig

    Inside the VM, I can open a web browser to localhost and I get the Apache test page; it works. But in Windows (the machine hosting the VM), I point my browser to the IP address returned by ifconfig (192.168.2.122) and the connection times out. I can go to a command prompt and ping the VM. Is there a firewall or something that comes with Fedora by default? Or is there something I need to change in a config file?
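    What I plan to check first: I've read Fedora does ship iptables rules that block inbound port 80 by default (a sketch; the inserted rule is non-persistent):

        iptables -L -n | grep -w 80      # is anything accepting :80?
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        # to make it permanent, use system-config-firewall or:
        service iptables save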


  • Backing up data in an encrypted way

    - by Eli Bendersky
    I have the following use case:

      - There's some data on my PC I want to periodically back up online
      - I own some hosting, so I want to use that for the backups; I don't want to pay for another backup service
      - I want to encrypt my data locally prior to moving it to the server

    I have no problem writing scripts to automate the process (say, periodically generate the backup and upload it by FTP to my server), but my main question is about step 3, the encryption: which way is recommended to encrypt my files (say, collected into a .ZIP) prior to uploading to the server? P.S. TrueCrypt seems popular, but it's not quite what I'm looking for, since I don't want the files to be constantly encrypted here on my PC.
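    The best I've come up with so far is a symmetric GnuPG pass over the archive (a sketch; paths are placeholders):

        zip -r backup.zip ~/documents
        gpg --symmetric --cipher-algo AES256 backup.zip   # produces backup.zip.gpg
        rm backup.zip                                     # keep only the ciphertext
        # restore later with: gpg --decrypt backup.zip.gpg > backup.zip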


  • mailman not relaying email to external addresses

    - by gozzilli
    I have a setup of Mailman with Postfix on an Ubuntu 12.04 server. My problem is that mailing-list emails are not forwarded to email addresses external to my institution. However:

      - the initial welcome email is received by everyone, internally and externally
      - in fact, a simple email from the command line with mail is successfully sent to anyone
      - after that, mailing-list emails are only forwarded to internal addresses
      - the domain name I'm using for the server is not that of my institution, who is hosting the server

    Here is my main.cf:

        myorigin = sub.myinstitution.tld
        mynetworks = 127.0.0.0/8 xxx.xxx.xxx.xxx/16 # this is my institution's IP range
        relayhost = smtp.myinstitution.tld
        inet_interfaces = loopback-only
        local_transport = error:local delivery is disabled
        virtual_alias_maps = hash:/etc/postfix/virtual
        smtpd_recipient_restrictions = permit_mynetworks
        myhostname = mywebsite.tld
        mydestination = $myhostname, localhost.$mydomain, localhost

    I also found two related threads on Server Fault and the Ubuntu forums, but neither of those solutions seems to do the trick for me. Any help would be much appreciated.
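    What I've checked so far with the standard Postfix tools, to see whether the external copies are deferred locally or rejected by the institution's relay:

        postqueue -p                  # anything stuck in the queue, and why
        grep -i 'status=' /var/log/mail.log | tail -20
        # look for status=bounced/deferred lines on the external recipients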


  • Xen HVM guest has severe clock drift

    - by ipartola
    I am seeing very severe clock drift on my Xen HVM VPS, rented from a hosting provider, so I don't have access to the dom0 system. I continuously run ntpd, but the clock drifts by as much as 30 seconds in 5 minutes and NTP cannot keep up. Has anyone experienced this? Here are some details:

        $ dmesg | grep clock
        [    0.160000] Measured 347 cycles TSC warp between CPUs, turning off TSC clock.
        [    0.396000] * this clock source is slow. Consider trying other clock sources
        [    0.550448] Switching to clocksource acpi_pm
        [    0.653135] rtc_cmos 00:05: setting system clock to 2011-03-09 02:45:40 UTC (1299638740)

        $ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
        acpi_pm

        $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        acpi_pm
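    Stopgaps I'm considering until the provider can look at dom0 (tinker panic is a standard ntp.conf directive; ntpdate -u uses an unprivileged port, so it can run alongside ntpd):

        # /etc/ntp.conf: keep stepping even when the offset exceeds the sanity limit
        tinker panic 0

        # root's crontab, as a blunt fallback:
        */5 * * * * /usr/sbin/ntpdate -u pool.ntp.org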


  • How can I handle a .org domain on my own nameserver without paying for unwanted services?

    - by etuardu
    I have a dot-org domain that I use to run a website. Until now, I had an account with a hosting+domain provider. Recently I thought of running the website on my own web server and handling the domain on my own nameserver. What do I need to do in order to handle my .org domain on my own? Do I still need a registrar? Is there a more direct way, provided by pir.org, to just register a nameserver against a domain name?


  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves: things like networking, storage and monitoring. As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFSv4. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system, and recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline     Completed: read failure  90%        6576             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline     Completed: read failure  90%        6087             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline     Completed: read failure  10%        5901             656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline     Completed: read failure  90%        5818             651637856

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away, just like syslog says it is.
    But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate with the high IO wait times! My interpretation is that:

      - Disk 8 is experiencing an odd hardware fault that results in intermittent long operation times.
      - Somehow this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s):

      - How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?
      - Am I being naïve to think that the RAID card should have dealt with this?
      - How can I prevent a single misbehaving disk from impacting the entire array?
      - Am I missing something?
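    PS: To poke the suspect disk directly I've been using smartctl's 3ware addressing (the -d 3ware,N syntax is standard smartmontools usage; port 8 is my suspect disk, adjust to your layout):

        smartctl -a -d 3ware,8 /dev/twa0        # full SMART dump for port 8
        smartctl -t long -d 3ware,8 /dev/twa0   # queue an offline long self-test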

