Search Results

Search found 9845 results on 394 pages for 'ntp servers'.


  • sudo or acl or setuid/setgid?

    - by Xavier Maillard
    For a reason I do not really understand, everyone wants sudo for anything and everything. At work we even have as many sudo entries as there are ways to read a logfile (head/tail/cat/more, ...). I think sudo is self-defeating here. I'd rather use a mix of setgid/setuid directories and add ACLs here and there, but I really need to know what the best practices are before starting. Our servers have %admin, %production, %dba, %users, i.e. many groups and many users. Each service (mysql, apache, ...) has its own way of installing privileges, but members of the %production group must be able to consult configuration files or even log files. There is still the option of adding them to the right groups (mysql, ...) and setting the right permissions, but I do not want to usermod all users, and I do not want to modify the standard permissions since they could change after each upgrade. On the other hand, setting ACLs and/or mixing setuid/setgid on directories is something I could easily do without "defacing" the standard distribution. What do you think about this? Taking the mysql example, it would look like this:

        setfacl -m d:g:production:rx,d:other::---,g:production:rx,other::--- /var/log/mysql /etc/mysql

    Do you think this is good practice, or should I definitely usermod -G mysql and play with the standard permissions system? Thank you
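
    A minimal sketch of the ACL route, assuming the paths and group name from the question; the recursive flags and the getfacl check are only there to apply and verify it (adjust group and paths as needed):

        # give %production read access to existing mysql config and logs
        setfacl -R -m g:production:rX /etc/mysql /var/log/mysql
        # and a default ACL so newly created files inherit the same access
        setfacl -R -m d:g:production:rX /etc/mysql /var/log/mysql

        # verify what was applied
        getfacl /var/log/mysql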

    Read the article

  • Public Folders - Delete Public Folders from 2003 after migrating to 2010 (via Adsiedit) - safe?

    - by HeavenCore
    Similar question: How do I delete a public store in Exchange 2003? We are ready to remove our Exchange 2003 server after having migrated all public folders and mailboxes to 2010. We ran for a week with the Exchange 2003 server shut down and everything seemed to work. When I try to delete the PF database from 2003 it says it contains replicas. While migrating I only had one-way sync working (from 2003 to 2010), so I believe 2003 hasn't received the responses from 2010 saying the replicas were removed. When I look in Public Folders on the 2003 box none are listed; when I look in PF Instances they are all listed. I know everything has moved to the 2010 server, and I know 2010 is not showing the 2003 server as a replica for any folders. I am looking to use ADSI Edit to remove the public folder database from the 2003 server, but want to ensure I am going to delete the right thing so that the folders do not get deleted from the 2010 database. Should I delete Configuration > Services > Microsoft Exchange > [Company Name] > Administrative Groups > First Administrative Group > Servers > [Server Name] > Information Store > First Storage Group > Public Folder Store ([Server Name])? Or something else? I have checked, and the only public folder with the old Exchange server listed as a replica is SYSTEM CONFIGURATION. Thanks in advance.

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API tools from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now when I want to run an instance built from this AMI, where does the instance run: in Amazon's Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions); how would I do that? It seems I have a poor understanding of the relationship between Amazon volumes and instances - please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error:

        Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on my local computer - I want it to run on Amazon's servers.
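
    The error is about the requested EC2 instance type, not the local machine (the instance always runs in EC2, never on the computer issuing the command). Since the AMI/AKI is x86_64, a 64-bit instance type has to be requested; a hedged sketch with the classic EC2 API tools, where the AMI ID, key pair name and region are placeholders:

        # launch the 64-bit AMI on a 64-bit instance type, in the region that holds the AMI
        ec2-run-instances ami-12345678 -t m1.large -k my-keypair --region eu-west-1

        # to use a different region, the snapshot/AMI first has to be copied or re-registered there
        ec2-describe-regions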

    Read the article

  • Sendmail slow to accept emails

    - by Rich
    I have a PHP web app which uses SMTP to sendmail on localhost to send email. I would like sendmail to accept the mail request immediately and queue it for later sending, as I don't want user-facing request threads blocked on emails. Sendmail is installed with the default settings on RHEL web servers. Sometimes sendmail blocks for a long time after the MAIL command is sent - sometimes taking 60 or 90 seconds to accept the mail. The time taken is usually very close to 60 or 90 seconds, which makes me think this is some kind of timeout. I have looked in the sendmail logs, and there are plenty of "deferred" emails, but nothing which looks responsible for this delay. How can I diagnose what is slowing down sendmail? How can I configure sendmail to always accept the mail immediately and queue it for later sending? Update: I'm not sure, but it looks like this might be linked to aol.com addresses. I strongly suspect that sendmail is doing some kind of blocking recipient address verification at the accept-email-for-sending stage. How can I disable that, so that sendmail doesn't block my UI threads? Update 2: This only seems to happen at busy times. Perhaps I am running out of sendmail threads or something? How can I check that?
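
    A hedged sketch of the usual sendmail.mc knobs for "accept now, deliver later"; defer mode also postpones DNS/map lookups on addresses (which is what tends to stall the MAIL/RCPT stage), so it is worth testing that it doesn't hide real configuration errors:

        dnl skip ident lookups at connect time
        define(`confTO_IDENT', `0s')dnl
        dnl 'defer' = queue the message and postpone delivery and DNS/map lookups
        define(`confDELIVERY_MODE', `defer')dnl

        # then rebuild the config and restart sendmail
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart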

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

        1. basic OS installation
        2. basic network configuration, enough to reach the internet and resolve DNS
        3. install puppet and set up certname
        4. start puppet and let it manage the whole configuration
        5. test, fix problems in the config (via puppet), re-test, and so on...
        6. manually stop puppet
        7. set up the new network configuration for the datacenter network
        8. move the machine to the DC and turn it on
        9. puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying the configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
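
    For reference, a minimal sketch of what step 3 would look like; the hostname is a placeholder, and the setting can live in [main] or [agent]:

        # /etc/puppet/puppet.conf, written at provision time, before the first
        # agent run, so the certificate request is generated with this name
        [agent]
            certname = web01.example.com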

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server which is an AWS instance that just cannot download files from one specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the basic Linux ftp client after login:

        ---> SYST
        215 UNIX Type: Apache FtpServer
        Remote system type is UNIX.
        ftp> get outgoing/catalog.gz catalog.gz
        local: catalog.gz remote: outgoing/catalog.gz
        ---> PASV
        227 Entering Passive Mode (64,156,167,125,135,191)
        ---> RETR outgoing/catalog.gz
        150 File status okay; about to open data connection.

    That's it. It then just sits there and nothing transfers. I have verified that a data connection is made, but the client gets no data:

        $ ss -nt dst 64.156.167.125
        State      Recv-Q Send-Q      Local Address:Port        Peer Address:Port
        ESTAB      0      0          10.185.147.150:41190     64.156.167.125:21
        ESTAB      0      0          10.185.147.150:48871     64.156.167.125:48557

    The FTP server is not under my control, and downloads from other FTP servers in passive mode have worked. Active mode does not work, as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group (not necessarily the same distro or config, though). I understand it may be some issue on the server side, but I want to know what it is about my particular machine that makes the transfer hang, when on every other machine I can get my hands on it works. Please let me know what the culprit on the client side could be, or give me ideas on what else to look at.
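
    One way to see where the data stops is to capture both connections while repeating the transfer; a hedged sketch, with the server address taken from the question and eth0 assumed as the instance's interface (an MTU/blackhole problem on the data connection is one of the usual suspects and can be tested by temporarily lowering the MTU):

        # watch the control connection and the passive data connection
        sudo tcpdump -ni eth0 host 64.156.167.125

        # temporary test only: clamp the interface MTU and retry the download
        sudo ip link set dev eth0 mtu 1400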

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem). Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying off some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match). I've tried copying directly from the BD-ROM (attaching the OS drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.

    Read the article

  • Throughput and why do ISPs sell too much bandwidth?

    - by jonescb
    I hope the question makes sense the way I worded it. :) I've been wondering: maximum theoretical TCP throughput is measured as RWIN/RTT (window size / round trip time) - Source 1 and Source 2. So if a major city only 100 miles away gives me a ping of 50ms, and I have the default 64KB TCP window size, then my maximum throughput will be 12.5Mb/s. Everything further away would give me a higher ping and therefore lower throughput. Is there any reason to buy something like FiOS with a 50Mb/s or greater connection? Will you ever be able to reach that kind of speed? I know you can increase the TCP window size to increase throughput, but it has to be done at both ends, which is a deal breaker because you can't control the server. I'm assuming other network protocols like UDP aren't quite as affected by latency as TCP is, but how much of overall network traffic is non-TCP vs. TCP? Am I just misguided about how throughput works? If the above is correct, then why should a consumer like me buy way more bandwidth than can realistically be used? Maybe the only reason is downloading multiple things at once, or one thing from multiple servers/peers?
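
    As a worked version of that calculation (it also shows the quoted 12.5 Mb/s is a little optimistic for a true 64 KiB window; TCP window scaling, negotiated automatically by modern stacks when both ends support it, is what raises this ceiling):

        # max single-stream TCP throughput ~= window / RTT
        # 64 KiB window, 50 ms RTT:
        echo 'scale=2; (65536 * 8) / 0.050 / 1000000' | bc
        # => 10.48   (Mbit/s)

        # window needed to fill a 50 Mbit/s link at 50 ms RTT:
        echo '50000000 * 0.050 / 8' | bc
        # => 312500  (bytes, roughly 305 KiB)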

    Read the article

  • /etc/hosts: What is loghost? (fresh install of Solaris 10 update 9)

    - by cjavapro
        #
        # Internet host table
        #
        ::1             localhost
        127.0.0.1       localhost
        XX.XX.XX.XX     myserver loghost

    What is the purpose of loghost? If it weren't for having loghost in there, all the /etc/hosts files on all the servers in this particular network could be identical.

    Edit: I looked at /etc/syslog.conf:

        #ident  "@(#)syslog.conf        1.5     98/12/14 SMI"   /* SunOS 5.0 */
        #
        # Copyright (c) 1991-1998 by Sun Microsystems, Inc.
        # All rights reserved.
        #
        # syslog configuration file.
        #
        # This file is processed by m4 so be careful to quote (`') names
        # that match m4 reserved words.  Also, within ifdef's, arguments
        # containing commas must be quoted.
        #
        *.err;kern.notice;auth.notice                   /dev/sysmsg
        *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
        *.alert;kern.err;daemon.err                     operator
        *.alert                                         root
        *.emerg                                         *
        # if a non-loghost machine chooses to have authentication messages
        # sent to the loghost machine, un-comment out the following line:
        #auth.notice                    ifdef(`LOGHOST', /var/log/authlog, @loghost)
        mail.debug                      ifdef(`LOGHOST', /var/log/syslog, @loghost)
        #
        # non-loghost machines will use the following lines to cause "user"
        # log messages to be logged locally.
        #
        ifdef(`LOGHOST', ,
        user.err                                        /dev/sysmsg
        user.err                                        /var/adm/messages
        user.alert                                      `root, operator'
        user.emerg                                      *
        )

    Very interesting. When shutting down, alerts go to all users, probably through the "*.emerg  *" line. Looking at ifdef, it seems that the first parameter checks whether the current machine is a loghost, the second parameter is what to do if it is, and the third parameter is what to do if it is not.

    Edit: If you want to test a logging rule you can use svcadm restart system-log to restart the logging service, and then logger -p notice "test" to send a test log message, where notice can be replaced with any facility.level such as user.err, auth.notice, etc.

    Read the article

  • W2003StdR2 server: DNS dysfunctional!

    - by Tor
    I hate to have to do this, but I feel up that creek with no... well, some of you might know. At the moment my one and only DNS server refuses to do forwarding. The story goes like this: this site had 2 servers, one W2003SBS and a W2003StdR2. The SBS degraded over a short period of time, and so as not to go down with it I decided to move all data over to the other server. This was of course an AD-integrated site. The move went OK, the Std server was removed from the domain, and the SBS was put to rest. For the time being we decided to run the Std box as a server only, with no AD. We renamed the internal domain to xxx.local, set the server up with DNS and DHCP, and installed WINS (not activated). DNS forwarding goes to our ISP through a Netgear firewall. The same address setup is used as before. So - the DNS server started and all went OK, clients were reconfigured and hooked up, and then - after a day's time - internet name resolution stopped working on the server! Nothing had changed, been altered, or modified - nothing! What I now get when doing NSLOOKUP is just a 2-second timeout response! And I have checked and looked, but to no avail. Has anybody seen this behaviour before? And yes - ALL service packs have been applied to the server. I would be much obliged if anyone in here could lend an ear... and give advice! Thanks... from Tor in Norway. Update: today is the 14th, and I still have no resolution to this nagging problem. Anybody else got any advice in the matter? Please?
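
    A couple of checks from the server's own command prompt can at least separate "the forwarder is unreachable" from "the DNS service is misbehaving"; a hedged sketch (the 208.67.222.222 resolver is just a placeholder for any public or ISP DNS server you can test against, and dnscmd comes from the Windows Support Tools):

        rem can the box resolve when asked to query an outside resolver directly?
        nslookup www.example.com 208.67.222.222

        rem flush the resolver cache and the DNS server cache, then retry
        ipconfig /flushdns
        dnscmd /clearcache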

    Read the article

  • How do I setup routing for two companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: Two companies (A & B) share office space and a LAN. A 2nd ISP is brought in; company A wants its own Internet connection (ISP A) and company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to hand each company its own gateway as the default gateway for its Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN, so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.) Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch to look at the source address and forward traffic to the appropriate next hop? In that case, I want the routing logic to go like this: if the destination address is known, forward the traffic (traffic destined for a different VLAN); if the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or to ISP B's gateway if the source address is on VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
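
    The source-based logic described above is exactly what policy routing does; a hedged sketch of the same idea in Linux iproute2 terms (addresses, VLAN interfaces and table names are placeholders, and most L3 switches expose the equivalent as "policy-based routing" with route maps):

        # name two extra routing tables, one per ISP
        echo "101 ispa" >> /etc/iproute2/rt_tables
        echo "102 ispb" >> /etc/iproute2/rt_tables

        # each table gets its own default gateway
        ip route add default via 192.0.2.1    table ispa   # ISP A
        ip route add default via 198.51.100.1 table ispb   # ISP B

        # the rules below are consulted before the main table, so copy the
        # inter-VLAN routes into both tables to keep A<->B traffic local
        # (add the VoIP subnet the same way)
        ip route add 10.1.0.0/24 dev vlan1 table ispa
        ip route add 10.2.0.0/24 dev vlan2 table ispa
        ip route add 10.1.0.0/24 dev vlan1 table ispb
        ip route add 10.2.0.0/24 dev vlan2 table ispb

        # pick the table by source subnet (company A = VLAN 1, company B = VLAN 2)
        ip rule add from 10.1.0.0/24 lookup ispa
        ip rule add from 10.2.0.0/24 lookup ispb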

    Read the article

  • Scaling a node.js application, nginx as a base server, but varnish or redis for caching?

    - by AntelopeSalad
    I'm not close to being well versed in using nginx or varnish, but this is my setup at the moment. I have a node.js server running which serves either JSON, HTML templates, or socket.io events. Then I have nginx running in front of node, serving all static content (css, js, etc.). At this point I would like to cache both static content and dynamic content in memory. It's my understanding that varnish can cache static content quite well, and it wouldn't require touching my application code. I also think it's capable of caching dynamic content too, but there cannot be any cookie headers? I do use redis at the moment for holding session data, and I planned to use it for other things in the future, like keeping track of non-crucial but fun stats. I just have no idea how I should handle caching everything on the site. I think it comes down to these options, but there might be more:

        1. Throw varnish in front of nginx and let varnish cache static pages - no app code changes. Redis would cache dynamic db calls, which would require modifying my app code.
        2. Ignore varnish completely and let redis handle caching everything, then use one of the nginx-redis modules. I'm not sure if this would require a lot of app code changes (for the static files).

    I'm not having any luck finding benchmarks that compare nginx+varnish vs nginx+redis, and I'm too inexperienced to bench it myself (high chance of my configs being awful). I'm basically looking for the solution that would be the most efficient in terms of req/sec and scalable in the future (throw new hardware at the problem + maybe adjust some values in a config = new servers up and running semi-painlessly).
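
    For the "no app code changes" end of the spectrum, nginx itself can micro-cache the dynamic responses from node as well as serve the static files; a rough sketch, where paths, zone sizes and the 1-second TTL are illustrative rather than tuned values (socket.io traffic would need its own, uncached location):

        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=256m;

        upstream node_app {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;

            # static assets straight from disk, with long client-side caching
            location /static/ {
                root    /srv/www/myapp;
                expires 7d;
            }

            # short-lived "micro cache" for dynamic responses
            location / {
                proxy_pass         http://node_app;
                proxy_cache        appcache;
                proxy_cache_valid  200 1s;
                proxy_cache_bypass $http_cookie;   # don't serve cached pages to requests carrying cookies
                proxy_no_cache     $http_cookie;   # and don't store responses for them either
            }
        }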

    Read the article

  • pam_ldap.so before pam_unix.so? Is it ever possible?

    - by user1075993
    We have a couple of servers with PAM+LDAP. The configuration is standard (see http://arthurdejong.org/nss-pam-ldapd/setup or http://wiki.debian.org/LDAP/PAM). For example, /etc/pam.d/common-auth contains:

        auth    sufficient      pam_unix.so nullok_secure
        auth    requisite       pam_succeed_if.so uid >= 1000 quiet
        auth    sufficient      pam_ldap.so use_first_pass
        auth    required        pam_deny.so

    And, of course, it works for both LDAP and local users. But every login goes first to pam_unix.so, fails, and only then tries pam_ldap.so successfully. As a result, we get the well-known failure message for every single LDAP user login:

        pam_unix(<some_service>:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=<some_host> user=<some_user>

    I have up to 60000 such log messages per day, and I want to change the configuration so that PAM tries LDAP authentication first, and only if it fails tries pam_unix.so (I think it could improve the I/O performance of the server). But if I change common-auth to the following:

        auth    sufficient      pam_ldap.so use_first_pass
        auth    sufficient      pam_unix.so nullok_secure
        auth    required        pam_deny.so

    then I simply can't log in anymore with a local (non-LDAP) user (e.g. via ssh). Does somebody know the right configuration? Why do Debian and nss-pam-ldapd put pam_unix.so first by default? Is there really no way to change it? Thank you in advance. P.S. I don't want to disable the logs; I want to put LDAP authentication in first place.
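
    A hedged sketch of a Debian-style stack that asks pam_ldap first and falls back to pam_unix for local accounts (the [success=N default=ignore] jumps are the same mechanism pam-auth-update generates; test from a console session you keep open, since a mistake in this file can lock out all logins, and note that local users will now produce a pam_ldap "user not found" message instead):

        auth    [success=2 default=ignore]      pam_ldap.so
        auth    [success=1 default=ignore]      pam_unix.so nullok_secure use_first_pass
        auth    requisite                       pam_deny.so
        auth    required                        pam_permit.so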

    Read the article

  • What are possible results/side effects if replication between DC's in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there about how to properly manage Windows servers. But in real life, things don't always occur the way you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages there is only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running and everything seemed OK. The next day, nobody was able to log on, complaining that the "user does not exist" or "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read that you can force replication, but that would mean putting the DC back on the network, and I am afraid that something else could go wrong if I do. So, what other issues could one expect to run into when two DCs have been unable to replicate for over a month?
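
    For what it's worth, the standard tools for judging replication health before and after reconnecting a DC are repadmin and dcdiag (both ship with the Windows Server 2003 Support Tools); the server name below is a placeholder. A DC that has been offline longer than the forest's tombstone lifetime generally has to be rebuilt rather than reconnected, so that is worth checking first:

        rem summary of replication state across all DCs
        repadmin /replsummary

        rem per-partner detail for one DC
        repadmin /showrepl DC01

        rem general health tests, including SYSVOL/FRS
        dcdiag /v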

    Read the article

  • How to route outbound traffic to specific domain "XYZ.org" via a specific NIC or public/static IP?

    - by user139943
    Within the next week or so, I'll be setting up an AT&T U-verse modem with 5 usable static public IP addresses. I plan to register a domain name to 1 of the 5 static IPs (leaving the remaining 4 unregistered), and run a website from a single server set up in my home LAN. I'll skip the long-winded reason why, but I need to somehow route outbound traffic (originating from my server) destined for one public domain (i.e. http://www.sample.org) through one of the UNREGISTERED static IP addresses ONLY. Basically, I want this public domain to see connections coming from a bare IP address and not from my domain name. If it makes it easier, this can apply to all outbound traffic from my server, as long as it doesn't impact users browsing my website! Inbound connections should go through the domain name / registered public IP. Can I accomplish this with my single server with one or multiple NICs? Do I need multiple servers, with one set up as a proxy? Please help, as my background is in software and not networking, and I don't think I can accomplish this at a software level (e.g. Java). Thanks.
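
    If the static IPs end up bound directly to the server (for example a Linux box with several addresses on one NIC), choosing the source address for just that destination is enough; a hedged sketch with placeholder addresses (if the U-verse gateway does the NAT instead, the equivalent source-selection rule has to be configured there):

        # NAT traffic for that one destination so it leaves with the unregistered IP
        iptables -t nat -A POSTROUTING -d 203.0.113.50 -j SNAT --to-source 198.51.100.12

        # alternative without NAT: set a preferred source address on the route
        ip route add 203.0.113.50/32 via 198.51.100.1 src 198.51.100.12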

    Read the article

  • Mail server checklist

    - by Jeff
    We recently ran into some issues with our mail server setup. I'm preparing a list of actions that we should enforce and use in order to maintain a proper email solution within our company. We have around 80 Exchange users, and send mass emails out almost on a monthly basis to 20,000+ customers each time. The checklist I currently have:

        1. McAfee MX Logic 'cloud' anti-spam functionality for incoming messages.
        2. Antivirus on each computer in the company.
        3. Antivirus on the Exchange and DNS servers.
        4. Set up an SPF record.
        5. Set up DKIM.
        6. Set up DomainKeys.
        7. Set up Sender ID.
        8. Submit the SPF record to Microsoft, Yahoo, etc. for 'whitelist' purposes.
        9. Configure size limits for messages in Exchange to safe numbers.
        10. I have 2 outside IPs for my email server; in case one gets blacklisted, switch to the backup.
        11. My internet site rests on a different IP than the mail server.
        12. All mass emails for the company are sent through a 3rd-party company (listtrak.com).
        13. Set up domain alias, media, enews, and bounce for the 3rd-party mass mail software.
        14. Verify the setup using [email protected].
        15. Configure group policy and our opendns.org account to prevent unwanted actions and website viewing.

    Mass emails:

        1. Schedule them to send different amounts at different times (1,000 at 10am, 1,000 at 4pm, 1,000 at 10am the next day).
        2. Set up user preferences - let recipients decide what they want to receive (their interests).
        3. Send a steadier flow of email, maybe 100 a week with top new products, instead of 20,000 every other month.

    If anyone has suggestions or additions/subtractions to this checklist, they are greatly appreciated. Thank you.
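
    As an illustration of item 4, an SPF record is just a TXT record published on the sending domain; a hedged example with placeholder host names and addresses (the include mechanism for the mass-mail provider would come from their documentation):

        example.com.   IN TXT   "v=spf1 mx ip4:203.0.113.10 ip4:203.0.113.11 include:_spf.bulk-provider.example ~all"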

    Read the article

  • Firefox Does NOT get local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003). I have sites on all of these, and they all use cookies. On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, and it will not replay the logo animation (top left-hand side) when clicking through to another page. This works for all browsers on the production server. I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This happens when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine. You can test on the Supernova Interactive site by watching the logo in the top left corner. It uses a cookie to detect if you've already seen the animation; once you've seen it, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009 Any ideas on this one? Firewalls are off on the server. Notice that you do not get a cookie from exchange.supernova.com.

    Read the article

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for: I want the command to run only if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I want puppet to update the web site files, but in order to prevent issues where some files have been updated and others haven't, with the mixed versions causing problems, I want to take the server out of the load balancer pool first. I could write a script which, when run, tells the load balancer to remove the box from the pool. Then puppet can make the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box: a staging copy which puppet updates, with a notify action that triggers the removal script and then copies from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).
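
    There doesn't appear to be a built-in "run only when something will change" hook beyond prerun_command, but the staging idea from the question can be expressed directly in the manifest; a rough sketch, where the two lb_*.sh scripts are hypothetical wrappers around the load balancer's API and the paths are placeholders:

        # staged copy of the site; any change to it triggers the drain/deploy/rejoin chain
        file { '/srv/staging/site':
          ensure  => directory,
          recurse => true,
          source  => 'puppet:///modules/site/htdocs',
          notify  => Exec['drain-from-lb'],
        }

        exec { 'drain-from-lb':
          command     => '/usr/local/bin/lb_remove_node.sh',
          refreshonly => true,
          before      => Exec['deploy-live'],
        }

        exec { 'deploy-live':
          command     => '/usr/bin/rsync -a --delete /srv/staging/site/ /var/www/site/',
          refreshonly => true,
          subscribe   => File['/srv/staging/site'],
          notify      => Exec['rejoin-lb'],
        }

        exec { 'rejoin-lb':
          command     => '/usr/local/bin/lb_add_node.sh',
          refreshonly => true,
        }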

    Read the article

  • rDNS for SMTP server locally with Mail hosted by third party

    - by Zleviticus
    OK, we have a difference of opinion on something and wanted to get some expert advice. We host the mail for our main domain "OurDomain.net" with a third-party mail provider. We have an in-house application that has to be able to send mail out to our clients. The problem is that sometimes the mail is flaky and stops users from functioning in the program for 30 seconds or more; it appears to lock up. We have determined that the issue is with the mail piece. One solution is to use Database Mail to queue up outbound emails for sending. The other is to set up an internal SMTP server and send mail out through it. My fear is that we will not be able to get rDNS to work properly and most of the mail will be blocked by our various clients' spam filters. Is it possible to set up DNS for the servers so that we can send mail out like [email protected] using the in-house SMTP server and still pass the rDNS checks that spam filters normally apply? Enquiring minds want to know.
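
    Worth noting that reverse DNS for the sending IP is delegated to whoever owns the address block (usually the ISP), not to the OurDomain.net zone, so it is set by requesting a PTR record from them. The checks most filters run can be previewed ahead of time; a hedged sketch with placeholder names and addresses:

        # does the sending IP have a PTR record, and does that name resolve back to the IP?
        dig -x 203.0.113.25 +short
        dig +short mail.ourdomain.net

        # many filters also want the HELO name, PTR and A record to line up,
        # and an SPF record authorising the new sending IP
        dig +short TXT ourdomain.net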

    Read the article

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (just for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application used at my workplace, so I compared with CentOS and Debian based computers at home last night. The CentOS box, which runs 24 hours a day, had a modification time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, which was a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from, and STDOUT to, it, and the modification time was unchanged (I'm not 100% sure whether directing data to /dev/null constitutes "writing to it" in the way it would for a normal file). So my question is essentially: could anybody please offer any advice with regard to what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.
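
    For the background problem (the device recreated as a plain file), the usual fix is to recreate it as a character device; a hedged sketch - the major/minor numbers differ between platforms (1,3 is the Linux pair), so copy them from a healthy box at the same OS level rather than assuming:

        # on a healthy machine, note the major,minor pair shown for /dev/null
        ls -l /dev/null

        # on the broken machine, assuming the healthy box showed major 1, minor 3
        rm /dev/null
        mknod /dev/null c 1 3
        chmod 666 /dev/null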

    Read the article

  • connections in FIN_WAIT and CLOSE_WAIT state

    - by Raj
    I would like to describe the setup in some detail so you can understand the question and answer more accurately. I have HAProxy as the load balancer, 4 web servers (Apache 2.2.3) and one database server (MySQL 5). I am monitoring these servers with Nagios. I have disabled keepalive on Apache, as we have only 8GB of memory. Now, whenever I receive alerts for high memory and CPU utilization, I observe that the connections from Apache to the database server hang in the ESTABLISHED state (keepalive with a timeout value of 7200), while on the other side the connections between HAProxy and Apache show the FIN_WAIT state on the HAProxy server and CLOSE_WAIT on the Apache side. I also see huge memory swapping, with Apache taking most of the memory. I ran strace on an Apache process and did not find any information; strace attaches to the Apache process but does not produce any output. The process list on the MySQL server shows those processes in sleep mode. The application on the web servers is Magento, a PHP application. If you need further information please let me know. Thanks.
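
    A quick way to quantify the problem while an alert is firing is to count connections per state on the HAProxy box and on an Apache box; a small sketch using standard tools:

        # connection count per TCP state
        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn

        # which apache children are sitting on the CLOSE_WAIT sockets
        ss -tanp state close-wait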

    Read the article

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers - from cPanel to Plesk to Virtualmin - and I am now considering ditching the CP altogether and manually editing config files. My requirements are fairly simple: I will host multiple sites on the server, some Apache with PHP & MySQL and some Passenger with Rails & Postgres. All will require SMTP/POP email. FTP/stats will not be required. Could someone please give me a quick run-down of what I would need to do, in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far:

        1. Install/update the latest versions of MySQL & Postgres (are they 'safe' out of the box, or do I need to do anything else, like set up root passwords?)
        2. Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?)
        3. Set up nameservers/hostnames/reverse DNS etc. (any guides on how to do this, please?)
        4. Install RubyGems.
        5. Install and configure Dovecot and Postfix (any tips on doing this, or links to how-tos that cover it, please?)
        6. Set up each website - any links to guides on how to do this?
        7. Install/configure a firewall (or is the default install good to go?)

    Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
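
    As a hedged sketch of the first two items on a minimal CentOS 6 install (package names are the stock ones from the base repos; everything else on the list builds on top of this):

        # web and database stack
        yum install -y httpd php php-mysql mysql-server postgresql-server

        # MySQL: set a root password and drop the anonymous/test accounts
        service mysqld start
        mysql_secure_installation

        # PostgreSQL: initialise the cluster before first start
        service postgresql initdb
        service postgresql start

        # start everything at boot; the default iptables policy stays on,
        # so open only the ports each site actually needs
        chkconfig httpd on
        chkconfig mysqld on
        chkconfig postgresql on
        service httpd start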

    Read the article

  • DBCC CHECKDB fails and quits job, ambiguous error message.

    - by ddono25
    I received a notice that DBCC CHECKDB for all databases on one of our servers has been failing the past four times it has been run. We don't have any data prior to that, but it doesn't look like it has been succeeding for a while. There are no errors in the log file, only:

        DBCC results for 'sys.sysxmlfacet'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 112 rows in 1 pages for object "sys.sysxmlfacet". [SQLSTATE 01000]

    I ran DBCC CHECKDB using sp_MSForEachDB to get more accurate results and got the same error on the same DB, but at a different point:

        DBCC results for 'NameValuePair_Greek_CI_AS'. [SQLSTATE 01000]
        Msg 0, Sev 0, State 1: Unspecified error occurred on SQL Server. Connection may have been terminated by the server. [SQLSTATE HY000]
        There are 0 rows in 0 pages for object "NameValuePair_Greek_CI_AS". [SQLSTATE 01000]

    Also, the error log states that the DBCC completed without errors for this database. I can't figure out how to track down this ambiguous issue, which only happens on this one database out of the dozens on this server. Any help is appreciated!
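
    One way to narrow it down is to run the check for just that database interactively (SSMS or sqlcmd) rather than through the Agent job, with informational messages suppressed and all errors reported; a hedged sketch where the database name is a placeholder, and DBCC DBINFO is an undocumented but widely used way to read the date of the last clean check:

        -- full error output for the suspect database only
        DBCC CHECKDB (N'SuspectDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- look for the dbi_dbccLastKnownGood field in the output
        DBCC DBINFO (N'SuspectDb') WITH TABLERESULTS;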

    Read the article

  • If I ssh to a domain provided by dyndns, does my password go through them?

    - by D Connors
    I'm running Ubuntu on my work PC, and my workplace provides me with a static IP address but not with a domain. It's sometimes useful for me to connect to that PC through ssh, but it's not common enough for me to instantly remember the IP number. So I set up a DynDNS account and associated a short, intuitive domain name with that IP. Here's my question: when I try to ssh to the domain, it asks me:

        $ ssh [email protected]
        The authenticity of host 'something.there.foo (xx.xx.xx.xx)' can't be established.
        RSA key fingerprint is 'ALPHANUMERIC STRING'
        Are you sure you want to continue connecting (yes/no)?

    That surprised me a little bit. I have already registered the RSA fingerprint by connecting directly to the IP address. I thought the domain name was simply a convenient way of pointing me in the right direction (i.e. the IP address), but that message makes me think my data is actually going through their servers or something. Which one is it? Am I sending my password through someone else's server? Or is ssh just really, really careful, warning me even if the final destination is a known host? The ssh server I'm using is from the openssh-server package.
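
    A quick way to confirm that DynDNS only maps the name to your work IP (ssh still connects straight to that address; the prompt appears because known_hosts entries are indexed by the name you typed as well as by the IP):

        # the name should resolve to the work IP and nothing else
        dig +short something.there.foo

        # on the work PC: print the server's own fingerprint and compare it
        # with the one ssh showed you
        ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub

        # which known_hosts entries already exist for the IP vs the new name
        ssh-keygen -F xx.xx.xx.xx
        ssh-keygen -F something.there.foo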

    Read the article

  • How do I get to the bottom of network latency and bandwidth issues

    - by three_cups_of_java
    I recently moved two blocks south. That move took me from Comcast to Broadstripe (both high-speed cable internet providers). Comcast was pretty good. Broadstripe sucks. I called them on the phone, and they basically brushed me off (politely). I want to come to them with some numbers, so I can say more than just "it's really slow". I still have access to my old Comcast service, so I can run tests using both providers. Here's what I'm seeing with my new Broadstripe service:

        1. When I browse to most sites, there is a long delay (5-10 seconds) before the page starts loading in my browser.
        2. The speed test tells me I have 12 megs down (bullshit).
        3. I have a server at my office. I just downloaded some files (using scp on the command line), and it said I'm getting 3.5 KB/s.

    I'm an experienced programmer and spend most of my days on the command line and in vim. Networking, however, is not a strong point. I've played around with traceroute, but I'm not sure if it's the right tool to use. I have access to servers all over the country (I would just use Amazon EC2 to set up a test server), and I prefer to use Ubuntu for my testing. How can I come up with some hard numbers to show Broadstripe how crappy their service is?
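
    A hedged starter kit for collecting those numbers from an Ubuntu machine against a test server you control (an EC2 instance works; hostnames here are placeholders); run each command once per provider so the results are directly comparable:

        # DNS lookup time vs connect time vs total page time
        curl -o /dev/null -s -w 'dns:%{time_namelookup}s  connect:%{time_connect}s  total:%{time_total}s\n' http://www.example.com/

        # per-hop packet loss and latency over a hundred cycles
        mtr --report --report-cycles 100 test-server.example.com

        # raw TCP throughput (start "iperf -s" on the test server first)
        iperf -c test-server.example.com -t 30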

    Read the article
