Search Results

Search found 18728 results on 750 pages for 'setup deployment'.

Page 609/750

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content -- no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used for dynamic content needs some good tweaking for optimized performance, and since nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content.
    That apart, the confusion begins when it comes to choosing a web server for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux options here). So, which to choose?
    [1] I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
    [2] Lighttpd's development has evidently gotten slower than it was a good while ago.
    [3] The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.).
    Noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify this: is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.) If the answer is a big "hell, yes!" then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface plus really good documentation. Yes? I could be wrong, I could be right. I have put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.

    Read the article

  • How can I set up a touchscreen as the non-primary monitor in a dual monitor situation?

    - by bazzjedi
    I have been looking for information about using a second monitor, and thought of using a touchscreen. Will a touchscreen monitor work in a dual setup if it is not the main monitor? I haven't found any information on this; I've only found information about the touchscreen being the main monitor. That is, except at this SU question (snippet below): If you mean do the touch screen features still work on the other machine, the answer is yes. Past that, you can even see some of the touch screen features on a non touch screen monitor (just not the multi touch features!) For example, on the taskbar, click (without releasing) on any icon and then drag the mouse up, and you will see that it does the same as using your finger and dragging up. I have an HP 2510 for the main monitor. I'm thinking of adding an HP 2310ti (the touchscreen) as the secondary. My graphics card is a GeForce GTX 295. I'm running Windows 7 Professional.

    Read the article

  • Sharing a folder with Nautilus and NTFS external drive gets errors

    - by TheLQ
    I am trying to share a folder in Lubuntu over a network; the folder is on an external NTFS drive. Due to the system that I have (rotating backup disks), this is probably only the second time that the drive has been mounted. It's manually mounted with a simple (for example) mount /dev/sdb1 /media/BACKUP. On an internal NTFS disk I have successfully set up a network share and can access it. However, I can't access the share on the external disk from any other Windows computer. When setting up the share, Nautilus said that it needs to change the folder's permissions to allow other users to write. However, afterwards the permission setting is still blank; changing it to Read and Write just changes back to blank. Chowning the entire /media folder recursively and trying again didn't work. Running PCManFM as root and changing it didn't work. Adding "public=yes" to smb.conf and restarting didn't work. I'm out of ideas on what to do. What's weird is that it worked just fine on an internal NTFS disk, so why not the external one? Any solution needs to be manageable from a GUI (preferably Nautilus), as the person managing the machine isn't as tech savvy. Thanks
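
    One thing worth checking (a sketch, not a confirmed fix): NTFS does not store Unix ownership, so chown/chmod on an ntfs-3g volume have no lasting effect -- the effective owner and mode are fixed by the mount options. Remounting the backup disk with explicit uid/gid/umask values (the values below are assumptions for the local desktop user, and the share name is a placeholder) and sharing it with guest access often behaves the way the internal disk does:

        # remount the external disk so every file appears writable to the sharing user
        umount /media/BACKUP
        mount -t ntfs-3g -o uid=1000,gid=1000,umask=000 /dev/sdb1 /media/BACKUP

        # a minimal guest-writable stanza that could be appended to /etc/samba/smb.conf
        # [BACKUP]
        #    path = /media/BACKUP
        #    read only = no
        #    guest ok = yes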

    Read the article

  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up the iptables rules on our firewall in a way that is confusing me, and he didn't document any of it. I was hoping someone could help me make some sense of it. The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:
        # We have multiple ISP connections on our network.
        # The following is about 50+ rules to route incoming and outgoing
        # information. For example, certain internal hosts are specified here
        # to use ISP A connection while everyone else on the network uses
        # ISP B connection when accessing the internet.
        ip rule add from 99.99.99.99 table Whatever_0
        ip rule add from 99.99.99.98 table Whatever_0
        ip rule add from 99.99.99.97 table Whatever_0
        ip rule add from 99.99.99.96 table Whatever_0
        ip rule add from 99.99.99.95 table Whatever_0
        ip rule add from 192.168.1.103 table ISB_A
        ip rule add from 192.168.1.105 table ISB_A
        ip route add 192.168.0.0/24 dev eth0 table ISB_B
        # etc...
    and then near the end of the file, AFTER all the ip rules he just declared, he has this:
        /root/fw/firewall-rules.fw
    He's executing the firewall-rules file that was auto-generated by fwbuilder. Some questions:
    1. Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Is there any advantage or necessity to this, or is it just a poorly organized way to implement firewall rules?
    2. Why is he declaring the ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that were the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense?
    3. Why is he executing all this at startup in rc.local instead of just using iptables-save to keep the firewall settings in /etc/sysconfig/iptables, which gets applied at startup?
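
    On the last question, a sketch of the usual CentOS/Fedora approach, assuming you want the fwbuilder ruleset to become the persistent one: load the generated rules once, then save the live ruleset so the iptables service restores it at boot. Note that the ip rule / ip route lines are policy routing, not iptables, so iptables-save cannot capture them -- that is one plausible reason they live in rc.local.

        /root/fw/firewall-rules.fw   # load the fwbuilder-generated ruleset once
        service iptables save        # writes the running rules to /etc/sysconfig/iptables
        chkconfig iptables on        # the iptables service restores that file at every boot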

    Read the article

  • SharePoint extranet security concerns, am I right to be worried?

    - by LukeR
    We are currently running MOSS 2007 internally, and have been doing so for about 12 months with no major issues. There has now been a request from management to provide access from the internet for small groups (initially) which are comprised of members from other community organisations like ours -- committees and the like. My first reaction was not joy when presented with this request; however, I'd like to make sure the apprehension is warranted. I have read a few docs on TechNet about security hardening with regard to SharePoint, but I'm interested to know what others have done. I've spoken with another organisation which has already implemented something similar, and they have essentially port-forwarded from the internet to their internal production MOSS server. I don't really like the sound of this. Is it advisable/necessary to run a DMZ-type configuration, with a separate web front end on a contained network segment? Does that even offer me any greater security than their setup? Some of the configurations from the TechNet docs aren't really feasible, given our current network budget. I've already made my concerns known to management, but it appears it will go ahead in some form or another. I'm tempted to run a completely isolated, separate install just for these types of users. Should I even be concerned about it? Any thoughts or comments would be most welcomed at this point.

    Read the article

  • Additional Security Measures for Syslog over SSH

    - by Eric
    I'm currently working on setting up some secure syslog connections between a few Fedora servers. This is my current setup:
        192.168.56.110 (syslog-server) <---- 192.168.57.110 (syslog-agent)
    From the agent, I am running this command:
        ssh -fnNTx -L 1514:127.0.0.1:514 [email protected]
    This works just fine. I have rsyslog on the syslog-agent pointing to @@127.0.0.1:1514 and it forwards everything to the server correctly on port 514 via the tunnel. My issue is, I want to be able to lock this down. I am going to use ssh keys so this is automated, because there will be multiple agents talking to the server. Here are my concerns:
    1. Someone getting on the syslog-agent and logging into the server directly. I have taken care of this by ensuring that syslog_user has a shell of /sbin/nologin, so that user can't get a shell at all.
    2. I don't want someone to be able to tunnel another port over ssh, e.g. 6666:127.0.0.1:21. I know my first line of defense against this is to just not have anything listening on those ports, and that's not an issue. However, I want to be able to lock this down somehow. Are there any sshd_config settings on the server that I can use to make it so that only port 514 can be tunneled over ssh?
    Are there any other major security concerns I'm overlooking at this point? Thanks in advance for your help/comments.
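
    For the second concern, a sketch (assuming the tunnel account is named syslog_user, as above): sshd's Match and PermitOpen directives restrict which forwarding targets a given user may request, so everything except the loopback syslog port gets refused.

        # /etc/ssh/sshd_config on the syslog-server
        Match User syslog_user
            AllowTcpForwarding yes
            PermitOpen 127.0.0.1:514
            X11Forwarding no
            AllowAgentForwarding no
            GatewayPorts no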

    Read the article

  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On our current setup we're running into a problem with Varnish. We're running a CentOS 5.7 x86_64 xenpv VPS, with cPanel WHM, hosted at VPS.net. Sometimes we will receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command:
        varnishlog -d -c -m TxStatus:503
    it returns output similar to the following:
        15 VCL_call     c recv
        15 VCL_acl      c NO_MATCH devs
        15 VCL_return   c pass
        15 VCL_call     c hash
        15 Hash         c ****
        15 Hash         c *************
        15 VCL_return   c hash
        15 VCL_call     c pass pass
        15 Backend      c 12 default default
        15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
        15 VCL_call     c fetch hit_for_pass
        15 ObjProtocol  c HTTP/1.1
        15 ObjResponse  c OK
        15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
        15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
        15 ObjHeader    c X-Powered-By: PHP/5.3.9
        15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
        15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        15 ObjHeader    c Pragma: no-cache
        15 ObjHeader    c Content-Type: text/html; charset=utf-8
        15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
        15 FetchError   c chunked read_error: 12 (Could not get storage)
        15 VCL_call     c error deliver
        15 VCL_call     c deliver deliver
    As far as I could gather, we could try increasing the nuke_limit, but currently we have a nuke_limit of 500, and when running varnishstat -1 -f n_lru_nuked we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows that it is using 763m, although we've set it to be allowed to use 1200m. Any ideas of what the problem can be?
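
    A diagnostic sketch, under the assumption that "Could not get storage" means Varnish could not allocate space for the object body (for a hit_for_pass fetch this typically lands in the transient store rather than the main cache): check how full the storage areas are and how often objects are nuked, then raise the stores in the varnishd start-up options. The sizes below are examples, and the Transient syntax applies to Varnish 3.x.

        # filter the counters for storage usage and LRU nuking
        varnishstat -1 | egrep -i 'nuke|sma|sms|g_bytes|g_space'

        # possible varnishd storage options (e.g. in /etc/sysconfig/varnish):
        #   -s malloc,1200M -s Transient=malloc,256M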

    Read the article

  • AFP/SSH stopped working on OS X Server

    - by churnd
    I have 3 Mac OS X servers all bound to AD, all configured in the Golden Triangle setup. All 3 are completely separate from each other in terms of services, but all reside on the same internal network and are all bound to the same Active Directory domain. Two are 10.5.x (latest updates) and one is 10.6.3. Last weekend, all 3 simultaneously stopped allowing Active Directory users access to certain services, specifically AFP & SSH. SMB still works fine on all 3. I asked the AD admin if anything changed, and he said "Yes, we made a change to user accounts to toughen up security", and suggested I use [email protected] instead of just username. This still didn't work. I have completely removed one of my servers from AD, and re-joined, but this didn't work either. I can do kinit from the command line and get a Kerberos ticket. sudo klist -ke shows all services are configured to use the correct Kerberos principals. I have been scavenging the logs for any useful info. The AFP log just shows that I'm connecting and disconnecting. The DirectoryService.log shows stuff about misconfigured Kerberos hashes, but my research is showing that's not uncommon. /var/log/system.log isn't showing anything useful that I can see. I'm not sure where to go from here. Any help/ideas appreciated.
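
    A few read-only checks that can narrow things down (a sketch; the username and realm below are placeholders): confirm the AD binding still looks sane, that the search path resolves the account, and that a ticket can be obtained for that user from the affected server.

        dsconfigad -show
        dscl /Search -read /Users/someaduser AuthenticationAuthority
        kinit someaduser@EXAMPLE.LOCAL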

    Read the article

  • Windows VPN for remote site connection drawbacks

    - by Damo
    I'm looking for some thoughts on a particular way of setting up an estate of machines. We have a requirement to install machines into unmanned, remote locations. These machines will auto-login and perform tasks controlled from a central server. In order to manage patching, AV, updates etc., I want these machines to be joined to a dedicated domain for this estate. Some of the locations will only have 3G connectivity (via other hardware); others will be located on customer premises in internal networks. The central server (ours) and the domain controller will be on a public WAN. I see two ways of facilitating this:
    1. Install a router at each location and have a site-to-site VPN between the remote device and the data centre where the servers are located.
    2. Have the remote machine dial up and authenticate via a Windows VPN connection to the DC via RAS.
    Option one is more costly to set up and has a higher operational cost. It also offers better diagnostics if the remote PC goes down. Option two works well but is solely dependent on the VPN connection being made before any communication can be made to the remote machine. In a simple test, I got a Windows 7 machine to dial a VPN prior to authentication to the domain, then automatically log in to the machine using domain credentials. If the VPN connection drops, it redials. I can also create a timed task to auto-connect every hour in case of other issues. I'd like to know: why (if at all) is operating a remote network of devices located in various out-of-band locations in this way a bad idea? Consider 300-400 remote machines, all at different sites. I'd rather have 400 VPN connections to a 2008 server than 400 routers, but I'd like to know other opinions on this.
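
    For the redial/timed-task part of option two, a sketch (the connection name, credentials and interval are assumptions, and the phonebook entry would need to be created as an all-users connection for SYSTEM to dial it):

        rem dial the machine-wide VPN entry, and re-dial it every 5 minutes if it has dropped
        rasdial EstateVPN vpnuser P@ssw0rd
        schtasks /create /tn "Redial VPN" /sc minute /mo 5 /ru SYSTEM /tr "rasdial EstateVPN vpnuser P@ssw0rd"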

    Read the article

  • Mechanism behind user forwarding in ScriptAliasMatch

    - by jolivier
    I am following this tutorial to set up gitolite, and at some point the following ScriptAliasMatch is used:
        ScriptAliasMatch \
            "(?x)^/(.*/(HEAD | \
                    info/refs | \
                    objects/(info/[^/]+ | \
                             [0-9a-f]{2}/[0-9a-f]{38} | \
                             pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                    git-(upload|receive)-pack))$" \
            /var/www/bin/gitolite-suexec-wrapper.sh/$1
    And the target script starts with:
        USER=$1
    So I am guessing this is used to forward the user name from Apache to the suexec script (which indeed requires it). But I cannot see how this is done. The ScriptAliasMatch documentation makes me think that the /$1 will be replaced by the first matching group of the regexp before it. For me it captures from (?x)^/(.* to ))$, so there is nothing about a user here. My underlying problem is that USER is empty in my script, so I get no authorizations in gitolite. I give my username to Apache via basic authentication:
        <Location />
            # Crowd auth
            AuthType Basic
            AuthName "Git repositories"
            ...
            Require valid-user
        </Location>
    defined just under the previous ScriptAliasMatch. So I am really wondering how this is supposed to work, and what part of the mechanism I missed so that I don't retrieve the user in my script.
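
    A sketch of the mechanism in general CGI terms (not the tutorial's actual wrapper): the /$1 in the ScriptAliasMatch target is the regex back-reference -- the repository path -- and it reaches the script as PATH_INFO, not as the user. After Basic authentication succeeds, Apache exposes the login name to CGI programs as the REMOTE_USER environment variable, which is where a wrapper would normally pick it up.

        #!/bin/sh
        # hypothetical debug wrapper: show what Apache actually hands over
        USER="$REMOTE_USER"        # set by Apache after Basic auth
        REPO_PATH="$PATH_INFO"     # the part matched by the regex and appended as /$1
        echo "user=$USER repo=$REPO_PATH" >> /tmp/gitolite-debug.log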

    Read the article

  • Trouble with nginx and serving from multiple directories under the same domain

    - by Phase
    I have nginx set up to serve from /usr/share/nginx/html, and it does this fine. I also want it to serve from /home/user/public_html/map on the same domain. So:
    my.domain.com would get you the files in /usr/share/nginx/html
    my.domain.com/map would get you the files in /home/user/public_html/map
    With the below configuration (/etc/nginx/nginx.conf) it appears to be going to my.domain.com/map/map, as shown by this:
        2011/03/12 09:50:26 [error] 2626#0: *254 "/home/user/public_html/map/map/index.html" is forbidden (13: Permission denied), client: <edited ip address>, server: _, request: "GET /map/ HTTP/1.1", host: "<edited>"
    I've tried a few things but I'm still not able to get it to cooperate, so any help would be greatly appreciated.
        #######################################################################
        #
        # This is the main Nginx configuration file.
        #
        #######################################################################

        #----------------------------------------------------------------------
        # Main Module - directives that cover basic functionality
        #----------------------------------------------------------------------
        user              nginx;
        worker_processes  1;
        error_log  /var/log/nginx/error.log;
        pid        /var/run/nginx.pid;

        #----------------------------------------------------------------------
        # Events Module
        #----------------------------------------------------------------------
        events {
            worker_connections  1024;
        }

        #----------------------------------------------------------------------
        # HTTP Core Module
        #----------------------------------------------------------------------
        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;
            log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent "$http_referer" '
                              '"$http_user_agent" "$http_x_forwarded_for"';
            access_log  /var/log/nginx/access.log  main;
            sendfile        on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  _;
                #access_log  logs/host.access.log  main;

                location / {
                    root   /usr/share/nginx/html;
                    index  index.html index.htm;
                }

                location /map {
                    root   /home/user/public_html/map;
                    index  index.html index.htm;
                }

                error_page  404  /404.html;
                location = /404.html {
                    root   /usr/share/nginx/html;
                }

                error_page  500 502 503 504  /50x.html;
                location = /50x.html {
                    root   /usr/share/nginx/html;
                }
            }

            include /etc/nginx/conf.d/*.conf;
        }
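
    A sketch of the usual fix, assuming the doubled /map/map path in the error is the culprit: with root, nginx appends the full request URI to the path, so /map ends up under .../map/map; alias substitutes the location prefix instead. (The "13: Permission denied" part can also mean the nginx worker user lacks execute permission on /home/user, which is worth checking separately.)

        location /map/ {
            alias  /home/user/public_html/map/;
            index  index.html index.htm;
        }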

    Read the article

  • Change Windows Authentication user for Sql Server Management Studio

    - by Asmor
    We're using SQL Server 2005 with Windows Authentication set up. So normally, when you log in using e.g. SQL Server Management Studio, it forces you to log in as MACHINE_NAME\Username. Anyway, on this one particular computer, the person said she had to make a new account called User01 to do something, and showed me where she'd created it under Security in the "master" system database. And so now when she logs in, it's listed as MACHINE_NAME\User01 (not the actual Windows user name). It's still set to Windows Authentication, though, and I'm unable to change the login name. Now here's where the real problem comes in... I didn't realize that she was being logged in under this user name at the time, and I disabled it to see what would happen. Now I can't log into the server under her account. I created a new account in Windows called test, and as expected SSMS had the username as MACHINE_NAME\test, and I was able to log in fine. However, the area where the User01 account was listed is not visible to me as far as I can tell, and so I can't re-enable it. I also tried running the following query:
        alter login User01 ENABLE
    And got this error:
        Msg 15151, Level 16, State 1, Line 1
        Cannot alter the login 'User01', because it does not exist or you do not have permission.
    So in a nutshell, ideally I'd like to re-enable User01 somehow, just to get things back to where they used to be. Failing that, how can I force SSMS to log in using the Windows account name as it should be, rather than trying to use User01?
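
    A sketch of one way to locate and re-enable the login, assuming some other login (e.g. the MACHINE_NAME\test account) still has sysadmin rights -- the stored name may include a machine prefix, which is why listing it first helps:

        -- list anything resembling the disabled login
        SELECT name, type_desc, is_disabled
        FROM sys.server_principals
        WHERE name LIKE '%User01%';

        -- re-enable it using the exact name returned above (bracketed in case it contains a backslash)
        ALTER LOGIN [User01] ENABLE;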

    Read the article

  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cPanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor/complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find it unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled. Here is the reply from my host:
        ...as main domain have it's own separate DNS entry it have localhost entry which helps for looback connections where as subdomains don't have separate DNS zone, so it is not possible to create looback connections for it.
    I have cPanel access to the 'advanced zone editor' -- is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!)
        nslookup itours.memelab.com.au
        Server:   203.88.112.33
        Address:  203.88.112.33#53
        Non-authoritative answer:
        Name:     itours.memelab.com.au
        Address:  117.55.224.177
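
    If you have shell access on the account, a quick diagnostic sketch: a "loopback" here just means the web server requesting its own site URL, so the useful comparison is a normal request versus one forced onto 127.0.0.1 while keeping the virtual-host name.

        curl -I http://staging.memelab.com.au/
        curl -I -H "Host: staging.memelab.com.au" http://127.0.0.1/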

    Read the article

  • Centos iptables configuration for Wordpress and Gmail smtp

    - by Fabrizio
    Let me start off by saying that I'm a CentOS newbie, so all info, links and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to configure the server as default as possible, but I like it to be secure as well (no FTP, custom SSH port). While getting my WordPress site to run as desired, I'm running into some connection problems. Two things are not working:
    1. installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber)
    2. sending emails from my site using the Gmail SMTP (Failed to connect to server: Permission denied (13))
    I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic for port 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the terminal are working fine, though. After each change I restarted the iptables service. This is my iptables configuration (using vim):
        # Generated by iptables-save v1.4.7 on Sun Jun 1 13:20:20 2014
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 465 -j ACCEPT
        COMMIT
        # Completed on Sun Jun 1 13:20:20 2014
    Are there any (obvious) issues with my iptables setup considering the above-mentioned issues? Saying that the firewall is doing exactly nothing in this state is also an answer... And again, if you have any other suggestions for me to increase security (considering the basic things I do with this box), I would love to hear them, also the obvious ones! Thanks!
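
    A guess worth ruling out before touching iptables further: on CentOS 6, "Permission denied (13)" from PHP trying to open an outbound socket is often SELinux blocking the web server, independent of the firewall. A sketch of how to check and relax that one boolean:

        getenforce
        getsebool httpd_can_network_connect
        setsebool -P httpd_can_network_connect 1   # allow Apache/PHP to open outbound network connections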

    Read the article

  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me. When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP address? I'd want my application servers to exist in the range of 10.20.0.0/24, preferably being .1, .2, etc., so I can keep my sanity. I guess that the actual IP address is something set in Linux itself, and Xen can't touch that, but then what's the best practice for getting it done? If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC addresses? Do you just have to hardcode a large table of MAC addresses to IP addresses, and then always provision new guests with the correct MAC address on the virtual ethernet adapter? What we currently do is have an image of an "app server" that we boot up a new instance of, and then finalize it (with a script) that (among other things) modifies the /etc/network/interfaces file to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me.
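
    A sketch of the MAC-to-IP pattern, assuming ISC dhcpd and that the MAC is fixed on the guest's virtual NIC at provisioning time (names, MACs and the gateway below are placeholders; 00:16:3e is the OUI conventionally used for Xen virtual NICs):

        subnet 10.20.0.0 netmask 255.255.255.0 {
            option routers 10.20.0.254;
        }
        host appserver01 {
            hardware ethernet 00:16:3e:aa:bb:01;   # MAC assigned to the VM when it is created
            fixed-address 10.20.0.1;               # deterministic address for that guest
        }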

    Read the article

  • Re-installing Windows on an old laptop

    - by Khaled
    I have an old laptop and I want to re-install Windows XP on it. The problem is that this laptop does not have an optical drive. I checked the boot sequence in the BIOS. It does not show an option to boot from USB. It has only two options:
    1. Boot from HD.
    2. Boot using the Realtek agent (network boot).
    I tried to copy the Windows CD to the second drive D:\ and run the installer from there. However, I could not format the C:\ drive; Windows complains that setup files would be removed, or something like that. I tried to boot the laptop using PXE, but I could not. It seems that the DHCP request did not get answered. I thought I could use a USB CD-ROM drive (I don't have one to try), but it might not work as there is no option to boot from USB. Do you think it will work? Do I have other options to try? Any recommendations?

    Read the article

  • Switched from DVI to HDMI, possible audio artifacts?

    - by I take Drukqs
    I'm using an ASUS VH236H monitor and an EVGA GeForce 570 GTX, both of which are brand new. My monitor has an audio out port for speakers/headphones, so I plugged in my headphones and made a random selection from my library, when I noticed two things:
    1. There are static-like artifacts during "louder" parts of songs.
    2. There's what seems to be a volume cap in place. When I crank the volume past 100% in VLC the decibel level does not truly increase, but the amount of static does.
    The cable is not new; I yanked it off of my PS3 when my DVI cable broke. It has been used a good amount on my HDTV and PS3, so I doubt it's a matter of burn-in. I like the way the setup works with an HDMI cable as opposed to DVI because my headphones barely reach my rig, whereas I have plenty of slack when they're plugged into my monitor. Thanks in advance for any support. Note: I'm using a high quality HDMI cable from Monoprice, AKG K702 headphones, and VLC media player.

    Read the article

  • Mystery 0xc0000142 error on starting java from a service, as a different user.

    - by cpf
    This is a very convoluted setup, but effectively this is what goes down: a manager service (which I don't have control over) running as admin user X starts my executable, which then starts Java as user Y using the standard C# StartInfo.UserName/Password controls. Now, from a basic (not elevated or anything, just admin) command prompt I can run that executable, and Java pops up and works fine, running perfectly under the user it should be. When the service runs the same executable, however, Java silently fails. The only hint I see is this series of events in the event viewer:
    1. Service starts
    2. "Application popup: java.exe - Application Error: The application was unable to start correctly (0xc0000142). Click OK to close the application." (googling this reveals a lot of scam sites telling me to use their "free antivirus to fix 0xc0000142 errors easy!"... sigh)
    3. Service stops (the java shutdown propagated, which is supposed to happen)
    And here's what Process Explorer has for the failure: as you can see, everything shows as a success. Now, I think this might have something to do with permissions (the user java.exe is running under has traverse permission for the entire drive and full permissions to Directory A, which is where the .jar is), but I just can't fathom how something that works fine from the command line (and, this is an upgrade, the previous system without the user-switching aspect works fine from the service) can fail with such a cryptic message and so little showing up in the logs.
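
    A sketch of the launch settings that commonly matter when starting a process as another user from a service context (paths, user name and arguments are assumptions): 0xc0000142 is STATUS_DLL_INIT_FAILED, and a frequent trigger is the target user's profile/environment not being loaded, which LoadUserProfile addresses.

        using System;
        using System.Diagnostics;
        using System.Security;

        class Launcher
        {
            static void Main()
            {
                // build the password as a SecureString, as ProcessStartInfo requires
                var password = new SecureString();
                foreach (var c in "secret") password.AppendChar(c);

                var psi = new ProcessStartInfo
                {
                    FileName = @"C:\Program Files\Java\jre\bin\java.exe",
                    Arguments = "-jar app.jar",
                    WorkingDirectory = @"C:\DirectoryA",   // relative paths in the jar resolve from here
                    UserName = "Y",
                    Password = password,
                    Domain = Environment.MachineName,
                    UseShellExecute = false,               // required when supplying credentials
                    LoadUserProfile = true                 // load user Y's profile; often the difference vs. a service launch
                };
                Process.Start(psi);
            }
        }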

    Read the article

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there, I recently upgraded to Exchange 2010 and have a setup with two of my servers running CAS roles -- EXCH01 and EXCH02. EXCH02 also happens to have a mailbox role, and a lot of the users sit there. EXCH01 is my front-facing CAS server; it faces the net with SSL etc., and incoming mail moves through it as a hub transport server as well. As I was trying to lean things out in my VM environment, I removed the CAS role from EXCH02 and all hell broke loose. All the mail users with a mailbox on EXCH02 had their homeMTA set to a deleted items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually fixed these back to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts without Outlook working, so I just put things back the way they were in a hurry). I must add, as a strange thing on the side, that before I reset these to point at EXCH02 I tried EXCH01, and it failed. I still want to remove the CAS role from EXCH02, as it really should not have it (an error in install/planning on my part), and I would have thought that this would not cause the issues it did -- I assumed that since there was another CAS server in the admin group, all would be good. Was I wrong in my assumption? And what can I do to complete this successfully the second time round? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?

    Read the article

  • ADSL Modem/Router sometimes hands out incorrect IP addresses

    - by Peter Keevill
    My setup is as follows:
    - The main ADSL modem/router (switch) is configured as a DHCP server with the address range 192.168.0.25-60.
    - The office machines are configured with fixed IPs (not in the same address pool, of course) and hard-wired to this router.
    - A wireless access point (router) is connected to provide Internet access for guests in a separate area. This router is NOT configured as a DHCP server. Wireless authentication is turned off.
    - IP address lease times are set to 4 hours.
    Sometimes guests are able to connect to the wireless access point but are not given a valid IP; they get 169.x.x.x addresses. Rebooting their machines does not resolve the problem. The only way to resolve it is to reboot the main ADSL modem/router, which is often frustrating for other users who are successfully connected with a valid IP and default gateway. The problem seems to occur more frequently with Apple/Mac guests, although it also sometimes occurs with Windows machines. I personally use Ubuntu on my laptop and thus far have never had any problem connecting and getting a valid IP address in the guest area. One further point of note which may give a clue: certain guests (always Apple/Mac) get lease times of 90 days. However, this does not 'stack out' the number of available addresses, and of course rebooting the router clears them until the next time they log in.

    Read the article

  • Postfix SMTP-relay server against Gmail on CentOS 6.4

    - by Alex
    I'm currently trying to set up an SMTP relay server to Gmail with Postfix on a CentOS 6.4 machine, so I can send e-mails from my PHP scripts. I followed this tutorial, but I get this error output when trying to do a sendmail [email protected]. Output of tail -f /var/log/maillog:
        Apr 16 01:25:54 ext-server-dev01 postfix/cleanup[3646]: 86C2D3C05B0: message-id=<[email protected]>
        Apr 16 01:25:54 ext-server-dev01 postfix/qmgr[3643]: 86C2D3C05B0: from=<[email protected]>, size=297, nrcpt=1 (queue active)
        Apr 16 01:25:56 ext-server-dev01 postfix/smtp[3648]: 86C2D3C05B0: to=<[email protected]>, relay=smtp.gmail.com[173.194.79.108]:587, delay=4.8, delays=3.1/0.04/1.5/0.23, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.79.108] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 qh4sm3305629pac.8 - gsmtp (in reply to MAIL FROM command))
    Here is my main.cf configuration; I tried a number of different options but nothing seems to work:
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = ipv4
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = host.local.domain
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relayhost = [smtp.gmail.com]:587
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_type = cyrus
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_use_tls = yes
        smtpd_sasl_path = smtpd
        unknown_local_recipient_reject_code = 550
    In the /etc/postfix/sasl_passwd files (sasl_passwd & sasl_passwd.db) I have the following (I removed the real password and replaced it with "password"):
        [smtp.google.com]:587 [email protected]:password
    To create the sasl_passwd.db file, I ran this command:
        postmap hash:/etc/postfix/sasl_passwd
    Does anybody have an idea why I can't seem to send an e-mail from the server? Kind regards, Alex
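
    One detail worth checking (a sketch; the credentials below are placeholders): Postfix looks up the SASL credentials using the relayhost value as the key, so the entry in sasl_passwd must read [smtp.gmail.com]:587, whereas the file shown above uses [smtp.google.com]:587 -- a mismatched key means no credentials are found and Gmail answers "530 5.5.1 Authentication Required".

        echo '[smtp.gmail.com]:587 your.account@gmail.com:app-password' > /etc/postfix/sasl_passwd
        postmap hash:/etc/postfix/sasl_passwd
        chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
        postfix reload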

    Read the article

  • Enabled Network Discovery on Server, and now VNC and Squeezebox clients don't work

    - by Mike Hanson
    I've recently set up a Windows Server 2008 machine. It's running an email server, Squeezebox server, MS SQL Server, etc. I'm doing remote maintenance with UltraVNC. I had everything working fine. Then the server needed to access a network share on another machine, and I was prompted to turn on network discovery, which I did. I chose the Home rather than Public option. Since doing that, some things have stopped working, while others are still fine. Shared folders and the email services (ports 25 and 110) are still accessible. VNC (port 5900) and the Squeezeboxes (port 9000) no longer work. Here's what I've tried to solve the problem:
    - Checked the network discovery settings to see if anything looked strange.
    - Checked the firewall settings, and those ports appear to be open.
    - Also in the firewall settings, the entries for Private-profile Network Discovery were all on, but the Domain/Public ones were off. I tried turning those on.
    - In the services, turned on Function Discovery Resource Publication and SSDP Discovery.
    Any other suggestions?
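
    One thing worth trying (a sketch; the rule names are arbitrary, and the theory -- that changing the network profile left the existing VNC/Squeezebox exceptions scoped to the wrong firewall profile -- is an assumption): add explicit inbound rules for both ports across all profiles from an elevated command prompt.

        netsh advfirewall firewall add rule name="UltraVNC (TCP 5900)" dir=in action=allow protocol=TCP localport=5900 profile=any
        netsh advfirewall firewall add rule name="Squeezebox (TCP 9000)" dir=in action=allow protocol=TCP localport=9000 profile=any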

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500GB of data that's currently stored across my various machines. I don't care about data retention rates, as this is only a backup of, not primary storage for, my data. If the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games. I have found two services which meet most of this: Mozy and Carbonite. However, both services impose low bandwidth caps, on the order of 50 kb/s, which prevent me from backing up a data set of this size effectively (it would take somewhere on the order of 6 weeks), despite the fact that I get multiple MB/s upload speeds everywhere else from this location. Carbonite has the additional problem that it tries to ignore pretty much every file in my backup set by default, because the files are mostly iso files and vmdk files, which aren't backed up by default. There are other services such as EC2 which don't have such bandwidth caps, but such services are typically hosted on highly redundant servers, and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for storage of this kind of data set. (At $50/month I could build my own NAS to hold the data, which would pay for itself after ~2-3 months.) (To be fair, they're offering quite a bit more service than I'm looking for at that price, such as offering public HTTP access to the data.) Does anything exist meeting those requirements, or am I basically hosed?

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster. The disk images are stored on a SAN system. I'm trying to find a solution that uses disk space sparingly, while still allowing for easy growth of individual machines. In theory, I would just create big disks for each machine and use thin provisioning, and each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks -- but with a small LVM volume on them. For example: a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow the data size as needed without downtime)? Possible solutions include:
    - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    - Finding an easy method of reclaiming free space on the partition (re-thinning?)
    - Something else?
    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks, if that is ever needed. This is all just a matter of taste, right?
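
    For the grow-as-needed part of the plan, a sketch of the usual online path (the volume group and LV names are assumptions; ext3 supports growing while mounted):

        pvresize /dev/sdb                      # after the virtual disk itself has been extended in vSphere
        lvextend -L +50G /dev/vg_data/lv_data  # grow the logical volume
        resize2fs /dev/vg_data/lv_data         # grow the mounted ext3 filesystem in place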

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to an external hard drive could then be managed from the appliance. The benefits of such a setup would be:
    1. It is very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance.
    2. Offsite backups could be managed by multiple trusted people without granting access to the server room.
    3. Windows Imaging provides poor man's deduplication, so each backup volume can contain a decent backup history.
    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low-to-medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article
