Search Results

Search found 12497 results on 500 pages for 'linked servers'.

  • sg_map & lsscsi showing old storage version

    - by PratapSingh
    I am using SUN storage and recently upgraded/refreshed my iSCSI LUN storage. We replicated the old storage to the new storage and attached it to our servers. On the SUN storage side I can see that the storage is attached to the server, and from the server the session is visible:

        iscsiadm -m session
        tcp: [1] 10.1.1.10:3260,2 iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd

    The new array is a SUN STORAGE 7420, but when I run sg_map or lsscsi they still report the old model:

        lsscsi
        disk SUN Sun Storage 7410 1.0 /dev/sda
        disk SUN Sun Storage 7410 1.0 /dev/sdb
        disk SUN Sun Storage 7410 1.0 /dev/sdc
        disk SUN Sun Storage 7410 1.0 /dev/sdd

    Output of ls -1 /dev/disk/by-path/:

        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-0
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-0-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-18
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-18-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-2
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-2-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-4
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-4-part1
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-6
        ip-10.1.1.10:3260-iscsi-iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd-lun-6-part1

    I have rebooted the server twice, but I still get the same output as above.
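
    If the LUNs were swapped in place, the kernel may still be caching the old SCSI INQUIRY data. A minimal sketch of forcing a re-read, using the target and portal from the output above (run it in a maintenance window, since the session drops briefly):

        # Log the session out and back in so the device identity is read again
        iscsiadm -m node -T iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd -p 10.1.1.10:3260 --logout
        iscsiadm -m node -T iqn.86-03.com.sun:02:afsfsf58-c56a-6ba8-a944-addd258687cd -p 10.1.1.10:3260 --login

        # Alternatively, ask the SCSI layer to rescan each existing device
        for d in /sys/class/scsi_device/*/device/rescan; do echo 1 > "$d"; done

    If lsscsi still shows the 7410 string afterwards, the replicated LUNs may simply be presenting the inquiry data copied from the source array, in which case the label is cosmetic.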

  • AuthBasicProvider: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf:

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf:

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"
                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, the module should try to authenticate users against the second LDAP server if the first server is down or OpenLDAP is not running on it. In practice, that does not happen. When I test by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository, and the error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand?
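
    One hedged alternative worth testing: mod_authnz_ldap accepts several space-separated hosts in a single AuthLDAPURL and fails over between them itself, which sidesteps the provider-alias ordering entirely. A sketch using the two addresses from the question (verify the quoting against the Apache 2.2 docs):

        <AuthnProviderAlias ldap failover>
            # Space-separated hosts: the next host is tried when one is unreachable
            AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    The Location block would then use "AuthBasicProvider failover" instead of listing master and slave. Whether the aliased-provider list ever falls through on connection errors (as opposed to user-not-found) is exactly the behaviour in question, so this only works around it rather than answering it.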

  • Help: Setup Outgoing Mail Server Only for Multiple Domains Using Postfix?

    - by user57697
    I want an outgoing mail server ONLY, for multiple domains. I plan to use Postfix, as that seems to be the easiest to set up for someone very new to Ubuntu/Linux. The setup I have in mind is as follows:

    - I want to use virtual domains with Postfix, i.e. each of my websites must be able to send mail from its own domain: [email protected] is sent from my domain1.com website and [email protected] from my domain2.com website.
    - This is an outgoing mail server only, i.e. I don't want any returned (or other) mail delivered to my Postfix server. Incoming mail is handled by Google Apps/Gmail and is already set up.
    - I have already set my SPF record to designate my MX records and the Postfix server IP as valid mail sources, i.e. "v=spf1 mx include:mydomain.com -all".

    How can I achieve this? I'm frankly a little confused, so some help would be appreciated. I attempted to follow these guides, but the result doesn't seem right (and it isn't clear what all the settings mean):

    - How to configure Postfix virtual domains: http://www.sysdesign.ca/guides/postfix_virtual.html
    - Postfix Installation: *.slicehost.com/2008/7/29/postfix-installation
    - Basic Postfix settings (main.cf): *.slicehost.com/2008/7/31/postfix-basic-settings-in-main-cf

    I can only post one link, but the articles above can be found by replacing * with "articles" in the hyperlink.
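
    For a send-only box, a minimal sketch of the main.cf settings involved might look like the following. The parameter names are standard Postfix; the values are assumptions for this setup (they assume the web apps run on the same box, otherwise listen on an internal interface and widen mynetworks):

        # Accept mail only from this host, never deliver anything locally
        postconf -e 'myorigin = domain1.com'
        postconf -e 'mydestination ='
        postconf -e 'inet_interfaces = loopback-only'
        postconf -e 'mynetworks = 127.0.0.0/8'
        postconf -e 'local_transport = error:local mail delivery is disabled'
        /etc/init.d/postfix restart

    The From: domain on each message is set by the web application itself; Postfix only has to accept the submission and hand it off, so virtual alias maps are not strictly required just for sending.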

  • how to design pound -> varnish -> jboss for ha + loadbalancing

    - by andreash
    Hello,

    I'm planning a new infrastructure for our web application. We have two JBossAS5 servers running in a cluster; session state will be replicated via JBoss Cache. In front of that there should be some cache to speed up delivery of static elements. However, most of the traffic to our app will be via HTTPS.

    So far I had been thinking of two Varnish caches in front of the JBoss servers, each configured to load-balance across the two JBoss instances via round-robin. Since Varnish doesn't handle HTTPS, there would need to be two Pound proxies in front of the Varnishes to deal with the HTTPS, made highly available with Heartbeat/Linux-HA. Traffic to www.example.com would then go through our firewall, from there to the virtual IP of the Pounds, from there to the Varnishes, and from there to the JBoss servers.

    Question 1: Does this make sense, or is it overly complicated and the same goal can be reached with simpler methods?

    Question 2: If my layout is fine, how do I configure the Pound to Varnish step? Should I (a) make the Varnish service highly available through Heartbeat/Linux-HA as well and direct traffic from Pound to the virtual IP of the Varnishes, or rather (b) configure two independent Varnishes and use load balancing in Pound to address the different Varnishes?

    Thanks a lot for your insight!
    Andreas.
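
    For the Varnish to JBoss leg, a round-robin director lets either Varnish use both application servers. A minimal VCL sketch in the Varnish 2.x syntax, with backend addresses invented for illustration:

        backend jboss1 { .host = "10.0.0.11"; .port = "8080"; }
        backend jboss2 { .host = "10.0.0.12"; .port = "8080"; }

        director jbosscluster round-robin {
            { .backend = jboss1; }
            { .backend = jboss2; }
        }

        sub vcl_recv {
            set req.backend = jbosscluster;
        }

    With that in place, option (b), two independent Varnishes addressed by Pound's own load balancing, avoids an extra Heartbeat resource, since losing a Varnish only loses its cache rather than availability.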

  • MSDTC - Communication with the underlying transaction manager has failed (Firewall open, MSDTC network access on)

    - by SocialAddict
    I'm having problems with my ASP.NET Web Forms system. It worked on our test server, but now that we are putting it live, one of the servers is within a DMZ and the SQL Server is outside of it (still on our network, although on a different subnet).

    I have opened up the firewall completely between these two boxes to see if that was the issue, and it still gives the error message "Communication with the underlying transaction manager has failed" whenever we try to use TransactionScope. We can retrieve data fine; it's only transactions that break. We have also used msdtc ping to test the connection, and with the amendments on the firewall it pings successfully, yet the same error occurs.

    How do I resolve this error? Any help would be great as we have a system to go live today. Panic :)

    Edit: I have created a more straightforward test page with a transaction, as below, and this works fine. Could a nested transaction cause this kind of error, and if so, why would it only cause an issue on a live box in a DMZ behind a firewall?

        AuditRepository auditRepository = new AuditRepository();
        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#1", 1);
                auditRepository.Save();
                auditRepository.Add(DateTime.Now, 1, "TEST-TRANSACTIONS#2", 1);
                auditRepository.Save();
                scope.Complete();
            }
        }
        catch (Exception ex)
        {
            Response.Write("Test Error For Transaction: " + ex.Message + "<br />" + ex.StackTrace);
        }
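
    MSDTC between a DMZ host and an internal host needs TCP 135 plus the RPC dynamic port range open in both directions, and each machine must be able to resolve the other by name. A hedged sketch of pinning the dynamic range down so the firewall rule can stay small (the registry path is the documented RPC one; the 5000-5200 range is an arbitrary example), run on both servers and followed by a reboot:

        reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v Ports /t REG_MULTI_SZ /d 5000-5200
        reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v PortsInternetAvailable /t REG_SZ /d Y
        reg add "HKLM\SOFTWARE\Microsoft\Rpc\Internet" /v UseInternetPorts /t REG_SZ /d Y

        rem Then allow TCP 135 and 5000-5200 between the DMZ box and the SQL box, and retest with DTCPing.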

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running in our data center on premises in Houston, and we have developed a new .NET 4-based web application to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). To integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data.

    We are seeing unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given physical proximity to the actual server). The weird part is that we have developers in California who also get millisecond response times. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client and Postman REST Client.

    As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances in the same region and availability zone while testing. If we experienced the high latency consistently from all the EC2 instances I could understand it, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (constant 40 ms), tracert, WinMTR, etc. We have instances in the VPC as well, so I tried both the public and private IP address of the web service host server, and that didn't make a difference either.

    We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server, and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes that are irrelevant to my question.

    What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit), and a third, the only current VM (already VirtualBox), which is the thin-client host.

    My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have compared to using shared folders from the VM host, basically having each whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB disk-image file.

    To add context (but not to solicit additional advice): I want to virtualize these machines because I intend to regularly use VirtualBox's snapshot capabilities for the system disks of the VMs (which will be virtual drives), and I have some physical space/power needs to address (as I mentioned, this is at home).
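
    For the raw-disk approach, VirtualBox can wrap a physical disk in a raw VMDK descriptor that the VM then uses like any other disk. A minimal sketch, with device, file, VM, and controller names assumed for illustration (the user running the VM needs read/write access to the device node):

        # Create a raw-disk descriptor for one of the existing data disks
        VBoxManage internalcommands createrawvmdk -filename ~/vdi/data1-raw.vmdk -rawdisk /dev/sdb

        # Attach it to the file-server VM
        VBoxManage storageattach "fileserver" --storagectl "SATA" --port 1 --device 0 \
            --type hdd --medium ~/vdi/data1-raw.vmdk

    Raw access skips the vboxsf shared-folder layer entirely, which is usually the slower path for sustained file-server I/O; the usual trade-off is that VirtualBox snapshots will not cover those raw disks, only the virtual system drives.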

  • Caused by: java.net.SocketException: Software caused connection abort: socket write error

    - by jrishere
    I am running JSP on Oracle 11g, WebLogic 10.3.4. I have 2 managed servers and an admin server installed. I am encountering an error where, intermittently, the log files of the 2 managed servers and the admin server show java.net.SocketException: Software caused connection abort: socket write error. The application can run for 2 days without showing this error, or it can show up a few times in a day. The server load is similar every day.

    When this error is encountered, the server just stops accepting connections and the application becomes inaccessible. Even if I try to access the application through localhost, I cannot reach the JSP pages (a 503 HTTP status is returned), but I can still access static HTML pages. I also cannot access the WebLogic admin console page. The admin server log shows that the managed servers are disconnected from the admin server, and vice versa. Magically, the application is able to recover on its own, or I need to restart the server; restarting the application's service alone does not help. The FTP connections that the application is connected to are closed as well.

    I am able to ping the server and telnet to the server port. The event log doesn't seem to contain any relevant information. We ran Wireshark to look at the packet traffic, and it seems that the application port is sending a RST, ACK packet to the load balancer.

    Any kind of help will be greatly appreciated. Should you need more info, feel free to ask me. Thanks in advance.

  • "Path Not Found" when attempting to write to a sub folder within a mapped drive

    - by Adam
    We have an interesting issue with one of our server shares, or possibly with our Windows 7 desktops. When our users try to save files in a subfolder of a mapped drive on our DC, either via copy/paste or through an application, they receive an error saying "Path not found". They can, however, browse the folder and open files from it, which is where the "Path not found" error doesn't seem to stack up, in my opinion. Users can save files fine in the root folder of the mapped drive; it appears to affect only subfolders.

    It seems to be random which users and machines are affected: an affected user can log on to a different machine and save in subfolders fine, on the same mapped drive. Event Viewer hasn't been much help either. Currently, the only solution we have found is to re-image the affected machines, which resolves the issue. Our servers are Server 2008 R2 with Windows 7 Pro desktops.

    Any help/pointers/suggestions would be greatly appreciated.

  • Connection timeout when trying to SSH

    - by dan
    The other day I tried to connect to my remote server via SSH as I always have, but now when I try to connect it just times out after about 60 seconds. I ran:

        service ssh start

    which tells me "Job is already running: ssh". I then ran:

        $ netstat -tnlp
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
        tcp        0      0 0.0.0.0:993      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 0.0.0.0:995      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 127.0.0.1:3306   0.0.0.0:*        LISTEN  2030/mysqld
        tcp        0      0 0.0.0.0:110      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 0.0.0.0:143      0.0.0.0:*        LISTEN  1972/dovecot
        tcp        0      0 0.0.0.0:10000    0.0.0.0:*        LISTEN  2157/perl
        tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  3028/sshd
        tcp        0      0 0.0.0.0:25       0.0.0.0:*        LISTEN  2273/master
        tcp6       0      0 :::80            :::*             LISTEN  2618/apache2
        tcp6       0      0 :::21            :::*             LISTEN  2291/proftpd: (acce
        tcp6       0      0 :::22            :::*             LISTEN  3028/sshd

    I am able to access subdomains on my site and FTP, but I can't SSH in or even ping the server remotely. Any thoughts?
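
    Since sshd is listening on port 22 but connections time out (rather than being refused) and ICMP is also dropped, the usual suspects are a host firewall or something upstream. A hedged sketch of checks, from the server console and from the client:

        # On the server: is a local firewall dropping the traffic?
        iptables -L -n -v | grep -E 'dpt:22|DROP|REJECT'

        # On the server: do the SYNs even arrive while you try to connect from outside?
        tcpdump -ni eth0 'tcp port 22'

        # From the client: verbose output shows where the handshake stalls
        ssh -vvv user@server

    If no SYN packets show up in tcpdump, the block is upstream of the server (provider or network firewall); if they arrive but nothing answers, look at the local firewall rules.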

  • Name resolution not working with ipv6 on centos

    - by jolivier
    I just installed CentOS 6.3 on a server that will be placed in a data center, but I cannot get name resolution / curl to work. I know this is because it is trying to use IPv6, since ping google.com works and curl -4 google.com works, but curl google.com does not. I removed the IPv6 address from the interface and it does not change anything. This is very problematic, since most system tools like yum currently fail at name resolution. Browsers like Firefox work, probably because they use a different resolver than the one curl uses.

    I managed to fix this on workstations by completely disabling IPv6, following tutorials like this one, or by hardcoding name resolution in /etc/hosts. But since I am configuring a server that will later be installed in a remote data center, I would rather not mess things up; I want to understand what is going on and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed.

    The current network is a small enterprise network, with a DNS server (let's call it A) configured once, a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS server. But this is also true for my workstation, on which curl is working (and yes, they both use the same A DNS server). Indeed, this faulty server and my workstation both have multiple nameservers in /etc/resolv.conf, and the second one works fine for both of them, so if I remove A from my resolv.conf everything works fine!

    Regards, Olivier
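
    If the difference really is IPv6 (AAAA) lookups upsetting the broken DNS server, two resolver-side knobs are worth testing. This is only a hedged sketch; neither is specific to this network, so treat them as assumptions to verify:

        # Prefer IPv4 answers from getaddrinfo() without disabling IPv6 entirely
        echo 'precedence ::ffff:0:0/96 100' >> /etc/gai.conf

        # Or use the glibc resolver option for servers that mishandle parallel A/AAAA queries
        echo 'options single-request' >> /etc/resolv.conf

    Given that removing server A from resolv.conf fixes it, the cleanest long-term fix is still to repair or retire A; the options above only change how the client copes with it.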

  • Custom flash uploader breaks only on Media Temple

    - by LaserWolf
    I've built a Flash uploader that uploads files of up to 100 MB using a PHP backend. It works wonderfully on our dev server, on a HostGator VPS, and on one of our clients' servers running Debian. It will not work on our Media Temple (dv) 3.5, and I don't know why. The upload starts but chokes after a few seconds with this Flash error message:

        ioerror: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2038: File I/O Error. URL: http://..._upload.php"]

    The problem seems to be specific to the asynchronous nature of the Flash uploader, because a straight PHP upload works fine, and the php.ini settings allow an upload of this size.

    I've thoroughly googled flash, 2038, I/O error, etc., but have yet to find anything that helped. Here's the weird part, though: we work in Seattle, and it won't work from the office or from home. But while I was on the phone with MT's support, they were able to upload a file through our Flash uploader just fine. I'm not sure where their office was located, but I think it was Atlanta. So the problem also seems specific to physical location. Has anyone run into this sort of problem before?

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. I have the following server:

        Intel(R) Xeon(TM) MP CPU 3.16GHz

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10x148GB SCSI U320 15k in RAID5:

        /dev/sdb1 807G 674G 93G 88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig
        bond0     Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
                  inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
                  inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                  UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                  RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                  TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:10000
                  RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server, on Debian 6:

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    What happens is that once per day or two the load average goes up; it can reach around 40. If I do an nfs-kernel-server restart, everything is OK, but on the next day (or a little bit later) the load goes up again. The servers are connected to a D-Link DGS-1016D switch with 24 gigabit ports. I have tried everything to find out what the problem is and why this is happening, but I still cannot resolve the issue. Any ideas on what is happening here?
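
    When the load spikes, it helps to know whether the nfsd threads are saturated or stuck waiting on the array. A hedged sketch of things to capture during an incident (the thread-count setting is in the Debian default location):

        # Per-operation NFS server counters and current thread usage
        nfsstat -s
        cat /proc/net/rpc/nfsd          # the "th" line shows how busy the nfsd threads are

        # Are we waiting on the RAID5 array rather than on CPU?
        iostat -x 5 3
        top -b -n 1 | head -20

        # If all threads are constantly busy, raise RPCNFSDCOUNT in /etc/default/nfs-kernel-server
        grep RPCNFSDCOUNT /etc/default/nfs-kernel-server

    High load with idle CPU usually means processes stuck in I/O wait, which would point at the RAID5 array rather than at NFS itself.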

  • Webserver optimization

    - by f-aminov
    Hi guys! I have a website hosted on a VPS (512 MB minimum guaranteed memory, 510 MHz processor, Debian 5.0 Lenny, Apache 2.2.9 with nginx 0.7.65 as a frontend to serve static content, MySQL 5.1.44, PHP 5.3.2 with APC caching). I'm a web developer, so I'm not very good at optimizing servers, but I've managed to install and set up all the necessary components (LAMP, nginx, etc.).

    After that I decided to stress test my website (which uses Drupal 6.16 with caching and all possible optimization enabled) using a utility called "Webserver Stress Tool 7", and the results don't look good. (I would include a graph, but as a new user I'm not allowed to post images.) The response time increases very quickly with the number of simultaneous users: with 10 simultaneous users it is about 1,000 ms, and with 100 simultaneous users it is about 15,000 ms (15 s!).

    The question is: do you think this is normal behaviour for such a server, or is something wrong with the settings and optimization? If you think something is wrong, what in particular could be wrong? Any other suggestions for how to speed this up a little?
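
    To separate "the box is too small" from "the stack is misconfigured", a quick hedged sketch of what to watch while re-running the load test (the URL is a placeholder):

        # Generate comparable load with ApacheBench while watching memory on the VPS
        ab -n 500 -c 50 http://www.example.com/

        # During the run: if free memory collapses and swap grows, Apache's MaxClients
        # is likely higher than 512 MB can sustain with PHP loaded into each child
        watch -n 2 free -m

    On a 512 MB VPS, lowering MaxClients and keeping KeepAlive timeouts short is usually the first lever; anonymous traffic can also be absorbed by letting nginx or Drupal's page cache answer without hitting PHP at all.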

  • Has anyone used tools like (Chef, Salt, Puppet, CfEngine) to configure a 2008 Win Server with Sql?

    - by Development 4.0
    I have been looking into tools to automate the creation of servers, for two different reasons: production, and development machines. I love the idea of the immutable server. I have seen these tools demoed and used successfully on *nix boxes running Rails, LAMP, etc. Has anyone found a good way to do this on the Microsoft stack?

    I would like to get in on the fun and create scripts that will install Windows, patch it according to a specification, deploy SQL Server, run create scripts to build out a database, and, just for fun, deploy SharePoint, configure it, and then deploy a SharePoint solution to it.

    I can get part of the way: install Windows manually, install SQL Server manually, use PowerShell to do all the configuration and setup, install SharePoint and configure part of it, then PowerShell for the rest of the configuration and for deploying a solution. I would love to have the ability to run one script, though, or at least one unified process. I can, and mostly have, used VM template images and then instantiated them, but the creation of the template is usually a manual step.
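
    As one building block for the "single script" goal, SQL Server's setup supports fully unattended installs from the command line, which a tool like Chef or Puppet can then wrap. A hedged sketch (the feature list, instance name, and accounts are placeholders; check the parameters against your SQL Server version's documentation):

        setup.exe /Q /ACTION=Install /FEATURES=SQLEngine ^
            /INSTANCENAME=MSSQLSERVER ^
            /SQLSVCACCOUNT="DOMAIN\sqlservice" /SQLSVCPASSWORD="***" ^
            /SQLSYSADMINACCOUNTS="DOMAIN\dbadmins" ^
            /IACCEPTSQLSERVERLICENSETERMS

    The same pattern (msiexec or a vendor setup.exe with silent switches, driven from PowerShell) covers most of the Windows stack, and Windows itself can be laid down unattended with an autounattend.xml answer file, which takes care of the template-creation step as well.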

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea of whether anything like the following scenario is possible:

    - Nginx handles a request and routes it to some kind of authentication application, where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401.
    - Nginx then takes the request and routes it through an authorization application which determines, based upon identity, the HTTP verb (put, delete, get, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat, for example by appending another header. Failing authorization returns 403.
    - (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.)
    - Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request.

    Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box).

    I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this "in process", but I was trying to make things a little more generic and self-contained, where different applications could hook the HTTP pipeline of Nginx and then participate in, short-circuit, or even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
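
    The closest native mechanism is the auth_request module (a separate add-on module for nginx of this era, bundled into later releases), which issues a subrequest to an auth backend and only continues if it returns 2xx. A minimal sketch under that assumption, with upstream names invented for illustration; it covers the allow/deny part, though not arbitrary request rewriting by each hop:

        location / {
            auth_request /_auth;            # 401/403 from the subrequest stops the request here
            proxy_pass http://app_backend;  # otherwise continue to the real application
        }

        location = /_auth {
            internal;
            proxy_pass http://auth_backend;
            proxy_pass_request_body off;    # the auth service only needs headers/cookies
            proxy_set_header Content-Length "";
        }

    Response headers from the auth subrequest can be copied onto the main request with auth_request_set plus proxy_set_header, which gives a limited version of the "append identity headers" step.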

  • Windows XP laptop doesn't appear in WSUS All computers list

    - by George
    I have one laptop that doesn't appear in the WSUS "All Computers" list. We have about 23-25 PCs/laptops/servers on the network, and all but this one are listed in WSUS. This is what I have done so far:

    1. Changed the update settings on the local PC: on the Windows XP client, start a new Microsoft Management Console (Start, Run, MMC), use Ctrl+M to add a new snap-in, add the Group Policy Object Editor for the Local Computer, then click Close and OK. Expand Local Computer Policy and, under Computer Configuration, go to Administrative Templates, Windows Components, Windows Update. In the right-hand pane, double-click "Specify intranet Microsoft update service location" and configure the settings to reflect my WSUS server. Click OK, close the MMC without saving it, then run wuauclt.exe /detectnow.

    2. Edited the registry settings that are pushed to the PCs using GPO:

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"=http://wsusserver
        "TargetGroupEnabled"=dword:00000001
        "TargetGroup"="WINXP"
        "WUStatusServer"=http://wsuswerver

    3. Ran wuauclt /resetauthorization /detectnow.

    4. Synchronised and refreshed the group.

    I am running out of ideas here. The client is running Windows XP Pro; WSUS is version 3.0 and runs on Windows 2008 R2 64-bit. Please help! Thanks!

    EDIT 13.IX.2012 @ 15.40: I should also mention that we have a Windows Update GPO for the workstations group, and that laptop is a part of that group.
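
    One cause that matches these symptoms on XP clients is a duplicate WSUS client identity (common when machines were cloned from the same image): the laptop checks in, but WSUS treats it as an already-listed computer. A hedged sketch of resetting the client ID on the laptop, using the standard WSUS value names:

        net stop wuauserv
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientId /f
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientIdValidation /f
        net start wuauserv
        wuauclt /resetauthorization /detectnow

    %windir%\WindowsUpdate.log on the laptop is also worth reading after the /detectnow; it states explicitly whether the client reached the WUServer URL and what it reported back.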

  • Setup Entourage for Exchange via HTTP communication

    - by Johandk
    Our ISP set up a hosted Exchange server for all our mail. I've set up all our Outlook users with no problems. We have two people using Mac OS X Leopard and Entourage. Entourage has the option of adding an Exchange account, but I have no idea how to tell it to connect to Exchange via HTTP. Here's an excerpt from the client setup docs the hosting company sent me for Outlook:

    1. Go to Control Panel.
    2. Select 'Mail'.
    3. Select 'Email accounts'. Under the E-mail tab select 'New'.
    4. Select 'Manually configure server settings......' and click Next.
    5. Select 'Microsoft Exchange' and click Next.
    6. Complete the details with the Microsoft Exchange Server as: [server address]. Do not select 'Check Name'. Instead select 'More Settings'.
    7. Go to the Connection tab, and select the bottom option, 'Connect to Microsoft Exchange using HTTP'. Then select the 'Exchange Proxy Settings' button.
    8. Enter the proxy server for Exchange. Check 'Only connect to proxy servers that have this principal name in their certificate' and enter msstd:[servername]. For Proxy Authentication select Basic Authentication.
    9. Select OK, and again, so that you return to the main screen. Now select 'Check Name' and enter the username and password. The username should now be the full name and underlined. If so, select Next and then Finish.
    10. Next time you open Outlook, enter the username and password.

    Any help GREATLY appreciated.

  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4 GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

        NAME          STATE   READ WRITE CKSUM
        tank          ONLINE     0     0     0
          raidz1-0    ONLINE     0     0     0
            c0t2d0    ONLINE     0     0     0
            c0t3d0    ONLINE     0     0     0
            c0t4d0    ONLINE     0     0     0
            c0t5d0    ONLINE     0     0     0
            c0t6d0    ONLINE     0     0     0
            c0t7d0    ONLINE     0     0     0
            c0t8d0    ONLINE     0     0     0
          raidz1-1    ONLINE     0     0     0
            c0t10d0   ONLINE     0     0     0
            c0t11d0   ONLINE     0     0     0
            c0t12d0   ONLINE     0     0     0
            c0t13d0   ONLINE     0     0     0
            c0t14d0   ONLINE     0     0     0
        spares
          c0t9d0      AVAIL
          c0t1d0      AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50 MB/s writes and ~70-90 MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize.

    Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not, how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
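
    If you do add the SSD, attaching it and watching whether it helps is straightforward; a hedged sketch with the device name assumed:

        # Add the SSD as an L2ARC (read cache) device to the existing pool
        zpool add tank cache c0t15d0

        # Watch how much of the cache device actually fills and gets hit over time
        zpool iostat -v tank 10

    A single cache device is safe in the sense that losing it only costs cached reads, not data. Note, though, that with NFS serving ESXi the bottleneck is often synchronous writes, which an L2ARC does not help; that is what a separate log (slog) device addresses.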

  • Does Windows 7 Authenticate Cached Credentials on Startup

    - by Farray
    Problem: I have a Windows domain user account that gets automatically locked out semi-regularly.

    Troubleshooting thus far: The only rule on the domain that should automatically lock an account is too many failed login attempts, and I do not think anyone nefarious is trying to access my account. The problem started occurring after changing my password, so I think it's a stored-credential problem. Further to that, in the Event Viewer's System log I found warnings from Security-Kerberos that say:

        The password stored in Credential Manager is invalid. This might be caused by the user changing the password from this computer or a different computer. To resolve this error, open Credential Manager in Control Panel, and reenter the password for the credential mydomain\myuser.

    I checked Credential Manager, and all it has are a few TERMSRV/servername credentials stored by Remote Desktop. I know which stored credential was incorrect, but it was stored for Remote Desktop access to a specific machine and was not being used (at least not by me) at the time of the warnings. The Security-Kerberos warning appears when the system is starting up (after a Windows Update reboot) and also appeared earlier this morning when nobody was logged into the machine.

    Clarification after SnOrfus's answer: There was 1 set of invalid credentials, stored for a terminal server. The rest of the credentials are known to be valid (used often and recently without issues). I logged on to the domain this morning without issue, then ran Windows Update, which rebooted the computer. After the restart I couldn't log in (because the account was locked out). After unlocking and logging on to the domain, I checked Event Viewer, which showed a problem with credentials after restarting. Since the only stored credentials (according to Credential Manager) are for terminal servers, why would there be a credential problem on restart when Remote Desktop was not being used?

    Question: Does anyone know if Windows 7 "randomly" checks the authentication of cached credentials?
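
    Whatever the trigger, clearing the one known-bad entry removes it from suspicion. A hedged sketch using the built-in cmdkey tool (the target name is a placeholder for the server shown in Credential Manager):

        rem List everything the Credential Manager holds for this user
        cmdkey /list

        rem Delete the stale Remote Desktop credential; it will be re-saved on next use
        cmdkey /delete:TERMSRV/oldserver

    The domain controller's Security log (or Microsoft's Account Lockout Status tool) also shows which computer the bad password attempts are coming from, which would confirm whether it really is this workstation.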

  • Mysql cluster strange behaviour

    - by Champion
    Hi guys, I have 2 MySQL clusters on two different servers, with a management node on each of them. The cluster went down somehow, and I ran the following commands to start it again.

    Start the management node on srv1:

        srv1: mysqlc/bin/ndb_mgmd --initial -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf

    Start the management node on srv2:

        srv2: mysqlc/bin/ndb_mgmd --initial -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf

    Start the ndbd node on srv1:

        srv1: mysqlc/bin/ndbd --initial -c localhost:1186

    Start the ndbd node on srv2:

        srv2: mysqlc/bin/ndbd --initial -c localhost:1186

    Start the mysqld server on srv1:

        srv1: mysqlc/bin/mysqld --defaults-file=my_cluster/conf/my.cnf --user=root &

    And here is the problem: the MySQL server does not load the data. Only the database names are present. All the tables with ENGINE=ndbcluster are not loaded, while tables with ENGINE=myisam are. Backup scripts helped me load the data back, but this way I can't use the cluster setup. A similar issue appeared when I started srv2. How can I resolve this issue?
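
    One detail worth flagging: --initial requests an initial start, and for ndbd that erases the data node's on-disk files, so starting every node with --initial after a crash will itself leave the NDB tables empty. A hedged sketch of a normal restart with the same paths as above:

        # Management nodes: no --initial, so the cached configuration is reused
        mysqlc/bin/ndb_mgmd -f my_cluster/conf/config.ini --configdir=/home/mysql_cluster/my_cluster/conf

        # Data nodes: plain start so existing data on disk is recovered
        mysqlc/bin/ndbd -c localhost:1186

        # Check that the data nodes reach the "started" state before starting mysqld
        mysqlc/bin/ndb_mgm -e show

    --initial is meant only for the very first start, after certain configuration changes, or when restoring from a backup.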

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (a basic LAMP stack with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, and there is always something more important on my to-do list). My server does not contain any secret data and no lives depend on it; basically what I want is to make sure it does not become part of a botnet. That is "good enough" security in my case.

    Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems such as DoS attacks, or problems that only exist under some arcane settings. I'm in search of a happy medium: for example, a list of known important problems in the default installation of CentOS 5.5, and/or a list of security problems that have actually been exploited, not the typical endless list of buffer overflows that are "maybe" a problem in some special case.

    The problem I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful, i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me.

    Thanks for all suggestions!
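
    If the goal is "don't become a botnet node with minimal effort", unattended updates plus locking down SSH get most of the way there. A hedged sketch for CentOS 5 (package and service names as shipped in that release; adjust to taste):

        # Apply package updates automatically every night
        yum install -y yum-cron
        chkconfig yum-cron on
        service yum-cron start

        # Cheap win against the most common entry point: brute-forced SSH logins
        sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
        service sshd restart

    Beyond that, keeping Plesk itself and the hosted web applications patched probably matters more than kernel-level hardening, since panel and web-app exploits are the usual way boxes like this end up in botnets.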

  • Explorer.exe not starting after login on Windows Server 2003 (Terminal Services and console)

    - by Pepperoni Icecream
    When users log in to a Windows Server 2003 R2 machine running Terminal Services, they get a blank desktop. Upon inspection, explorer.exe is not running. When I log in as administrator, using either RDP or the console, I have the same issue. I can pull up Task Manager and start explorer.exe manually.

    I have another terminal server set up exactly the same way (same apps, settings, GPOs, etc.); the only difference is that we deployed the Symantec Endpoint Protection 11.0.5 client on Friday. For some reason the working terminal server is still on 11.0.4, but the suspect server received the 11.0.5 client upgrade. I checked the Event Viewer for any relevant explorer.exe entries, to no avail.

    It seems that if SEP were preventing explorer.exe from starting at login, it would do the same when the domain admin starts explorer.exe from Task Manager. I disabled the SEP client and services on the server, issued smc -stop, and tried logging in again; still no explorer.exe. So I'm not sure the client upgrade is relevant, but it is worth mentioning since that was the last system change.

    The two servers are members of an NLB group. I took the bad terminal server out of the group until the issue is resolved (actually stopped the host using NLB Manager). Any help is appreciated.
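
    One quick thing to rule out, since a blank desktop at logon is classically caused by altered Winlogon values (occasionally by security products or malware): check that Shell and Userinit still hold their defaults. A hedged sketch:

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Shell
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v Userinit

        rem Expected values: Shell = explorer.exe, Userinit = C:\WINDOWS\system32\userinit.exe,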

  • Postfix relay gives error 450 while it should be 550

    - by dieter-be
    Hi, we use Postfix to do relaying. We get several messages like the following in /var/log/mail (slightly edited):

        Apr 13 13:30:29 linserver postfix/smtpd[1064]: NOQUEUE: reject: RCPT from unknown[$ip]: 450 4.1.1 <[email protected]>: Recipient address rejected: undeliverable address: host domain.be [$ip] said: 550 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table (in reply to RCPT TO command); from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<BLUESTREAK.domain.local>

    When the master mail server gives a 550, claiming that the user does not exist, I want the relay to also give a 550 back. What happens now is that it seems to return a 450, causing clients to keep messages queued, keep retrying, and only notify users after a certain period has passed.

    According to what I could find, the soft_bounce option could cause this, but we have not enabled it (and it is off by default according to the Postfix docs). It might also have something to do with the *_reject_code postconf values, especially since the log message complains about the unknown IP. But as you can see in the postconf output below, smtpd_sender_restrictions and smtpd_client_restrictions are empty. So even if restrictions were applied there, 550 is the "worst" error in play, and that's what I expect to be returned to the client.

    postconf output: http://sprunge.us/JYgB

    Thanks, Dieter
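
    The "450 ... undeliverable address: host ... said: 550" pattern is produced by recipient address verification: the probe to the downstream host fails, and the reply code for that case is controlled by its own parameter, which defaults to 450. A hedged sketch of the change (verify the parameter name against postconf(5) for your Postfix version):

        # Return the failed verification as a hard reject instead of a temporary one
        postconf -e 'unverified_recipient_reject_code = 550'
        postfix reload

    The default is deliberately soft so that a transient failure of the downstream server doesn't permanently bounce mail; with 550, a probe that fails for any reason becomes a hard reject.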

  • Forward Apache to Django dev server

    - by Alex Jillard
    I'm trying to get Apache to forward all requests on port 80 to 127.0.0.1:8000, which is where the Django dev server runs. I think I have it forwarding properly, but there must be an issue with 127.0.0.1:8000 not being served by Apache itself?

    I'm running the Django dev server in an Ubuntu VMware instance, and I'd like other people in the office to see the apps in development without having to promote anything to our actual dev/staging servers. The virtual machine picks up an IP of its own, and when I point a browser at that address with the default Apache config, I get the default Apache page. I've since changed the httpd.conf file to the following to try to get it to forward requests to the Django dev server:

        ServerName localhost

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

        <VirtualHost *>
            ServerName localhost
            ServerAdmin [email protected]
            ProxyRequests off
            ProxyPass * http://127.0.0.1:8000
        </VirtualHost>

    All I get are 404s with this, and in error.log I get the following (192.168.1.101 is the IP of my computer, 192.168.1.142 is the IP of the virtual machine):

        [Mon Mar 08 08:42:30 2010] [error] [client 192.168.1.101] File does not exist: /htdocs
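
    Two hedged things to try. First, ProxyPass takes a URL path prefix rather than a wildcard, so a sketch of the vhost with mod_proxy and mod_proxy_http enabled might look like this (the 404 plus "File does not exist: /htdocs" suggests the proxy directives are not taking effect at all):

        <VirtualHost *:80>
            ServerName localhost
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:8000/
            ProxyPassReverse / http://127.0.0.1:8000/
        </VirtualHost>

    Second, if Apache is only there to make the dev server reachable from other machines, it may be simpler to skip it entirely: bind the dev server to all interfaces with "python manage.py runserver 0.0.0.0:8000" and point colleagues at the VM's IP on port 8000.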
