Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.


  • Connecting to Outlook Anywhere through TMG 2010 with a certificate always fails?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync using TMG 2010, and OWA internally only, but when I try to publish Outlook Anywhere it fails (as can be seen from https://www.testexchangeconnectivity.com).
    IIS 7 settings: I have unchecked "Require SSL" and set the client certificate setting to "Ignore".
    Exchange CAS settings:
    ServerName : ExCAS02-VM
    SSLOffloading : True
    ExternalHostname : activesync.domain.com
    ClientAuthenticationMethod : Basic
    IISAuthenticationMethods : {Basic}
    MetabasePath : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
    Path : C:\Windows\System32\RpcProxy
    Server : ExCAS02-VM
    AdminDisplayName :
    ExchangeVersion : 0.1 (8.0.535.0)
    Name : Rpc (Default Web Site)
    DistinguishedName : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
    Identity : ExCAS02-VM\Rpc (Default Web Site)
    Guid : 59873fe5-3e09-456e-9540-f67abc893f5e
    ObjectCategory : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
    ObjectClass : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
    WhenChanged : 18/02/2011 4:31:54 PM
    WhenCreated : 18/02/2011 4:30:27 PM
    OriginatingServer : ADDC01.domainad.com
    IsValid : True
    Test-OutlookWebServices results:
    1013 Error: When contacting https://activesync.domain.com/Rpc received the error: The remote server returned an error: (500) Internal Server Error.
    1017 Error: [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.
    https://www.testexchangeconnectivity.com testing result:
    Checking the IIS configuration for client certificate authentication. Client certificate authentication was detected.
    Additional details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.
    Environment: Windows Server 2008 (HT-CAS), Exchange Server 2007 SP1, TMG 2010 Standard, Outlook 2007 client SP2.
    Any kind of help would be greatly appreciated. Thanks.
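
    Based on the connectivity-test hint above, here is a minimal sketch of setting the /Rpc virtual directory to ignore client certificates from the command line instead of the IIS GUI (the site name and flag values are assumptions from a default IIS 7 install; adjust to your layout):

        rem sslFlags "Ssl" = require SSL but ignore client certificates; use "" if SSL is not required
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Rpc" /section:system.webServer/security/access /sslFlags:Ssl /commit:apphost
        iisreset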

    Read the article

  • Website does not resolve in browser but traceroute is successful

    - by Colum
    I am trying to figure out an issue. My internet connection is working fine, but this one website is not resolving. It works via a proxy, and traceroute works:
    1 192.168.1.1 (192.168.1.1) 4.205 ms 0.568 ms 0.510 ms
    2 * * *
    3 67.59.255.13 (67.59.255.13) 10.583 ms 7.949 ms 7.557 ms
    4 67.59.255.61 (67.59.255.61) 10.256 ms 9.576 ms 13.083 ms
    5 64.15.8.126 (64.15.8.126) 9.943 ms 11.929 ms 11.452 ms
    6 64.15.0.217 (64.15.0.217) 14.655 ms 14.092 ms 13.771 ms
    7 64.15.0.118 (64.15.0.118) 33.201 ms 34.875 ms 36.544 ms
    8 xe-6-0-3.ar1.ord1.us.nlayer.net (69.31.111.169) 34.027 ms 34.957 ms 34.231 ms
    9 ae1-30g.cr1.ord1.us.nlayer.net (69.31.111.133) 82.683 ms 35.138 ms 37.592 ms
    10 xe-3-0-0.cr2.iad1.us.nlayer.net (69.22.142.26) 41.657 ms 34.063 ms 34.519 ms
    11 ae2-30g.ar2.iad1.us.nlayer.net (69.31.31.186) 35.780 ms 36.361 ms 33.968 ms
    12 as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230) 35.086 ms as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.234) 38.031 ms as33597.xe-3-0-7.ar2.iad1.us.nlayer.net (69.31.30.230) 36.833 ms
    13 cr1.iad2.inforelay.net (66.231.176.246) 32.595 ms cr2.iad1.inforelay.net (66.231.176.10) 31.771 ms cr1.iad2.inforelay.net (66.231.176.246) 32.622 ms
    14 cr1.iad2.inforelay.net (66.231.176.246) 32.956 ms 33.625 ms !X 41.058 ms
    15 * cr1.iad2.inforelay.net (66.231.176.246) 35.312 ms !X *
    16 * cr1.iad2.inforelay.net (66.231.176.246) 32.814 ms !X *
    17 cr1.iad2.inforelay.net (66.231.176.246) 35.459 ms !X * 53.137 ms !X
    Ping returns this:
    Request timeout for icmp_seq 0
    Request timeout for icmp_seq 1
    Request timeout for icmp_seq 2
    Request timeout for icmp_seq 3
    Request timeout for icmp_seq 4
    Request timeout for icmp_seq 5
    Request timeout for icmp_seq 6
    But what I cannot figure out is why my browsers (Firefox, Safari, Opera) cannot resolve the domain. I am on a Wi-Fi connection, on a Mac (10.6.5). What could be the problem?
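
    A quick sketch of separating DNS from connectivity on the Mac in question (the commands are standard on OS X 10.6; example.com is a placeholder for the failing site):

        # ask the system resolver, then an external resolver directly
        dig example.com
        dig example.com @8.8.8.8   # bypasses the router/ISP resolver
        # flush the OS X 10.6 DNS cache, then retry the browser
        sudo dscacheutil -flushcache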

    Read the article

  • PHP Can't connect to MySQL on the production server

    - by Jairo Santos
    I'm having problems with connections to MySQL through a PHP script. The MySQL user is root, and I added GRANTs for root@'%' so I can connect from anywhere. Let's assume my MySQL host is "bigboy.com.br". The funny part is, from my local machine, on my test server, the script can connect to the MySQL server normally. But on the dedicated server where MySQL is running, the same PHP script gives me an "Access denied for 'root'@'bigboy.com.br'" error.
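
    A sketch of what to check on the MySQL side, assuming the grants are the culprit: MySQL matches the most specific user@host row first, so a local connection that reverse-resolves to bigboy.com.br can bypass root@'%' (an anonymous ''@'bigboy.com.br' row is a classic cause). The password below is a placeholder:

        -- list the host patterns MySQL actually has for root (and any anonymous users)
        SELECT user, host FROM mysql.user ORDER BY user, host;
        -- hypothetical explicit grant for the hostname in the error message
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'bigboy.com.br' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;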

    Read the article

  • Save output and exit code of a command to files on Windows

    - by poncha
    I want to run a command and save its output and its exit code in different files. Here's what I am doing: cmd.exe /C command 1> %TEMP%\output.log 2> %TEMP%\error.log && echo %ERRORLEVEL% > %TEMP%\status || echo %ERRORLEVEL% > %TEMP%\status If I don't do output redirection (into %TEMP%\output.log and/or %TEMP%\error.log), then the exit code is saved just fine. However, when I run the line shown above more than once (just go back to the previous line in the command prompt and rerun it), I get 0 in %TEMP%\status regardless of the real exit code. What am I missing? Or maybe there's a better way of doing this?
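
    The symptom fits how cmd expands variables: %ERRORLEVEL% is substituted when the whole line is parsed, before the command runs, so you capture the previous value. A sketch of the usual workaround with delayed expansion ("command" is a placeholder):

        rem /V:ON enables delayed expansion; !ERRORLEVEL! is read after the command finishes
        cmd.exe /V:ON /C "command 1>%TEMP%\output.log 2>%TEMP%\error.log & echo !ERRORLEVEL!>%TEMP%\status"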

    Read the article

  • Snort/Barnyard2 Logging

    - by Eric
    I need some help with my Snort/Barnyard2 setup. My goal is to have Snort send unified2 logs to Barnyard2 and then have Barnyard2 send the data to other locations. Here is my current setup:
    OS: Scientific Linux 6
    Snort version: 2.9.2.3
    Barnyard2 version: 2.1.9
    Snort command: snort -c /etc/snort/snort.conf -i eth2 &
    Barnyard2 command: /usr/local/bin/barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort -f snort.log -w /var/log/snort/barnyard.waldo &
    snort.conf: output unified2: filename snort.log, limit 128
    barnyard2.conf: output alert_syslog: host=127.0.0.1
    output database: log, mysql, user=snort dbname=snort password=password host=localhost
    With this setup, Barnyard2 is showing all of the correct information in the database, and I'm using BASE to view it in the web GUI. I was hoping to be able to send the full packet data to syslog with Barnyard2, but after reading around, it seems that is impossible. So I then started modifying the snort.conf file, adding lines like "output alert_full: alert.full". This definitely gave me a lot more information, but still not the full packet data like I want. So my question is: is there any way I can use Barnyard2 to send the full packet data of alerts to a human-readable file? Since I can't send it directly to syslog, I can create another process to take the data from that file and ship it off to another server. If not, what flags and/or snort.conf configuration would you recommend to get the most data possible but still be able to handle quite a bit of traffic? In the end, these alerts will be shipped to a central server via an SSH tunnel. I'm trying to stay away from databases.
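
    If full packet payloads are the goal, one hedged option is the log_tcpdump output plugin (present in Snort; Barnyard2 ships a plugin of the same name, worth verifying in your build): write the alert packets in pcap format, then render them to readable text. Paths are placeholders:

        # barnyard2.conf (or snort.conf) -- pcap-format log of the packets that triggered alerts
        output log_tcpdump: /var/log/snort/alerts.pcap

        # render to a human-readable file for shipping over the SSH tunnel
        tcpdump -nn -X -r /var/log/snort/alerts.pcap > /var/log/snort/alerts.txt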

    Read the article

  • IPv6: Can't ping anything - "Operation not permitted"

    - by Matthew Iselin
    I've been working on getting IPv6 support into my network, and had everything working properly for a short while. The server is running Ubuntu Server 8.10. Now, however, whenever I attempt to do anything related to IPv6 on the server, I get "Operation not permitted". This is coming from things like wide-dhcpv6-client (when trying to get an IPv6 address from the ISP) and radvd; both log errors of this type to syslog. Even pinging the loopback interface fails:
    xxx@gordon:~$ ping6 ::1
    PING ::1(::1) 56 data bytes
    ping: sendmsg: Operation not permitted
    ping: sendmsg: Operation not permitted
    ping: sendmsg: Operation not permitted
    ^C
    --- ::1 ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2014ms
    xxx@gordon:~$ sudo ping6 ::1
    sudo: unable to resolve host gordon
    PING ::1(::1) 56 data bytes
    ping: sendmsg: Operation not permitted
    ping: sendmsg: Operation not permitted
    ping: sendmsg: Operation not permitted
    ^C
    --- ::1 ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2014ms
    As you can see, I have attempted pinging as root, as most of the material I've found on the internet points to a permission problem. However, that has not helped. Any hints to getting unstuck would be appreciated.
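
    "Operation not permitted" from sendmsg usually points at a packet filter rather than file permissions, which would match the failure surviving sudo. A quick sketch of checking the IPv6 firewall on that box (the /etc/hosts line also clears the unrelated "unable to resolve host gordon" message):

        # look for a DROP/REJECT policy or a rule covering loopback/ICMPv6
        sudo ip6tables -L -n -v
        # for testing only: accept outbound IPv6, then retry ping6 ::1
        sudo ip6tables -P OUTPUT ACCEPT
        # separate nit: silence "unable to resolve host gordon"
        echo "127.0.0.1 gordon" | sudo tee -a /etc/hosts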

    Read the article

  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin
    As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also allow the person logging in to view only the projects that they have GitHub permission on. For instance:
    Three projects on GitHub (A, B, C). Three builds on Jenkins.
    User 1 has Git access to all 3 projects (A, B, C). User 2 has Git access to only 1 project (A).
    When logging into Jenkins: User 1 can see all 3 projects (this works); User 2 should only see project A. The problem is User 2 can also see all 3 projects, when they should only see 1!
    Have I got this correct, and if so, is this a bug? I have the settings set in the Jenkins configuration under Github Authorization Settings. There we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin
    I was trying to get Jenkins to print me the logs from the plugin, but I also failed at viewing these (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging
    It's the same concept as outlined below, but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29
    Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

    Read the article

  • Does SQL Server Maintenance Cleanup consider exact times when deleting old backups?

    - by Heinzi
    Let's say I have a daily maintenance task that backs up all databases and then removes backups that are older than 3 days. Now let's say the first backup, on day 1, starting at 10:00, results in the following files:
    db1.bak 2012-01-01 10:04
    db2.bak 2012-01-01 10:06
    Now let's say that on day 4 the first step of the maintenance task (backing up the DBs) happens to finish at 10:05. Will SQL Server delete db1.bak and keep db2.bak (which would be logical, but might be surprising for the user), or keep both, or remove both?
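
    One way to test the boundary behaviour directly: the Maintenance Cleanup task is generally reported to call the undocumented xp_delete_file with a cutoff datetime and compare it against file timestamps, which include the time of day. A hedged sketch (the folder and cutoff are hypothetical, and since the procedure is undocumented, verify on your build):

        -- args: file type (0 = backup), folder, extension, cutoff datetime, include subfolders
        EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups', N'bak', N'2012-01-04T10:05:00', 0;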

    Read the article

  • Remove Downed Exchange Server from First Administrative Group

    - by Campo
    I had a server die. It is gone forever. It is still listed in the Servers folder of the First Administrative Group in the Exchange System Manager. When I click "All Tasks > Remove Server" I get the following error:
    The Server "SERVERNAME" cannot be removed because:
    - One or more users currently use a mailbox on this server. These users must be moved to a mailbox store on a different server or be mail disabled before uninstalling this server.
    Facility: Exchange System Manager
    ID no: c103f492
    Exchange System Manager
    Any help would be MUCH appreciated. I cannot access the mailbox stores anymore and I do not care about the lost mailboxes. We deleted the inactive and old users as well. So I am stumped on this one. I just need to remove the old machine. THANKS!
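
    A sketch for finding which accounts still reference the dead server, since Exchange 2003-era directories store the home server string on each user object (SERVERNAME is the placeholder from the error above):

        dsquery * domainroot -filter "(msExchHomeServerName=*SERVERNAME*)" -attr name msExchHomeServerName -limit 0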

    Read the article

  • Stop Zabbix notification for nodes under zabbix-proxy when proxy service is down

    - by A_01
    I have a zabbix-proxy and 12 nodes behind that proxy. Right now, whenever the proxy service goes down, it sends out-of-reach mail for all 12 nodes. I want to send mail only for the Zabbix proxy, not for the nodes under that proxy.
    Updated: Now I am trying to have a single trigger in which I check both conditions:
    1. Check that the Zabbix host has not been accessible for the past x minutes.
    2. Check that the host is not giving any data to the proxy (host is down).
    The trigger should start shouting only when the proxy is running and the node is down. I tried the below, but it's not working for me. Can someone please help me out with this?
    ({ip-10-4-1-17.ec2.internal:agent.ping.nodata(2m)}=1) & ({ip-10-4-1-17.ec2.internal:zabbix[proxy,zabbixproxy.dev-test.com,lastaccess].fuzzytime(120)}=1)
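
    For reference, a hedged sketch of the pattern usually suggested instead: keep a separate trigger on the proxy's availability and make each node trigger depend on it, so node alerts are suppressed while the proxy itself is unreachable. Names are taken from the expression above; verify the internal item syntax against your Zabbix version:

        {zabbixproxy.dev-test.com:zabbix[proxy,zabbixproxy.dev-test.com,lastaccess].fuzzytime(120)}=0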

    Read the article

  • How do I obtain a valid DNS resolution given just an IP address?

    - by Dee Newcum
    Is there a publicly-available DNS server somewhere that will respond to requests like: 74_125_225_50.anyip.com and will return 74.125.225.50 for the above request? That is, every single possible IP address can be queried by name instead of number. http://ipq.co/ is close to what I'm looking for, but it requires you to first register an IP address before you can query its DNS name. I want a service that does a straightforward mapping from domain name to IP address. Why do I want to do this? I have a program that we use at work that requires a DNS lookup, but I need to be able to give it bare IP addresses. (Long story... it's a server that I don't control, so I can't work around it using /etc/hosts.)

    Read the article

  • Redhat | Fuse | SSH file system

    - by MMRUSer
    I managed to set up and configure fuse and sshfs on my Red Hat EL 5.4 box. But when I run sshfs, it outputs an error:
    sshfs: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory
    I couldn't figure out the exact reason; any helping hand would be appreciated. Thanks.
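
    A sketch of the usual diagnosis: the library exists but isn't in the dynamic loader's search path (common when fuse is built from source into /usr/local/lib). The paths below are the typical ones and worth verifying:

        # confirm which shared libraries sshfs cannot find
        ldd $(which sshfs) | grep "not found"
        # locate the library
        find / -name "libfuse.so.2*" 2>/dev/null
        # if it lives in /usr/local/lib, tell the loader about it
        echo "/usr/local/lib" >> /etc/ld.so.conf
        ldconfig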

    Read the article

  • Tomcat 6 Windows Server 64 Redirect Connector Fails

    - by Rafe
    So is there some problem with running the Tomcat connectors under a 64-bit Windows OS? Here's my configuration:
    Windows Server 2003 64-bit, Intel Xeon
    Tomcat 6.0.26
    JVM 1.6.0 (64-bit)
    ISAPI Redirect Connector 1.2.30.0 (64-bit)
    Calling the IP address of the site with :8080 brings up the Tomcat page, so I know that's running, and the examples all work, so it's obviously not having a problem with the JVM. Calling the site IP on port 80, however, gives me error 324; looking at the application log on Windows shows "Could not load all ISAPI filters for site/service. Therefore startup aborted". The ISAPI filter page under the web site properties shows the status of this filter as down, with a red arrow. The ISAPI filter name is jakarta, and there is a corresponding virtual directory set up in the root of the site pointing to the same directory as the filter. The jakarta web service extension is also pointing to the required DLL (c:\program files\apache software foundation\jakarta isapi redirector\bin\isapi_redirect.dll). Incidentally, this same problem occurs when trying to use Tomcat 5.5. I've also tried swapping out various redirector versions. It's really odd, because I got it to work once with a version of the redirector that came with Plesk, but I've since uninstalled everything to do with Plesk, and even trying to use the Plesk-compiled DLL doesn't work now. I am pulling my hair out on this, any ideas?
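
    "Could not load all ISAPI filters" on x64 IIS 6 is classically a bitness mismatch between the worker process and the filter DLL. A sketch of checking the relevant metabase flag, assuming the default AdminScripts path:

        cscript %SystemDrive%\inetpub\adminscripts\adsutil.vbs GET W3SVC/AppPools/Enable32BitAppOnWin64
        rem True means IIS runs 32-bit worker processes and needs the 32-bit isapi_redirect.dll;
        rem set it to False to match the 64-bit connector:
        cscript %SystemDrive%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32BitAppOnWin64 False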

    Read the article

  • How does a vsftpd server work, and how do I configure it?

    - by ysap
    I was asked to configure an FTP server based on the vsftpd package. The server is running on a remote machine to which I have superuser-privileged access. Being unfamiliar with the mechanics of FTP servers, I tried to figure out how FTP user accounts are configured. The previous maintainer used a shell script, which works on a list that we maintain to track user accounts and passwords, to configure the FTP accounts. From reading the script, I see that he generates a list of usernames and passwords and actually creates a user account on the Linux machine. This means that for each user we configure in the list, a new user account is added with the adduser command:
    adduser --home /home/ftp --no-create-home $user
    (but without a private /home/username directory; the shared /home/ftp is used instead). Each of these users can log into his account using the ssh command. This seems a little strange to me, as I'd think that the FTP accounts should be decoupled from the Ubuntu user accounts. As another side effect, when a user connects using a web browser, he lands in the /home/ftp directory; however, he can then use the "Up to a higher level directory" link to go up and effectively have access to all of our system. So, the questions are:
    Is this really how the FTP server is supposed to work in terms of configuring FTP accounts?
    If not, how do I configure the vsftpd server so that I have only the superuser Ubuntu account on that machine and all FTP accounts are... just FTP user accounts? Additionally, these FTP accounts should be configurable in terms of how and what they are allowed to access.
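
    As a sketch of the usual decoupling on a stock vsftpd install: keep per-user system accounts, but deny them an interactive shell and jail them in their own home directories. The directives and paths below are the common defaults and should be checked against your vsftpd version ("alice" is hypothetical):

        # /etc/vsftpd.conf -- minimal sketch
        local_enable=YES
        chroot_local_user=YES    # confine each user to their home directory

        # shell-less, FTP-only account
        adduser --home /home/ftp/alice --shell /usr/sbin/nologin alice
        # vsftpd's PAM stack may require the shell to be listed:
        echo /usr/sbin/nologin >> /etc/shells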

    Read the article

  • CORAID using only 1 of the 2 available NICs for AoE traffic

    - by Peter Carrero
    We have 6 CORAID shelves in my workplace. On 2 of them, I see AoE traffic on only 1 of the 2 NICs that are attached to the SAN switch. We have jumbo frames enabled on all devices, and both NICs show up when I issue the aoe-interfaces command. This wouldn't bother me too much if the throughput observed on the "bad" shelves using bonnie++ weren't half the result of the "good" shelves. The "good" shelves are the older SR1521 model and have ReiserFS on their LUNs (not that I think it makes a difference), and the "bad" shelves are the newer SR2421 model and have JFS. Any help as to what is going on and how to rectify this would be greatly appreciated. BTW: even the lower-performing shelves outperform another iSCSI device we have, but that is another story... Thanks.

    Read the article

  • Capacity Planning IIS6

    - by user45457
    Hello, I am planning to host 5000 sites, each with a separate application pool. Based on my estimate, around 1600-2000 worker processes will be running on the server. So my question is: is it possible to host 5000 sites on a server with IIS 6?
    Server configuration: 16 cores @ 2.27 GHz, 8 GB RAM.
    Please tell me if you require any other information. Kartik

    Read the article

  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task. Or say you're just setting up a bunch of different servers, but you expect to use a base configuration on all of them. Would it be better practice to create a base image and clone it, or to automate the installation and configuration?
    I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general.
    My reasons for avoiding cloning are:
    - On a typical install, even if it's fresh, there are already several unique identifiers installed: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated (see the sketch below).
    - The network needs to be reconfigured for each clone. This would need to be done off-line, of course, or the settings will conflict with other machines on the network.
    On the other hand, some of the cloning advantages are quite clear as well:
    - (Initially?) less effort required than automating configuration.
    - Tools exist to quickly address (some of) the above disadvantages.
    (I can see right through my own bias there.)
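
    A sketch of the post-clone cleanup the first reason refers to, on Ubuntu (the device name is hypothetical, and a new filesystem UUID also means updating /etc/fstab and the bootloader config to match):

        # regenerate the SSH host keys the clone inherited
        sudo rm /etc/ssh/ssh_host_*
        sudo dpkg-reconfigure openssh-server
        # give the root filesystem a fresh UUID (ext2/3/4)
        sudo tune2fs -U random /dev/sda1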

    Read the article

  • Changing subnet-mask of class-c network host to 255.255.0.0

    - by Prashant Mandhare
    We have an existing class-C network with the IP address range 11.22.33.44/24 (just for example). My domain controller has been configured within this subnet, so all servers in it have their subnet mask set to 255.255.255.0. Now we have a new subnet, 11.22.88.99/24 (note that only the last two octets have changed), and I want all new hosts in this new subnet to join my existing DC. We have configured the firewall to allow this (so there is no issue with the firewall). But initially I was not able to join hosts in the new subnet to the existing domain. I then suspected the subnet mask used on the domain controller (255.255.255.0), and for testing purposes I changed it to 255.255.0.0. It worked like a charm; I was able to join subnet-2 hosts to the subnet-1 domain. Now I am wondering: is it good practice to change the subnet mask of a class-C network host to 255.255.0.0? Can any issues arise from this? Experts, please provide your opinion.
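
    For context, a sketch of the usual alternative: keep both /24 masks and route between the subnets rather than widening the mask (the gateway address is hypothetical):

        # on a host in 11.22.33.0/24, reach the new subnet via a router
        route add -net 11.22.88.0 netmask 255.255.255.0 gw 11.22.33.1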

    Read the article

  • AdPrep logs show an LDAP error

    - by Omar
    What I am trying to do is transition our domain from Server 2003 Enterprise x32 to Server 2008 R2 Enterprise x64. The 2003 server is a physical machine; the 2008 server is a virtual machine. Here is what I have done thus far:
    - Built a virtual machine running Server 2008 R2 Enterprise x64 and joined it to the domain as a domain member.
    - On the 2003 DC, raised the Domain Functional Level and Forest Functional Level to Windows Server 2003.
    - On the 2003 DC, went into the registry, navigated to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters, and verified that the Schema Version is 30.
    - On the 2003 DC, inserted the Windows Server 2008 Enterprise x32 Edition media to copy over the adprep folder. This version is the only one that seemed to work.
    - On the 2003 DC, opened a command prompt, went to the adprep directory, and ran adprep /forestprep, adprep /domainprep, and adprep /domainprep /gpprep.
    - On the 2008 server, installed the Active Directory Domain Services role from Server Manager.
    - On the 2003 DC, went back into the registry and verified that the Schema Version is now 44.
    When I go to run dcpromo on the 2008 server, I get a message that says: "To install a domain controller into this Active Directory forest, you must first prepare using adprep /forestprep". I went back to the 2003 DC, went through the adprep logs, and came across this:
    Adprep was unable to modify the security descriptor on object CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com.
    [Status/Consequence] ADPREP was unable to merge the existing security descriptor with the new access control entry (ACE).
    [User Action] Check the log file ADPrep.log in the C:\WINDOWS\debug\adprep\logs\20100327143517 directory for more information.
    Adprep encountered an LDAP error. Error code: 0x20. Server extended error code: 0x208d. Server error message: 0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com'
    In fact, I got three of these errors. The LDAP error is consistent across all three, but the objects named in "Adprep was unable to modify the security descriptor on object" differ. They are the following:
    - CN=DomainControllerAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
    - CN=DirectoryEmailReplication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
    - CN=KerberosAuthentication,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com
    The credentials I am using on the 2008 server when running dcpromo are my domain account, which is part of the Domain Admins and Enterprise Admins groups. I've tried various quick fixes that I came across through Google searches, including:
    - Disabling antivirus on the current DCs
    - Pointing DNS on the PDC to itself
    - Changing the "Schema Update Allowed" registry key to 1 and rerunning adprep (when rerunning adprep, it told me that forest-wide information has already been updated)
    - Disabling Windows Firewall on the Server 2008 box
    - On the 2003 DC, going to Domain Controller Security Policy > Local Policies > User Rights Assignment and adding Domain Admins to the "Enable computer and user accounts to be trusted for delegation" policy setting
    Both our PDC and BDC are Global Catalog servers (not sure if this matters or not). I ran the command netdom query fsmo and verified that the FSMO role holder is the current 2003 PDC. I ran dcdiag /v on the 2003 PDC, and the only thing that failed was Services: the Dnscache service is stopped on the PDC. I even went as far as deleting the virtual machine and recreating it from scratch, to no avail... Help :(
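
    Given the NO_OBJECT result whose best match stops at CN=Public Key Services, it may be worth confirming whether the Certificate Templates container exists at all in the Configuration partition. A sketch using dsquery from the 2003 DC (the DN is copied from the log above):

        dsquery * "CN=Public Key Services,CN=Services,CN=Configuration,DC=xeroxtoledo,DC=com" -scope onelevel -attr name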

    Read the article

  • How to fix an external HDD?

    - by Bogdan
    Hi, I have a Verbatim 1080p external HDD (model 47535). When I plug it in, the power and HDD LEDs light up, but it makes an annoying sound every half second or so. Is there any possibility of fixing it, or of retrieving my data? Or is it a mechanical problem? Thanks!

    Read the article

  • SSH - using keys works, but not in a script

    - by Garfonzo
    I'm kind of confused. I have set up public keys between two servers and it works great, sort of. It only works if I ssh manually from a terminal. When I put the ssh command into a Python script, it asks me for a password to log in. The script is using rsync to sync up a directory from one server to the other.
    Manual ssh command that works (no password prompt, automatic login):
    ssh -p 1234 [email protected]
    In the Python script:
    rsync --ignore-existing --delete --stats --progress -rp -e "ssh -p 1234" [email protected]:/directory/ /other/directory/
    What gives? (Obviously, the ssh details are fake.)
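
    A common cause is the script running as a different user (root, cron, a daemon account) whose $HOME holds no key. A minimal sketch that removes that dependency by naming the key explicitly (the key path and host below are hypothetical):

        import subprocess

        # -i pins the identity file; BatchMode makes ssh fail loudly instead of prompting
        subprocess.check_call([
            "rsync", "--ignore-existing", "--delete", "--stats", "--progress", "-rp",
            "-e", "ssh -p 1234 -i /home/garfonzo/.ssh/id_rsa -o BatchMode=yes",
            "user@example.com:/directory/", "/other/directory/",
        ])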

    Read the article

  • Mac OS X behind OpenLDAP and Samba

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate against my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that passes smbldap-populate when combined with apple.ldif, and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X machine. The problem is that when I log in, the home directory is not created or pulled from the server. I get the following in system.log:
    Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null)
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user.
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name.
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800.
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised
    Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority
    All that's great and good, and I authenticate. Then I get:
    CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile.
    Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000
    If you're wondering where /Network/Servers/IP/home/sam comes from: it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value, and that NFSHomeDirectory on the Mac should point to apple-user-homeDirectory. I also set the attribute apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir>, which I found on this forum.
    Any help is appreciated, because I'm banging my head against the wall at this point.
    By the way, I intend to create a blog on my VPS just for this, and create an install script in Python that people can download, so no one has to go through what I've had to go through this week :) After some sleep I am going to try to log in from a Windows machine and report back here. Thanks, Sam
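
    Since err=-43 is the classic Mac "file not found" error, the mount itself is worth isolating from the login machinery. A sketch of testing the share by hand from the Mac (the address and share name are taken from the question):

        # does the home share mount at all, outside of login?
        mkdir -p /tmp/samtest
        mount_smbfs //sam@172.17.148.186/sam /tmp/samtest
        ls /tmp/samtest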

    Read the article

  • Best Firewall product for hosting/housing environment?

    - by Raffael Luthiger
    I am searching for a firewall product (appliance or software) for a hosting/housing environment. The biggest problem is that the rules get very complex as more customers sit behind the firewall. Some have only one server, others have a whole subnet. Some need NAT, some a VPN endpoint. Some customers want to allow only HTTP, others SSH as well. So the device needs to support VLANs, and it should be possible to group the rules per customer. Speed is another important point, as is being able to manage redundant devices easily. I am searching for something that doesn't have all the extras like spam filtering etc. I searched a lot on the net, but the products either had all those extras as well (and with them an overloaded configuration interface) or they missed some of the features I need (e.g. VLANs). The VPN endpoint is not an important criterion; we were thinking about a separate machine for it.

    Read the article

  • FFmpeg installation problem

    - by Thomas
    Hi, I have a problem installing FFmpeg. I am following this guide: https://www.crucialp.com/resources/tutorials/server-administration/how-to-install-ffmpeg-ffmpeg-php-mplayer-mencoder-flv2tool-LAME-MP3-Encoder-libog.php
    Setting up repositories
    core 100% |=========================| 1.1 kB 00:00
    rpmforge 100% |=========================| 1.1 kB 00:00
    Error: Cannot find a valid baseurl for repo: updates
    [root@02e7709 src]# yum install subversion ruby ncurses-devel
    Loading "installonlyn" plugin
    Setting up Install Process
    Setting up repositories
    core 100% |=========================| 1.1 kB 00:00
    rpmforge 100% |=========================| 1.1 kB 00:00
    Error: Cannot find a valid baseurl for repo: updates
    [root@02e7709 src]# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
    -bash: svn: command not found
    So svn is not found, and yum throws the error "Error: Cannot find a valid baseurl for repo: updates". I am installing on Fedora Core 6, 64-bit.
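
    "Cannot find a valid baseurl" on Fedora Core 6 usually means the repo files point at mirrors that no longer carry FC6 (it is long past end of life, and its packages moved to the archive). A hedged sketch of the repo edit; the exact .repo filename and archive path should be verified before use:

        # /etc/yum.repos.d/fedora-core.repo -- hypothetical edit:
        # comment out the dead mirrorlist and set an explicit archive baseurl
        #mirrorlist=...
        baseurl=http://archive.fedoraproject.org/pub/archive/fedora/linux/core/6/$basearch/os/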

    Read the article
