Search Results

The search found 13,224 results on 529 pages for 'redirection errors'.


  • DNS entries for OCS 2007 R2 basic deploy

    - by Anero
    I'm doing a test deploy in a lab with 3 VMs:
      TEST-DC: DC / DHCP / DNS / Root CA (joined to the TEST.AD domain)
      TEST-CS: OCS Front End (joined to the TEST.AD domain - IP: 10.0.128.1)
      TEST-EDGES: OCS Edge Server (joined to workgroup EDGE-WKG - internal IP: 10.0.128.3; external IPs: 192.168.129.12 - Access Edge Server, 192.168.129.13 - Web Conferencing, 192.168.129.14 - A/V)
    I can log in with the Communicator client from computers in the domain (using [email protected]), and even Automatic Sign-In works as expected. Nevertheless, I cannot log in using [email protected], either from machines in the domain or from outside it. I'm pretty sure it is a DNS-related issue, so I'm including a list of the entries below.
    DNS entries on TEST-DC (forward lookup zones):
      TEST.AD
        sip.test.ad (Host A). IP Address: 10.0.128.1
        sipinternal.test.ad (Host A). IP Address: 10.0.128.1
        sipexternal.test.ad (Host A). IP Address: 10.0.128.3
        _sipinternaltls._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipinternal.test.ad
        _sipinternal._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipinternal.test.ad
        _sip._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipexternal.test.ad
        _sipfederationtls._tcp.test.ad (Service Location SRV). Port: 5061. Host: sipexternal.test.ad
        _sip._tls.test.ad (Service Location SRV). Port: 443. Host: sipexternal.test.ad
      TEST.COM
        sip.test.com (Host A). IP Address: 10.0.128.1
        sipinternal.test.com (Host A). IP Address: 10.0.128.1
        sipexternal.test.com (Host A). IP Address: 10.0.128.3
        _sipinternaltls._tcp.test.com (Service Location SRV). Port: 5061. Host: sipinternal.test.com
        _sipinternal._tcp.test.com (Service Location SRV). Port: 5061. Host: sipinternal.test.com
        _sip._tcp.test.com (Service Location SRV). Port: 5061. Host: sipexternal.test.com
        _sip._tls.test.ad (Service Location SRV). Port: 443. Host: sipexternal.test.ad
    Validation errors are reported for the OCS Front End and the Edge Server. I ran the OCS 2007 Automatic Sign-In Troubleshooting and all DNS entries for both TEST.AD and TEST.COM are reported to be OK. What am I missing?
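    A quick way to see what clients actually resolve is to query the records directly; a sketch (run from a machine pointed at TEST-DC for DNS). Note that external automatic sign-in for test.com users looks up _sip._tls.test.com, which is worth confirming exists in the TEST.COM zone:
      # SRV records used for automatic sign-in
      nslookup -type=SRV _sipinternaltls._tcp.test.com
      nslookup -type=SRV _sip._tls.test.com
      # host records the SRV targets point at
      nslookup -type=A sipexternal.test.com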


  • Setting up Apache and PHP on Mac OS X Snow Leopard

    - by Martin Bean
    I've recently purchased an Apple iMac. Unfortunately, enabling Apache and PHP has thrown up some problems. I enabled Mac's built-in Web Sharing through System Preferences, at which point I got an output and could add HTML files to my user directory. However, PHP files were being displayed rather than interpreted. I then discovered this is because PHP isn't enabled by default in Mac's Apache set-up. After a quick Google search, I came across this page: http://developer.apple.com/mac/articles/internet/phpeasyway.html
    I proceeded to the section "Enabling PHP in Apache", copying and pasting the following code snippet into a new Terminal window and hitting Return:
      set admin_email to (do shell script "defaults read AddressBookMe ExistingEmailAddress")
      user_www=$HOME/Sites
      filename=php-test
      user_index=${user_www}/${filename}.php
      user_db=${user_www}/${filename}-db.sqlite3
      # NOTE: Having a writeable database in your home directory can be a security risk!
      conf=`apachectl -V | awk -F= '/SERVER_CONFIG/ {print \$2}'| sed 's/"//g'`
      conf_old=$conf.$$
      conf_new=/tmp/php_conf.new
      touch $user_db
      chmod a+r $user_index
      chmod a+w $user_db
      chmod a+w $user_www
      echo "Enabling PHP in $conf ..."
      sed '/#LoadModule php5_module/s/#LoadModule/LoadModule/' $conf | sed "s^[email protected]^<b>\$admin_email</b>^" > $conf_new
      echo "(Re)Starting Apache ..."
      osascript <<EOF
      do shell script "/bin/mv -f $conf $conf_old; /bin/mv $conf_new $conf; /usr/sbin/apachectl restart" with administrator privileges
      EOF
    Unfortunately, this has completely broken Apache, and now nothing is being served; instead I'm receiving "Failed to open page" errors because it cannot connect to the server, despite Web Sharing still being active in System Preferences. So my question is this: how can I undo the changes made by copying and pasting the above code snippet? Admittedly, I don't understand what it did; I just thought it looked like a Terminal command and tried it. I have no experience in setting up Apache on Mac OS X (I've only installed XAMPP and WampServer on Windows). So any pointers on reversing the aforementioned, and then successfully enabling PHP, would be great.
    EDIT: I've discovered, via Console, that the following error message is being recorded when trying to browse to 127.0.0.1...
      (org.apache.httpd) Throttling respawn: Will start in 10 seconds
      no listening sockets available, shutting down
      Unable to open logs
      (org.apache.httpd[13453]) Exited with exit code: 1
    Does this point any more to the issue?
    EDIT #2: I'm now getting this in Console...
      15/02/2010 21:24:14 osascript[3597] Error loading /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: dlopen(/Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types, 262): no suitable image found. Did find: /Library/ScriptingAdditions/Adobe Unit Types.osax/Contents/MacOS/Adobe Unit Types: no matching architecture in universal wrapper
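    Since the script moved the original config to $conf.$$ (the shell's process ID appended as a suffix), the original file should still be on disk. A sketch for restoring it, assuming the default Snow Leopard config path, with 12345 standing in for the actual numeric suffix:
      # find the backup the script created (httpd.conf.<pid>)
      ls -l /etc/apache2/httpd.conf*
      # move the original back into place and restart Apache
      sudo mv /etc/apache2/httpd.conf.12345 /etc/apache2/httpd.conf
      sudo apachectl restart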


  • PHP 5.3 Not Logging

    - by BHare
    I have set error_log = "/var/log/apache2/php_errors.log" and made sure errors were being logged. I have set the file to be owned by the www-data owner and group and even set the permissions to 777. I have confirmed with phpinfo() that error_log is correctly set; however, logging still only happens in my vhost's Apache error log. The following is my php.ini for 5.3.3-7 on Debian Squeeze with Apache 2. The top is populated with comments on the directives I have been interested in or have changed; I have deleted all other comments to save space. Full versions here: http://pastebin.com/AhWLiQBR
      [PHP]
      ;short_open_tag = On ;allow_call_time_pass_reference = On
      ;error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED
      ;display_errors = On ;display_startup_errors = Off ;log_errors = On ;html_errors = On
      error_log = "/var/log/apache2/php_errors.log"
      engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On
      output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off
      unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = On
      safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir =
      safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH
      disable_functions = disable_classes = expose_php = On
      max_execution_time = 30 max_input_time = 60 memory_limit = 128M
      error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED
      display_errors = On display_startup_errors = Off
      log_errors = On log_errors_max_len = 1024
      ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = On
      variables_order = "GPCS" request_order = "GPC"
      register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On
      post_max_size = 100M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off
      auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off
      file_uploads = On upload_tmp_dir = /tmp upload_max_filesize = 100M max_file_uploads = 20
      allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60
      [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo]
      [Pdo_mysql]
      pdo_mysql.cache_size = 2000 pdo_mysql.default_socket=
      [Phar]
      [Syslog]
      define_syslog_variables = Off
      [mail function]
      SMTP = localhost smtp_port = 25 mail.add_x_header = On
      [SQL]
      sql.safe_mode = Off
      [ODBC]
      odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1
      odbc.defaultlrl = 4096 odbc.defaultbinmode = 1
      [Interbase]
      ibase.allow_persistent = 1 ibase.max_persistent = -1 ibase.max_links = -1
      ibase.timestampformat = "%Y-%m-%d %H:%M:%S" ibase.dateformat = "%Y-%m-%d" ibase.timeformat = "%H:%M:%S"
      [MySQL]
      mysql.allow_local_infile = On mysql.allow_persistent = On mysql.cache_size = 2000
      mysql.max_persistent = -1 mysql.max_links = -1
      mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password =
      mysql.connect_timeout = 60 mysql.trace_mode = Off
      [MySQLi]
      mysqli.max_persistent = -1 mysqli.allow_persistent = On mysqli.max_links = -1 mysqli.cache_size = 2000
      mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw =
      mysqli.reconnect = Off
      [mysqlnd]
      mysqlnd.collect_statistics = On mysqlnd.collect_memory_statistics = Off
      [OCI8]
      [PostgresSQL]
      pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1
      pgsql.ignore_notice = 0 pgsql.log_notice = 0
      [Sybase-CT]
      sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1
      sybct.min_server_severity = 10 sybct.min_client_severity = 10
      [bcmath]
      bcmath.scale = 0
      [browscap]
      [Session]
      session.save_handler = files session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID
      session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly =
      session.serialize_handler = php session.gc_probability = 0 session.gc_divisor = 1000 session.gc_maxlifetime = 1440
      session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0
      session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0
      session.hash_function = 0 session.hash_bits_per_character = 5
      url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
      [MSSQL]
      mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1
      mssql.min_error_severity = 10 mssql.min_message_severity = 10
      mssql.compatability_mode = Off mssql.secure_connection = Off
      [Assertion] [COM] [mbstring] [gd] [exif]
      [Tidy]
      tidy.clean_output = Off
      [soap]
      soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 soap.wsdl_cache_limit = 5
      [sysvshm]
      [ldap]
      ldap.max_links = -1
      [mcrypt] [dba]
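    On Debian, the Apache and CLI SAPIs read separate ini files (/etc/php5/apache2/php.ini vs. /etc/php5/cli/php.ini), so it's worth confirming which file each SAPI actually loads and whether PHP can write to the target; note that /var/log/apache2 is normally root:adm mode 0750 on Debian, so www-data may not be able to reach a file inside it even if the file itself is 777. A sketch:
      # which ini file is in effect for the CLI (compare with phpinfo() under Apache)
      php -i | grep 'Loaded Configuration File'
      # write a test entry and see where it lands
      php -r 'error_log("error_log test entry");'
      # can www-data write to the target at all?
      sudo -u www-data touch /var/log/apache2/php_errors.log
      tail /var/log/apache2/php_errors.log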


  • Unable to add IPv6 address to sendmail access list

    - by David M. Syzdek
    I am running Sendmail 8.14.4 on Slackware 13.37. I have the following in my /etc/mail/access file, and it works without any errors:
      Connect:127 OK
      Connect:10.0.1 RELAY # Net: office
      Connect:50.116.6.8 RELAY # Host: glider
      Connect:96.126.127.87 RELAY # Host: kite
    The above configuration also allows me to send an e-mail via IPv6 to a local user on the mail server. However, it does not allow my office to relay via IPv6. I have tried two ways of adding IPv6 networks to my access file.
    Method 1:
      Connect:127 OK
      Connect:10.0.1 RELAY # Net: office
      Connect:IPv6:2001:470:b:84a RELAY # Net: office
      Connect:50.116.6.8 RELAY # Host: glider
      Connect:96.126.127.87 RELAY # Host: kite
    Method 2:
      Connect:127 OK
      Connect:10.0.1 RELAY # Net: office
      Connect:[IPv6:2001:470:b:84a] RELAY # Net: office
      Connect:50.116.6.8 RELAY # Host: glider
      Connect:96.126.127.87 RELAY # Host: kite
    However, with either method 1 or method 2, I am unable to relay e-mail messages through the host.
    /var/log/maillog entry:
      May 31 11:57:15 freshsalmon sm-mta[25500]: ruleset=check_relay, arg1=[IPv6:2001:470:b:84a:223:6cff:fe80:35dc], arg2=IPv6:2001:470:b:84a:223:6cff:fe80:35dc, relay=[IPv6:2001:470:b:84a:223:6cff:fe80:35dc], reject=553 5.3.0 RELAY # Net:office
    Test session from telnet:
      syzdek@blackenhawk$ telnet -6 freshsalmon.office.example.com 25
      Trying 2001:470:b:84a::69...
      Connected to freshsalmon.office.bindlebinaries.com.
      Escape character is '^]'.
      220 office.example.com ESMTP Sendmail 8.14.4/8.14.4; Thu, 31 May 2012 11:57:15 -0800
      HELO blackenhawk.office.example.com
      250 office.example.com Hello [IPv6:2001:470:b:84a:223:6cff:fe80:35dc], pleased to meet you
      MAIL FROM:[email protected]
      553 5.3.0 RELAY # Net:office
    What is the correct way to add an IPv6 address/network to the access file in sendmail?
    Update: Apparently my access file was not working regardless. Removing the comments at the end of the lines seems to have fixed the problem. Here are the lines that worked:
      Connect:127 OK
      Connect:IPv6:::1 OK
      # Net: office
      Connect:10.0.1 RELAY
      Connect:IPv6:2001:470:b:84a RELAY
      # Host: glider
      Connect:50.116.6.8 RELAY
      Connect:IPv6:2600:3c01::f03c:91ff:fedf:381a RELAY
      # Host: kite
      Connect:96.126.127.87 RELAY
      Connect:IPv6:2600:3c00::f03c:91ff:fedf:52a4 RELAY
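    One step that's easy to miss: the access file is usually consulted through its compiled database map (the FEATURE(`access_db') setup), so after editing it the map has to be rebuilt before sendmail sees the change. A sketch (the Slackware init-script path is an assumption):
      # rebuild the access database
      makemap hash /etc/mail/access < /etc/mail/access
      # and restart sendmail
      /etc/rc.d/rc.sendmail restart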


  • Puppet master fails to run under nginx+passenger configuration as rack app, works when run as system service

    - by Anadi Misra
    I get this error when I start the nginx server with the Passenger module configured and the puppet master set up to run through Rack:
      [anadi@bangda ~]# tail -f /var/log/nginx/error.log
      [ pid=19741 thr=23597654217140 file=utils.rb:176 time=2012-09-17 12:52:43.307 ]: *** Exception LoadError in PhusionPassenger::Rack::ApplicationSpawner (no such file to load -- puppet/application/master) (process 19741, thread #<Thread:0x2aec83982368>):
      from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
      from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
      from config.ru:13
      from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
      from /usr/local/lib/ruby/gems/1.8/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
      from config.ru:1:in `new'
      from config.ru:1
    Here is the config.ru:
      [anadi@bangda ~]# cat /etc/puppet/rack/config.ru
      # a config.ru, for use with every rack-compatible webserver.
      # SSL needs to be handled outside this, though.
      # if puppet is not in your RUBYLIB:
      #$:.unshift('/usr/share/puppet/lib')
      $0 = "master"
      # if you want debugging:
      # ARGV << "--debug"
      ARGV << "--rack"
      require 'puppet/application/master'
      # we're usually running inside a Rack::Builder.new {} block,
      # therefore we need to call run *here*.
      run Puppet::Application[:master].run
    And the nginx configuration for the puppet master is as follows:
      [anadi@bangda ~]# cat /etc/nginx/conf.d/puppet-master.conf
      server {
        listen 8140 ssl;
        server_name bangda.mycompany.com;
        passenger_enabled on;
        passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
        passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
        access_log /var/log/nginx/puppet/master.access.log;
        error_log /var/log/nginx/puppet/master.error.log;
        root /etc/puppet/rack/public;
        ssl_certificate /var/lib/puppet/ssl/certs/bangda.mycompany.com.pem;
        ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangda.mycompany.com.pem;
        ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
        ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
        ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
        ssl_prefer_server_ciphers on;
        ssl_verify_client optional;
        ssl_verify_depth 1;
        ssl_session_cache shared:SSL:128m;
        ssl_session_timeout 5m;
      }
    However, when I run puppet through the usual puppetmasterd daemon, it works perfectly with no errors. Somehow the nginx + Passenger + Rack setup fails to initialize, while the same works when running the native puppetmaster daemon. Is there any configuration I am missing?
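    Since the failure is a plain LoadError on puppet/application/master, the first thing to confirm is whether the Ruby that Passenger spawns can find Puppet's libraries at all; a sketch (the path is the one already hinted at in config.ru's own comment):
      # can this Ruby load the puppet master application by itself?
      ruby -rpuppet/application/master -e 'puts "loaded"'
      # if that fails, uncomment the load-path line near the top of config.ru:
      #   $:.unshift('/usr/share/puppet/lib')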


  • Linux not picking up new partition correctly on emc pseudo device

    - by James
    Hi. We have a database server running Oracle RAC. We were recently running out of space on the main LUN that it is attached to, so I created a new 100GB LUN and concatenated this onto the existing LUN, creating a new MetaLUN. After some messing I managed to get Linux to recognise the new space. I then created a new partition on the pseudo device to use the new space. Previously, when I have done this on other systems, the next step was to create an ASM disk on the new partition and add this disk to the Oracle disk group. This, however, fails. I am aware of various issues with ASM and PowerPath, but I don't think that is the issue here, as while investigating I discovered that one of the underlying logical devices is not reflecting the size change. See below.
    powermt displays all of the underlying logical units:
      [root@XXXXX~]# powermt display dev=emcpowerd
      Pseudo name=emcpowerd
      CLARiiON ID=CKM00091500009 [VFRAC2]
      Logical device ID=6006016030312200787502866C65DE11 [LUN 30]
      state=alive; policy=CLAROpt; priority=0; queued-IOs=0
      Owner: default=SP A, current=SP A
      Array failover mode: 1
      ==============================================================================
      ---------------- Host --------------- - Stor - -- I/O Path - -- Stats ---
      ### HW Path I/O Paths Interf. Mode State Q-IOs Errors
      ==============================================================================
      3 qla2xxx sde SP A0 active alive 0 0
      3 qla2xxx sdj SP B0 active alive 0 0
      4 qla2xxx sdo SP A1 active alive 0 0
      4 qla2xxx sdt SP B1 active alive 0 0
    fdisk on the pseudo device shows the correct space:
      [root@XXXXX ~]# fdisk -l /dev/emcpowerd
      Disk /dev/emcpowerd: 429.4 GB, 429496729600 bytes
      255 heads, 63 sectors/track, 52216 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Device Boot Start End Blocks Id System
      /dev/emcpowerd1 1 39162 314568733+ 83 Linux
      /dev/emcpowerd2 39163 52216 104856255 83 Linux
    fdisk on one of the logical units is wrong:
      [root@XXXXX~]# fdisk -l /dev/sde
      Disk /dev/sde: 322.1 GB, 322122547200 bytes
      255 heads, 63 sectors/track, 39162 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Device Boot Start End Blocks Id System
      /dev/sde1 1 39162 314568733+ 83 Linux
      /dev/sde2 39163 52216 104856255 83 Linux
    fdisk on the rest of the units is fine:
      [root@XXXXX ~]# fdisk -l /dev/sdj
      Disk /dev/sdj: 429.4 GB, 429496729600 bytes
      255 heads, 63 sectors/track, 52216 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Device Boot Start End Blocks Id System
      /dev/sdj1 1 39162 314568733+ 83 Linux
      /dev/sdj2 39163 52216 104856255 83 Linux
    Also, when I created the partition, Linux did not create any entries in the /dev directory for the second partition, so I created these manually:
      [root@XXXXX dev]# mknod sde2 b 8 66
      [root@XXXXX dev]# ls -al sd[ejot]?
      brw-r----- 1 root disk 8, 65 Dec 29 14:20 sde1
      brw-r--r-- 1 root disk 8, 66 Apr 8 20:31 sde2
      brw-r----- 1 root disk 8, 145 Dec 29 14:19 sdj1
      brw-r--r-- 1 root disk 8, 146 Apr 8 20:33 sdj2
      brw-r----- 1 root disk 8, 225 Apr 6 23:12 sdo1
      brw-r--r-- 1 root disk 8, 226 Apr 8 20:33 sdo2
      brw-r----- 1 root disk 65, 49 Dec 29 14:19 sdt1
      brw-r--r-- 1 root disk 65, 50 Apr 8 20:33 sdt2
    This is a production server that we cannot easily reboot. Any ideas would be much appreciated. J
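    One non-disruptive thing to try is asking the kernel to re-read the capacity of just the stale path and then re-read its partition table; a sketch (device name from the question; requires a kernel that exposes the SCSI rescan attribute):
      # re-read the capacity of the stale path
      echo 1 > /sys/block/sde/device/rescan
      # then re-read its partition table
      blockdev --rereadpt /dev/sde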


  • Windows: Impact of clean install Service Pack 2 to applications & data?

    - by Thomas Matthews
    My Windows Vista Home Premium system is corrupt and won't install Service Pack 2. I have followed all the advice from Microsoft and still no luck. I would like to perform a clean install of Vista, then SP1, and then SP2. My concern is the effect of the clean install on the registry, my apps and all my data.
    My plan:
      1. Download the Vista Service Pack 1 (SP1) ISO and write it to DVD.
      2. Download the Vista Service Pack 2 (SP2) ISO and write it to DVD.
      3. Back up all data, applications and the registry to an external hard drive (file copy, not disk image).
      4. ?? Format the hard drive ?? (is this necessary?)
      5. Install Vista from DVDs / CDs.
      6. Install SP1 from DVD.
      7. Install SP2 from DVD.
      8. Restore the registry, applications and data from the external hard drive.
    My questions:
      1. Is formatting the hard drive a necessary step?
      2. Will restoring the registry from the backup corrupt the system?
      3. Should I use Windows Backup or ZIP/RAR?
      4. Any gotchas that I should look out for?
    Background: I am using Windows Vista Home Premium with SP1. The sfc program does not finish, due to a resources problem (even when run as administrator). I have 5 users on it. After a while, the screen goes black and shows an error message window about an error with login.scr. Standard accounts display a black screen and can't run any applications; administrative accounts have no problems (even standard accounts converted to administrative have no problems). The CBS log contains a lot of 0x8000ffff and E_UNEXPECTED errors (which Microsoft defines as catastrophic failure). This is the reasoning behind performing a clean install up to Service Pack 2.
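    For step 3, the file-level backup can be scripted with robocopy, which ships with Vista; a sketch (the drive letters are assumptions):
      robocopy C:\Users E:\Backup\Users /MIR /XJ /R:1 /W:1
      :: /XJ skips NTFS junction points, which otherwise cause endless recursion inside Vista profile folders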


  • Microsoft signed driver appears as publisher not verified

    - by Priyanka Gupta
    Task at hand: Microsoft-sign drivers on Windows 7. I Microsoft-signed my driver package 3 times, every time thinking I might have missed a step or something. However, I cannot seem to get rid of the Windows Security error message "Windows can't verify the publisher of this driver software". This is not the first time I have signed driver packages; I was successfully able to sign other driver packages a few months ago. However, with this driver package I keep getting the Windows Security dialog box. Here's the procedure I follow:
      1. Create a new cat file using the INF2CAT tool.
      2. Self-sign the driver using a VeriSign Class 3 Public Primary Certification Authority - G5.cer.
      3. Run the Microsoft tests on DTM servers and clients with the devices that use this driver.
      4. Create the WLK submission package.
      5. Self-sign the cab file.
      6. Submit the package for certification.
    The catalog file that comes back after successfully passing the tests gives the name of the signer as "Microsoft Windows Hardware Compatibility Publisher". When I check the validity of the signature using SignTool, it says the signature is valid. However, when I try to install the driver with the newly signed catalog file, Windows complains. Any ideas?
    Edit 11/12/2012 (reply to Eugene's comment): Thanks for the help, Eugene. Yes, I did sign two other driver packages before; one of them was a modified version of the WinUSB driver. I am using the same certificate I used when I signed those two driver packages a few months ago. It costs $250 per signing from Microsoft. I would think that Microsoft would complain about it during certification if the certificate were wrong. I use the following command to self-sign the CAT file. I don't have to specify the certificate name, as there's only one certificate in the directory:
      signtool sign /v /a /n CompanyName /t http://timestamp.verisign.com/scripts/timestamp.dll OurCatalogFile.cat
    Below is the result from running the verify command on the Microsoft-signed OurCatalogFile.cat:
      C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\x64>signtool verify /v "C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat"
      Verifying: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat
      Hash of file (sha1): BDDF39B1DD95881B462164129758A7FFD54F47D9
      Signing Certificate Chain:
        Issued to: Microsoft Root Certificate Authority
        Issued by: Microsoft Root Certificate Authority
        Expires: Sun May 09 18:28:13 2021
        SHA1 hash: CDD4EEAE6000AC7F40C3802C171E30148030C072
        Issued to: Microsoft Windows Hardware Compatibility PCA
        Issued by: Microsoft Root Certificate Authority
        Expires: Thu Jun 04 16:15:46 2020
        SHA1 hash: 8D42419D8B21E5CF9C3204D0060B19312B96EB78
        Issued to: Microsoft Windows Hardware Compatibility Publisher
        Issued by: Microsoft Windows Hardware Compatibility PCA
        Expires: Wed Sep 18 18:20:55 2013
        SHA1 hash: D94345C032D23404231DD3902F22AB1C2100341E
      The signature is timestamped: Tue Nov 06 11:26:48 2012
      Timestamp Verified by:
        Issued to: Microsoft Root Authority
        Issued by: Microsoft Root Authority
        Expires: Thu Dec 31 02:00:00 2020
        SHA1 hash: A43489159A520F0D93D032CCAF37E7FE20A8B419
        Issued to: Microsoft Timestamping PCA
        Issued by: Microsoft Root Authority
        Expires: Sun Sep 15 02:00:00 2019
        SHA1 hash: 3EA99A60058275E0ED83B892A909449F8C33B245
        Issued to: Microsoft Time-Stamp Service
        Issued by: Microsoft Timestamping PCA
        Expires: Tue Apr 09 16:53:56 2013
        SHA1 hash: 1895C2C907E0D7E5C0292B92C6EA8D0E236F525E
      Successfully verified: C:\Users\logotest\Documents\serialdriversigning\OurCatalogFile.cat
      Number of files successfully Verified: 1
      Number of warnings: 0
      Number of errors: 0
    Thank you!
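    Worth noting: a plain signtool verify checks the default Authenticode policy, while driver installation is judged against the kernel-mode driver signing policy; that check can be run explicitly. A sketch (catalog name from the question; the driver binary name is a placeholder):
      signtool verify /kp /v /c OurCatalogFile.cat yourdriver.sys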


  • Add Mirror for volumes other than the last one in Windows 7 (disk "not up-to-date")

    - by rakslice
    I'm using Windows 7 x64 Ultimate. I have an existing 4TB disk with 3 NTFS volumes and a new, blank 3TB disk, and I'm trying to mirror the volumes onto the new disk. My Windows install is on an SSD, which is Disk 0. The 4TB disk with the volumes is Disk 1, and the new blank disk is Disk 2. I can add a mirror successfully for the last volume, but when I try to add a mirror for the first volume I immediately get errors (see below). Is there something special I need to do to add a mirror for a volume other than the last one?
    More info: I opened Disk Management, right-clicked on the first volume on the existing disk, went to Add Mirror, and selected the new disk. The first time I did this I was prompted to convert the new disk to a dynamic disk, which I approved. Subsequently I got this message:
      The operation failed to complete because the Disk Management console view is not up-to-date. Refresh the view by using the refresh task. If the problem persists close the Disk Management console, then restart Disk Management or restart the computer.
    I've refreshed Disk Management, restarted the computer, and converted the new disk to basic and back to dynamic, but I still get that error message. Looking around for suggestions of a workaround, I saw a suggestion to use the diskpart command-line tool. Running diskpart from the Start Menu as Administrator, I did select volume 2 (the first volume I want to mirror) and then add disk 2 (the new disk), and received a somewhat similar error:
      Virtual Disk Service error: The disk's extent information is corrupted.
      DiskPart has referenced an object which is not up-to-date. Refresh the object by using the RESCAN command. If the problem persists exit DiskPart, then restart DiskPart or restart the computer.
    A rescan appears to be successful:
      DISKPART> select disk 2
      Disk 2 is now the selected disk.
      DISKPART> rescan
      Please wait while DiskPart scans your configuration...
      DiskPart has finished scanning your configuration.
    but attempting to add the mirror again resulted in the same error. The only similar report I found online was this: http://www.sevenforums.com/hardware-devices/335780-unable-mirror-all-but-last-partition-drive.html
    Based on that, I attempted to mirror the last volume on the disk to the new disk using diskpart, and that started successfully; it is currently resynchronizing.
    More background: In the course of dealing with a failing 3TB hard drive, I bought a replacement 4TB drive and installed it, copied the partitions from the failing drive to it using MiniTool Partition Wizard Home, then removed the failing drive and was up and running again normally. Now I've received a warranty replacement for the failing drive and installed it, and I'm attempting to mirror my partitions to it.
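    For reference, the diskpart sequence described above looks like this (volume and disk numbers from the question; list volume is useful for confirming the numbering before mirroring):
      DISKPART> list volume
      DISKPART> select volume 2
      DISKPART> add disk=2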


  • Authenticate by libpam-mysql and libnss-mysql (CentOS)

    - by Chris
    I'm trying to get MySQL to function as a backend for authenticating users on CentOS 6.3. So far I have successfully installed and configured libnss-mysql. I can test this by doing:
      # groups testuser
      testuser : sftp
    testuser is a member of the sftp group; in fact, all MySQL-based user accounts will be hardcoded to it. The sftp group is chrooted and forced to use internal-sftp, so its members cannot do anything but access their home directories. Then I configured pam-mysql and PAM to allow MySQL logins. This also works, but only when SELinux is not enforcing: when I do setenforce 1, users can no longer log in. Error:
      Permission denied, please try again.
    This is my pam_mysql.conf file:
      users.host=localhost
      users.db_user=nss-pam-user
      users.db_passwd=***********
      users.database=sftpusers
      users.table=users
      users.user_column=username
      users.password_column=password
      users.password_crypt=6
      verbose=1
    My /etc/pam.d/sshd:
      #%PAM-1.0
      auth sufficient pam_sepermit.so
      auth include password-auth
      auth required pam_mysql.so config_file=/etc/pam_mysql.conf
      account sufficient pam_nologin.so
      account include password-auth
      account required pam_mysql.so config_file=/etc/pam_mysql.conf
      password include password-auth
      # pam_selinux.so close should be the first session rule
      session required pam_selinux.so close
      session required pam_loginuid.so
      # pam_selinux.so open should only be followed by sessions to be executed in the user context
      session required pam_selinux.so open env_params
      session optional pam_keyinit.so force revoke
      session include password-auth
    And, to be complete, the contents of some log files.
    /var/log/secure:
      Nov 20 14:52:20 hostname unix_chkpwd[4891]: check pass; user unknown
      Nov 20 14:52:20 hostname unix_chkpwd[4891]: password check failed for user (testuser)
      Nov 20 14:52:20 hostname sshd[4880]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.10.107 user=testuser
      Nov 20 14:52:22 sftpusers sshd[4880]: Failed password for testuser from 192.168.10.107 port 51849 ssh2
    /var/log/audit/audit.log:
      type=USER_AUTH msg=audit(1353420107.070:812): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed'
      type=USER_AUTH msg=audit(1353420112.312:813): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:authentication acct="testuser" exe="/usr/sbin/sshd" hostname=192.168.10.107 addr=192.168.10.107 terminal=ssh res=failed'
      type=USER_AUTH msg=audit(1353420112.456:814): user pid=5285 uid=0 auid=500 ses=24 subj=unconfined_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=password acct="testuser" exe="/usr/sbin/sshd" hostname=? addr=192.168.10.107 terminal=ssh res=failed'
    I tried to let audit2why explain the problem, but it remains silent even though there are some errors. Does anyone see the problem? Thanks!
    EDIT: It turns out it's almost working with setenforce 0: I can mkdir foobar, but if I do a single ls I get an error:
      Received message too long 16777216
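    When audit2why shows nothing useful, a common next step is generating a local policy module straight from the recorded denials; a sketch (the module name is a placeholder):
      # build an allow module from the denials in the audit log
      grep denied /var/log/audit/audit.log | audit2allow -M pam_mysql_local
      # load it
      semodule -i pam_mysql_local.pp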


  • Unable to commit file through svn, server sent truncated HTTP response body

    - by Rocket3G
    I have my own VPS, on which I want to run a simple SVN + ChiliProject setup. I have re-installed SVN, Chili and the OS several times, and it always works for a couple of hours or days and then just stops working. Well, everything works except that I can't upload any files: committing directories seems to work just fine, but when I try to commit a file it breaks. I have an error log file, which gives me the following when I try to commit something:
      x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "OPTIONS /project HTTP/1.1" 200 149
      x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 346
      x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 401 345
      x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "MKACTIVITY /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 201 262
      x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "PROPFIND /project HTTP/1.1" 207 236
      x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/vcc/default HTTP/1.1" 201 271
      x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PROPPATCH /project/!svn/wbl/c11d45ac-86b6-184a-ac5a-9a1105d64563/1 HTTP/1.1" 207 267
      x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "CHECKOUT /project/!svn/ver/1 HTTP/1.1" 201 271
      x.x.x.x - - [19/Oct/2013:00:01:46 +0200] "HEAD /project/index.html HTTP/1.1" 404 -
      x.x.x.x - admin [19/Oct/2013:00:01:46 +0200] "PUT /project/!svn/wrk/c11d45ac-86b6-184a-ac5a-9a1105d64563/index.html HTTP/1.1" 201 269
      x.x.x.x - admin [19/Oct/2013:00:02:04 +0200] "DELETE /project/!svn/act/c11d45ac-86b6-184a-ac5a-9a1105d64563 HTTP/1.1" 204 -
    So it seems that it PUTs the file (test.html) correctly, and somehow somewhere something is wrong. File permissions are alright; when I purposely set them wrong, I got errors, as expected, and they were about the file permissions being incorrect. The odd thing is that files won't get added, but directories are fine. I also have enough storage left on my machine. I should note, perhaps, that I use Ubuntu 12.04.3 with Ruby 1.9.3 and MySQL 14.14, and I have it set up so that ChiliProject handles the authentication and authorization for the project. That part works, because I can commit directories and read everything correctly, though I can't upload files.
    Help would really be appreciated, as I don't know what on earth is going on with this 'truncated HTTP response body'. I tried to read the responses with Wireshark, but it basically gave me the same information.
    With regards,
    P.S. I have no clue what the delay between PUT and DELETE is, as it's a file of a mere 500 bytes, so it's uploaded in approximately a second.
    P.P.S. I copied this question from Stack Overflow to this site, as I didn't know this site existed and another user suggested that I'd get more answers here, as it's basically a server fault.
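    To get Apache's own view of why the request after the PUT fails, raising the log verbosity for the Subversion vhost can help; a sketch using standard Apache directives (vhost and log paths are assumptions):
      # in the vhost serving /project
      LogLevel debug
      # then watch the error log while committing
      tail -f /var/log/apache2/error.log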


  • 401 - Unauthorized On Server 2008 R2 IIS 7.5

    - by mxmissile
    I have a web application deployed to a Server 2008 IIS 7.5 box. From remote it gives this error:
      401 - Unauthorized: Access is denied due to invalid credentials.
    (Remote means desktops on the same LAN.) I have tried several remote clients using different browsers (IE, FF, and Chrome), all with the same result. Hitting the application from the desktop of the server itself works flawlessly. However, I have not tried Firebug on the server desktop; I would assume it's still issuing a 401 status code yet returning the content anyway (see update #2). The application is using Anonymous Authentication and is written in .NET 4.0 ASP.NET using the MVC framework. Static content works fine, for example: http://server.com/content/image.jpg
    Sysinternals procmon returns these 2 results for each request: FAST IO DISALLOWED and PATH NOT FOUND. I have 2 other MVC apps running fine on the same server. I have checked the security on the folders and they all match. The app runs fine on a Server 2008 IIS 7.0 box. Nothing related to this shows up in the event log on the server. Pulling my hair out here; any troubleshooting tips?
    UPDATE #1: This just gets more WTF as I dig. If I click on the application in IIS Manager - Error Pages - Edit Feature Settings and select Detailed Errors, the app works remotely. I'm not leaving this on, so the problem is not solved yet; it's just more confusing.
    UPDATE #2: Using Firebug, I see that the status is still 401 Unauthorized, but the response is returning the application's correct HTML.
    UPDATE #3: Playing around with Failed Request Tracing, here is the WARNING request trace that is causing the 401:
      ModuleName ManagedPipelineHandler
      Notification 128
      HttpStatus 401
      HttpReason Unauthorized
      HttpSubStatus 0
      ErrorCode 0
      ConfigExceptionInfo
      Notification EXECUTE_REQUEST_HANDLER
      ErrorCode The operation completed successfully. (0x0)
    UPDATE #4: The regular IIS log is showing this:
      #Software: Microsoft Internet Information Services 7.5
      #Version: 1.0
      #Date: 2010-07-20 19:17:22
      #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
      2010-07-20 19:17:22 10.10.1.10 GET /Purchasing/Home - 80 - 10.10.1.12 Mozilla/5.0+(Windows;+U;+Windows+NT+6.1;+en-US;+rv:1.9.2.6)+Gecko/20100625+Firefox/3.6.6 401 0 0 4414
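    One way to confirm what anonymous authentication is actually set to for this application, as opposed to what the GUI shows, is to query it with appcmd; a sketch (site and application names are placeholders):
      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/Purchasing" -section:system.webServer/security/authentication/anonymousAuthentication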


  • dmidecode showing more ram slots than available?

    - by Jestep
    I have some failing RAM in a server, and I ran dmidecode to figure out what type of RAM I needed to replace it with. The server has 6 RAM slots, 4 of which are in use. When I run dmidecode, this is what I get:
      dmidecode 2.10
      SMBIOS 2.4 present.
      Handle 0x001F, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: 72 bits, Data Width: 64 bits, Size: 2048 MB
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 00, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0020, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: 72 bits, Data Width: 64 bits, Size: 2048 MB
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 01, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0021, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: Unknown, Data Width: Unknown, Size: No Module Installed
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 02, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0022, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: Unknown, Data Width: Unknown, Size: No Module Installed
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 03, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0023, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: 72 bits, Data Width: 64 bits, Size: 2048 MB
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 10, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0024, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: 72 bits, Data Width: 64 bits, Size: 2048 MB
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 11, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0025, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: Unknown, Data Width: Unknown, Size: No Module Installed
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 12, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
      Handle 0x0026, DMI type 17, 27 bytes (Memory Device)
        Array Handle: 0x001E, Error Information Handle: No Error, Total Width: Unknown, Data Width: Unknown, Size: No Module Installed
        Form Factor: DIMM, Set: 1, Locator: JXXX, Bank Locator: DIMM 13, Type: DDR2, Type Detail: Synchronous, Speed: 667 MHz
        Manufacturer: Not Specified, Serial Number: Not Specified, Asset Tag: Not Specified, Part Number: Not Specified
    Does anyone know why it would show 8 slots, with 4 empty, instead of 6 slots with 2 empty? Also, by my records and by other tools, the server has 16GB and not 8GB in it currently:
      grep MemTotal /proc/meminfo
      MemTotal: 16435808 kB
    The board is a Tyan S5372-LC, running CentOS 5.4 x64. Also, my error log is showing errors in bank 6. Is there any way to determine which slot bank 6 is in via dmidecode?
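    Two targeted queries can help here. The type 16 record (Physical Memory Array) reports how many memory-device records the BIOS exposes, which on some boards differs from the number of physical sockets, and the type 17 records carry the Locator and Bank Locator strings that encode the BIOS's intended slot mapping. A sketch:
      dmidecode -t 16                            # look at "Number Of Devices"
      dmidecode -t 17 | grep -E 'Locator|Size'   # slot labels vs. populated sizes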


  • How to unlock and remove a protected partition from Prestigio USB stick?

    - by mr.b
    Ok, so, I have one of those fancy schmancy devices, which was given to me by a frustrated friend of mine. The device is a Prestigio Leather 8GB, which identifies itself to a Linux host as:
      Bus 001 Device 006: ID 1307:0165 Transcend Information, Inc. 2GB/4GB Flash Drive
    Kernel messages as the USB device is plugged in:
      kernel: [ 2769.580042] usb 1-9: new high speed USB device using ehci_hcd and address 7
      kernel: [ 2769.714782] scsi8 : usb-storage 1-9:1.0
      kernel: [ 2770.713937] scsi 8:0:0:0: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
      kernel: [ 2770.714535] scsi 8:0:0:1: Direct-Access 8192MB flash drive 1.00 PQ: 0 ANSI: 2
      kernel: [ 2770.715734] sd 8:0:0:0: Attached scsi generic sg3 type 0
      kernel: [ 2770.716108] sd 8:0:0:1: Attached scsi generic sg4 type 0
      kernel: [ 2770.722175] sd 8:0:0:0: [sdc] 962560 512-byte logical blocks: (492 MB/470 MiB)
      kernel: [ 2770.722657] sd 8:0:0:0: [sdc] Write Protect is on
      kernel: [ 2770.731078] sd 8:0:0:1: [sdd] 14012416 512-byte logical blocks: (7.17 GB/6.68 GiB)
      kernel: [ 2770.731215] sdc:
      kernel: [ 2770.738251] sd 8:0:0:1: [sdd] Write Protect is off
      kernel: [ 2770.880328]
      kernel: [ 2770.885876] sd 8:0:0:0: [sdc] Attached SCSI removable disk
      kernel: [ 2770.887442] sdd: unknown partition table
      kernel: [ 2771.049605] sd 8:0:0:1: [sdd] Attached SCSI removable disk
    So, the symptoms are typical for U3-like devices: two separate devices inside a single flash device. Windows also sees it as two USB devices and mounts two separate drives: the first presents itself as a CD-ROM device holding write-protected content, and the second is a regular flash-disk partition that "can" be written to. However, it seems to be broken in some weird way, since it won't let me write anything to it, format it, nothing; but that's not the issue right now.
    Question: How can I unlock the entire USB stick so it appears to the system as a single 8GB device, which can be partitioned and used normally, without restrictions?
    Since it appeared to be a U3 device, I tried the standard utilities: both the U3 Uninstaller by u3.com (found on Softpedia) and the open-source u3_tool from SourceForge (on both Windows and Linux). The first utility failed to even detect the USB stick as a U3 device (it simply stood idle while I re-plugged the stick several times), while the second tool failed with an obscure error about a SCSI command being unable to do something (I might be able to provide exact errors when I switch back to Windows).
      u3_tool -i /dev/sg3 (Display device info)
    fails with
      u3_partition_info() failed: Device reported command failed: status 1
    ...and every other option fails with the same error, minus the first part, which states which command precisely has failed. So, apparently, this isn't a U3 device. Or, if it is, it doesn't behave like one. I have read on a few occasions that this device protection is done by a special command sent to the device which tells it to lock itself, and so there should be an unlock command that would set the drive straight. Does anyone have any idea what I could do to this device to fix it?
    P.S. I also mentioned a problem with being unable to use the second "drive", but I'll tackle that problem when (and if) I manage to merge those two devices into one...


  • Windows 7 The boot selection failed because a required device is inaccessible 0xc000000f

    - by piratejackus
    I have a problem with my Windows 7. Hardware: Acer 3820TG. Operating systems: Windows 7 and Ubuntu 10.04, dual boot.
    Case: When I try to boot Windows 7 I see an error:
      Windows failed to start. A recent hardware or software change might be the cause. To fix the problem: 1. Insert.... 2. .... ...
      status: 0xc000000f
      info: The boot selection failed because a required device is inaccessible ....
    I can't exactly remember what my last actions in Windows were. I have already searched for this error and applied the proposed solutions. I created a repair USB (because I have neither a CD-ROM drive nor a Windows 7 CD):
    - Repair operating system: it says it cannot repair it.
    - Check disk (chkdsk D: /f /r): it checks the disk without any problem or error, and it takes pretty long (more than an hour). But when I restart, still the same error.
    - I didn't create a restore point, so I pass on this option.
    - I don't have a system image.
    - I tried to run Windows recovery (I have a recovery partition), but there are just two options:
      1. Format the operating system but retain user data: this copies the files under Users to a c:\backup folder, but when I searched deeper I found people who had already tried this option and couldn't find their user files under the backup directory. Plus, I unfortunately have just one partition, D (a mistake, I know), because I always use Ubuntu. So this is not applicable in my situation.
      2. Format the entire system (Windows): I keep my valuable data in the Windows partition, but not in the user folder. I was reaching it from Windows.
    - I tried to repair the Windows boot with:
      bootrec /fixMBR
      bootrec /fixBoot
      bootrec /rebuildBCD
    I lost the whole GRUB menu and reinstalled it (ubuntuforums.org/showthread.php?t=1014708&page=29); nothing changed, same error. I created a thread in the Microsoft forums (http://social.answers.microsoft.com/Forums/en-US/w7install/thread/69517faf-850a-45fd-8195-6d4ed831f805), but I couldn't find a solution.
    Before I ran chkdsk from the USB repair disk, I couldn't mount the Windows (NTFS) partition from Ubuntu; I was getting "couldn't mount file system, error code 2". I tried to fix the NTFS partition from Ubuntu and got "segmentation fault". I also created a thread on ubuntuforums for this mount problem: http://ubuntuforums.org/showthread.php?t=1606427
    So, after chkdsk, I was able to mount the Windows partition, but all I see in this partition is chkdsk logs, no other data. Now, I don't think I lost my data, because I don't get any filesystem errors, just the boot section; but these log files in the Windows partition make me afraid. I see that Microsoft developers don't have a solution yet for this error. If you need any information to get more of an idea, I can give it; maybe I'm missing some points or it could be complicated. Thanks in advance.
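    After bootrec /fixMBR and /fixBoot, the BCD store often still needs to be rebuilt; if bootrec /rebuildBCD finds no installations, recreating the boot files with bcdboot from the repair environment is a common next step. A sketch (the drive letters as seen from the recovery environment are assumptions):
      bootrec /scanos
      bcdboot D:\Windows /s C: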


  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace Cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app, with Redis 2.4.16 running on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss, and there are no errors on any interfaces on our cloud or dedicated servers or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis, even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.
    Network topology:
      RHEL 6.1 cloud hosts <--> Alert Logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts
      (web nodes)                                   (two-way NAT)       (db hosts running redis)
    Ping from db host to web host:
      64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
      64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
      64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
      --- web1.xxxxxx.com ping statistics ---
      1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
      rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms
    Ping from web host to db host:
      64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
      64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
      64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
      --- data1.xxxxxx.com ping statistics ---
      1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
      rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms
    Redis config:
      daemonize yes
      pidfile /var/run/redis/6379/redis_6379.pid
      port 6379
      timeout 0
      loglevel debug
      logfile /var/lib/redis/log
      syslog-enabled yes
      syslog-ident redis-6379
      syslog-facility local0
      databases 16
      save 900 1
      save 300 10
      save 60 10000
      rdbcompression yes
      dbfilename dump-6379.rdb
      dir /var/lib/redis
      maxclients 10000
      maxmemory-policy volatile-lru
      maxmemory-samples 3
      appendfilename appendonly-6379.aof
      appendfsync everysec
      no-appendfsync-on-rewrite no
      auto-aof-rewrite-percentage 100
      auto-aof-rewrite-min-size 64mb
      slowlog-log-slower-than 10000
      slowlog-max-len 1024
      vm-enabled no
      vm-swap-file /tmp/redis.swap
      vm-max-memory 0
      vm-page-size 32
      vm-pages 134217728
      vm-max-threads 4
      hash-max-zipmap-entries 512
      hash-max-zipmap-value 64
      list-max-ziplist-entries 512
      list-max-ziplist-value 64
      set-max-intset-entries 512
      zset-max-ziplist-entries 128
      zset-max-ziplist-value 64
      activerehashing yes
    redis-cli info:
      redis_version:2.4.16
      redis_git_sha1:00000000
      redis_git_dirty:0
      arch_bits:64
      multiplexing_api:epoll
      gcc_version:4.4.6
      process_id:4174
      uptime_in_seconds:79346
      uptime_in_days:0
      lru_clock:1064644
      used_cpu_sys:13.08
      used_cpu_user:19.81
      used_cpu_sys_children:1.56
      used_cpu_user_children:7.69
      connected_clients:167
      connected_slaves:0
      client_longest_output_list:0
      client_biggest_input_buf:0
      blocked_clients:6
      used_memory:15060312
      used_memory_human:14.36M
      used_memory_rss:22061056
      used_memory_peak:15265928
      used_memory_peak_human:14.56M
      mem_fragmentation_ratio:1.46
      mem_allocator:jemalloc-3.0.0
      loading:0
      aof_enabled:0
      changes_since_last_save:166
      bgsave_in_progress:0
      last_save_time:1352823542
      bgrewriteaof_in_progress:0
      total_connections_received:286
      total_commands_processed:507254
      expired_keys:0
      evicted_keys:0
      keyspace_hits:1509
      keyspace_misses:65167
      pubsub_channels:0
      pubsub_patterns:0
      latest_fork_usec:690
      vm_enabled:0
      role:master
      db0:keys=6,expires=0
    Edit 1: added redis-cli info output.
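    Two built-in checks can help separate network stalls from server-side stalls; a sketch (the hostname is a placeholder, and the --latency mode requires a redis-cli build that supports it):
      redis-cli -h dbhost --latency        # round-trip latency as a client sees it
      redis-cli -h dbhost slowlog get 10   # the ten slowest commands recorded server-side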



  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID Controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet Controller.
    uname -a output:
      2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux
    The server is a file server running nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state: usually around 350; at this very moment it's at 590, and the server is almost unusable and stuck at 230mbit/s. If I run top and hit 1 to see per-core CPU usage, all 4 cores are at around 99% IO wait; if I run iotop, the nginx workers are the only processes producing any read load, currently at around 25MB/s. I have each of the workers bound to its own core.
    Initially I figured it was just the disks being bugged, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, which you can see the result of here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa
    Additionally, when the number of connections is low I have no problem getting good speed: if I wget over the local network it easily hits 60MB/sec. Right now I tried putting a file in /dev/shm, then symlinked a file from the public dir to it and used wget over the local network, and only got 50KB/s. Also, if I try to cp /dev/shm/test /root/test, it quickly copies around 740MB and then slows down heavily, again with iotop reporting 99% iowait.
    I'm not really sure how to go about figuring out what the problem is. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer, so there seems to be a network limit; yet that's fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
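    When every core is sitting in iowait, per-device statistics usually show whether the whole array or a single device is saturating; a sketch using iostat from the sysstat package:
      # extended stats in megabytes, refreshed every 5 seconds
      iostat -xm 5
      # watch await (average wait per I/O, in ms) and %util per device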


  • Unable to use TweetDeck on Windows due to "Ooops, TweetDeck can't find your data" and "Sorry, Adobe

    - by Matt
    I'm running Adobe AIR 1.5.2 (latest) on Windows 7 (64-bit RTM) and downloaded TweetDeck 0.31.1 (latest). When I run TweetDeck I get the following errors:
      Ooops, TweetDeck can't find your data
    and
      Sorry, Adobe AIR has a problem running on this computer
    Other AIR applications install and run fine. I've uninstalled both TweetDeck and AIR and reinstalled. Following the uninstalls I've also removed all on-disk references to both TweetDeck and AIR, but no luck.
    UPDATE: Using Process Monitor I did a trace of TweetDeck from the moment it launched until the first error occurred. I saw the following information in the output of the trace:
      1 5:22:18.6522338 PM TweetDeck.exe 5580 CreateFile D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??\d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak NAME INVALID Desired Access: Generic Write, Read Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Write, AllocationSize: 0
    In this trace output, TweetDeck.exe is trying to create the file
      D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??\d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak
    but the path specified is invalid. Looking at the path, you can see that it is indeed invalid. First, there's the "??" portion, which doesn't exist in the file system, since "?" is an invalid character in Windows/NTFS file systems. Additionally, the path actually seems to be composed of two parts (is the "??" a delimiter?):
      Part 1: D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??
      Part 2: d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak
    The problem with part 2 is that d:\Use... doesn't even exist. What seems to be happening is that TweetDeck is looking for the user credentials (the "PrivateEncryptedDatak" file) but is looking in the wrong place, can't find the file, and hence gives the error shown in the screenshot. I'm trying to determine how TweetDeck is getting this path. I searched the contents of all files on my hard disk hoping to find some TweetDeck or Adobe AIR configuration file containing this incorrect path, but I was unable to find anything.
    UPDATE: See Carl's comment regarding directory junctions and symbolic links under my accepted answer. This ended up being the problem.
    Edit by Gnoupi: People, the answer section is there to provide an actual ANSWER, not to say you have the same issue. It doesn't help anyone that you have the same problem. Eventually, if you think this is really worth mentioning, put it as a comment under the question. But simply, if what you want to add is not an answer to the question, then don't post it as an answer. This is not a forum. I recommend new users read the FAQ: http://superuser.com/faq
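    Since the cause turned out to be directory junctions and symbolic links in the profile path, a quick way to spot them on Windows 7 is to list only reparse points; a sketch (the username is a placeholder):
      dir /aL C:\Users
      dir /aL "C:\Users\myusername\AppData\Roaming"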

    Read the article

  • How do I correct a directory incorrectly copied into itself?

    - by Peter Boughton
    Given the following situation... <path>/mydir1/mydir2 ...where mydir2 should have overwritten mydir1 but was instead placed inside it, and both directories actually have the same name. How is that fixed? Attempting to do mv <path>/mydir/mydir/* <path>/mydir/ or mv <path>/mydir <path>/ results in: mv: cannot move `<path>/mydir/mydir` to a subdirectory of itself, `<path>/mydir` This seems stupidly simple, but it's late here and I can't figure it out. There are seventeen such directories to fix (the path differs for each, but the mydir name is the same). To confirm, the error message can be reproduced with this:

        # cd /path/to/directory
        # mv mydir/mydir ./
        mv: cannot move `mydir/mydir' to a subdirectory of itself, `./mydir'

    Also tried:

        # mv mydir/mydir/* mydir/
        mv: cannot move `mydir/mydir/otherdir1' to a subdirectory of itself, `mydir/otherdir1'
        mv: cannot move `mydir/mydir/otherdir2' to a subdirectory of itself, `mydir/otherdir2'

    and...

        # mv /path/to/directory/mydir/mydir/otherdir1 /path/to/directory/mydir/
        mv: cannot move `/path/to/directory/mydir/mydir/otherdir1' to a subdirectory of itself, `/path/to/directory/mydir/otherdir1'

    and using a temporary directory:

        # mv mydir/mydir ./mydir-temp
        # mv mydir-temp/* mydir/
        mv: cannot move `mydir-temp/otherdir1' to a subdirectory of itself, `mydir/otherdir1'
        mv: cannot move `mydir-temp/otherdir2' to a subdirectory of itself, `mydir/otherdir2'

    I found a similar question "How to recursively move all files (including hidden) in a subfolder into a parent folder in *nix?" which suggested that mv bar/{,.}* . would do this. But this also gives the same errors, as well as confusingly picking up . and .. (they come from the .* half of the {,.}* glob, which matches the . and .. directory entries).

        # cd mydir
        # mv mydir/{,.}* .
        mv: cannot move `mydir/otherdir1' to a subdirectory of itself, `./otherdir1'
        mv: cannot move `mydir/otherdir2' to a subdirectory of itself, `./otherdir2'
        mv: cannot move `mydir/.' to `./.': Device or resource busy
        mv: cannot move `mydir/..' to `./..': Device or resource busy
        mv: overwrite `./.file'? y

    Another similar question "linux mv command weirdness" suggests that mv doesn't overwrite and a copy is required.

        # cd mydir
        # cp -rf ./mydir/* ./
        cp: overwrite `./otherdir1/file1'? y
        cp: overwrite `./otherdir1/file2'? y
        cp: overwrite `./otherdir1/file3'?

    This appears to be working... except there are a lot of files (and dirs), and I don't want to confirm every one! Isn't the -f there supposed to prevent this? Ok, so cp was aliased to cp -i (which I found out with type cp) and bypassed by using \cp -rf ./mydir/* ./, which seems to have worked. Although I've solved the problem of getting dirs/files from one place to another, I'm still curious as to what's going on with the mv stuff: is this really a deliberate feature, as suggested by Warner?
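
    For what it's worth, a minimal sketch of a rename-first approach that sidesteps the subdirectory check, assuming the failures above aren't caused by symlinks or bind mounts tying the two directories together:

        cd /path/to/directory
        mv mydir mydir-old            # rename the outer directory first
        mv mydir-old/mydir ./mydir    # the inner dir no longer lives inside its target's path
        # inspect anything left behind in mydir-old, then:
        rm -r mydir-old

    Because the outer directory is renamed before the inner one moves, mv never sees a destination nested inside the source.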

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this and how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split across three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack:

        sda (Physical HDD)
        - md0 (RAID1)
        - md1 (RAID1)
        sdb (Physical HDD)
        - md0 (RAID1)
        - md1 (RAID1)
        md0 (Boot RAID)
        - ext4 (/boot)
        md1 (Data RAID)
        - LUKS container
          - LVM physical volume
            - LVM volume hypervisor-root
            - LVM volume hypervisor-home
            - LVM volume hypervisor-swap
            - … (virtual machine volumes)

    The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but also affects services running on the hypervisor system itself. The setup seems complex, but I'm sure the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems. On the old setup the following things were different:

        - Debian Lenny was the OS for both hypervisor and almost all guests
        - Xen software virtualisation (therefore no Virtio, also)
        - no LibVirt management
        - different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore)
        - we had no IPv6 connectivity
        - neither in the hypervisor nor in the guests did we have noticeable I/O latency problems

    According to the datasheets, the current hard drives and those of the old machine have an average latency of 4.12 ms.
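
    If it helps, one way to localize the latency is to benchmark each block device in the stack from the bottom up; a sketch assuming fio is installed (the jobs are read-only and therefore safe on live devices, but the device-mapper names below are assumptions, not taken from this setup):

        # 4k random reads at queue depth 1: compare the clat (completion
        # latency) figures between adjacent layers of the stack
        for dev in /dev/sda /dev/md1 /dev/mapper/md1_crypt /dev/mapper/vg-hypervisor--root; do
            echo "== $dev =="
            fio --name=lat-probe --filename="$dev" --readonly \
                --rw=randread --bs=4k --iodepth=1 --direct=1 \
                --ioengine=libaio --runtime=30 --time_based
        done

    A large jump in completion latency between two adjacent layers (for example between md1 and the LUKS mapping) would point at that layer as the culprit.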

    Read the article

  • Dovecot Virtual Users Not Authenticating

    - by blankabout
    We have a standard Postfix/Dovecot installation working perfectly with real users but cannot work out how to add virtual users; all virtual user login attempts fail with authentication errors. Following are snippets from the configuration files:

        /etc/postfix/main.cf:
        virtual_mailbox_domains = virtualexample.com
        virtual_mailbox_base = /var/spool/vhosts
        virtual_mailbox_recipients = hash:/etc/postfix/virtual_mailbox_recipients

        /etc/dovecot/dovecot.conf:
        !include conf.d/*.conf

        /etc/dovecot/conf.d/10-auth.conf:
        auth_mechanisms = cram-md5 digest-md5 plain
        passdb {
          driver = passwd-file
          # Path for passwd-file. Also set the default password scheme.
          args = scheme=cram-md5 /etc/cram-md5.pwd
        }

        /etc/cram-md5.pwd:
        [email protected]{MD5}$1$uIMvzy92$9Xt67B/qw4u6txkkxzne80

    This is a snippet from the log when a login attempt is made:

        auth: Debug: Loading modules from directory: /usr/lib64/dovecot/auth
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libauthdb_ldap.so
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libdriver_sqlite.so
        auth: Debug: Module loaded: /usr/lib64/dovecot/auth/libmech_gssapi.so
        auth: Debug: passwd-file /etc/cram-md5.pwd: Read 1 users
        auth: Debug: auth client connected (pid=21990)
        auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51774
        auth: Debug: client out: CONT#0111#011PDI1Njc0NjQ1NzQ3MTY0NTkuMTM0MTIxNzkwN0BncDM+
        auth: Debug: client in: CONT
        auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
        auth: Debug: client out: OK#0111#[email protected]
        auth: Debug: master in: REQUEST#0111630404609#01121990#0111#011b66b5f46b520a08e1d19d3d249be7073
        auth: Debug: passwd([email protected],2.2.2.2): lookup
        auth: passwd([email protected],2.2.2.2): unknown user
        auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
        auth: Debug: master out: NOTFOUND#0111630404609
        imap: Error: Authenticated user not found from userdb, auth lookup id=1630404609 (client-pid=21990 client-id=1)
        imap-login: Internal login failure (pid=21990 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=21993
        auth: Debug: auth client connected (pid=22010)
        auth: Debug: client in: AUTH#0111#011CRAM-MD5#011service=imap#011lip=1.1.1.1#011rip=2.2.2.2#011lport=143#011rport=51775
        auth: Debug: client out: CONT#0111#011PDcxMDkwNDY1NTQzODUzMDkuMTM0MTIxNzkyOEBncDM+
        auth: Debug: client in: CONT
        auth: Debug: passwd-file([email protected],2.2.2.2): lookup: [email protected] file=/etc/cram-md5.pwd
        auth: Debug: client out: OK#0111#[email protected]
        auth: Debug: master in: REQUEST#011343539713#01122010#0111#011e47b1345784e2845d59e794afa9a6bbe
        auth: Debug: passwd([email protected],2.2.2.2): lookup
        auth: passwd([email protected],2.2.2.2): unknown user
        auth: Error: userdb([email protected],2.2.2.2): user not found from userdb passwd
        auth: Debug: master out: NOTFOUND#011343539713
        imap: Error: Authenticated user not found from userdb, auth lookup id=343539713 (client-pid=22010 client-id=1)
        imap-login: Internal login failure (pid=22010 id=1) (auth failed, 1 attempts): user=, method=CRAM-MD5, rip=2.2.2.2, lip=1.1.1.1, mpid=22011

    It would appear that the user lookup is not working, even though the log suggests that Dovecot is using the /etc/cram-md5.pwd file and the user is configured in that same file.

    There are of course dozens of examples of using virtual users with Dovecot, but all the ones we have found either refer to Dovecot 1.x (we are using 2.x), use only virtual users (we must support both real AND virtual users), or rely on a MySQL database (we need to use a text file). Some hints about where we are going wrong would be very much appreciated.
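
    In case it's useful, the log shows the passdb lookup succeeding ("passwd-file ... lookup") while the userdb lookup fails ("user not found from userdb passwd"), so Dovecot 2.x may simply be missing a userdb that knows about the virtual users. A sketch of one way to declare that alongside real users; the vmail uid/gid and the home template are assumptions, not taken from this setup:

        userdb {
          # real system users keep resolving through passwd
          driver = passwd
        }
        userdb {
          # fallback for virtual users; assumed uid/gid and mailbox location
          driver = static
          args = uid=vmail gid=vmail home=/var/spool/vhosts/%d/%n
        }

    Dovecot consults the userdbs in order, so system users still resolve via passwd before the static fallback is tried.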

    Read the article

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The Background: I have a Windows 2008 Server Enterprise Edition with 25 user CAL licences. It hosts a domain with all users and a network-shared HP printer. The server has two network cards, and both these cards as well as all client machines are on the 192.168.1.* IP addressing scheme with subnet mask 255.255.255.0. Of the two network cards, viz. 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS. On 192.168.1.233 (i.e. the 2nd network card), the default gateway is 192.168.1.231 and the DNS address is 192.168.1.231. The server has three hard disks with capacities of 500 GB, 500 GB and 1 TB, partitioned as (C, D, E), (F, G) and (K), with partition K holding all user data in various shared folders. Each of these folders (on partition K) is mapped onto each user's computer according to the access rights given to them. The Problem: The server was installed about 6 months ago and to date it has not hung or given any problem even once. All the client computers are able to run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when any client computer tries to browse the network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days. On each client's machine, the network settings are as follows: IP address: 192.168.1.* where * is 1, 2, 3...; subnet mask: 255.255.255.0; default gateway: 192.168.1.231, which is a server card and the DNS address; preferred DNS server: 192.168.1.231. In the Advanced tab under WINS, LMHOSTS lookup is unticked and 'Default' is selected. Ideally, I would have loved to disable NetBIOS over TCP/IP, but some network printers cannot be accessed if that option is selected. Disabling NetBIOS would drastically reduce the NetBIOS broadcast traffic sent to all computers on the network for name resolution. On the server, I have WINS running; I have scavenged records, verified database integrity, removed tombstoned records, etc. The critical errors, shown only once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse the mapped network shared folders on the K drive. Kindly advise.

    Read the article

  • OpenLDAP with StartTLS broken on Debian Lenny

    - by mr.zog
    I'm trying to get OpenLDAP on Lenny to work with StartTLS. I have a Fedora 13 machine which I'm using as a client for testing. So far the Fedora client is ignoring the 'host' directive in /etc/ldap.conf when I try to connect using ldapsearch. The client wants to connect to 127.0.0.1:389 even if I specify -H ldaps://server.name when using ldapsearch. /etc/ldap.conf on the client machine is in mode 444. But even when I try connecting locally from an ssh session, I see errors like this: ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1) Someone hit me with a cluebat, plz. Update: you must use ~/.ldaprc for settings such as 'host'. Also, I just ran nmap against the ldap server and it showed 636 and 389 in an open state. Here's what prints to screen when I try to connect with ldapsearch -ZZ -x '(objectclass=*)' + -d -1:

        ldap_create
        ldap_extended_operation_s
        ldap_extended_operation
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP 192.168.10.41:636
        ldap_new_socket: 3
        ldap_prepare_socket: 3
        ldap_connect_to_host: Trying 192.168.10.41:636
        ldap_pvt_connect: fd: 3 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdb8 end=0x9bdbdd7 len=31
        0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1
        0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037
        ber_scanf fmt ({) ber:
        ber_dump: buf=0x9bdbdb8 ptr=0x9bdbdbd end=0x9bdbdd7 len=26
        0000: 77 18 80 16 31 2e 33 2e 36 2e 31 2e 34 2e 31 2e w...1.3.6.1.4.1.
        0010: 31 34 36 36 2e 32 30 30 33 37 1466.20037
        ber_flush2: 31 bytes to sd 3
        0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1
        0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037
        ldap_write: want=31, written=31
        0000: 30 1d 02 01 01 77 18 80 16 31 2e 33 2e 36 2e 31 0....w...1.3.6.1
        0010: 2e 34 2e 31 2e 31 34 36 36 2e 32 30 30 33 37 .4.1.1466.20037
        ldap_result ld 0x9bd3050 msgid 1
        wait4msg ld 0x9bd3050 msgid 1 (infinite timeout)
        wait4msg continue ld 0x9bd3050 msgid 1 all 1
        ** ld 0x9bd3050 Connections:
        * host: 192.168.10.41 port: 636 (default)
        refcnt: 2 status: Connected
        last used: Sun Jun 6 12:54:05 2010
        ** ld 0x9bd3050 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x9bd3050 request count 1 (abandoned 0)
        ** ld 0x9bd3050 Response Queue:
        Empty
        ld 0x9bd3050 response count 0
        ldap_chkResponseList ld 0x9bd3050 msgid 1 all 1
        ldap_chkResponseList returns ld 0x9bd3050 NULL
        ldap_int_select
        read1msg: ld 0x9bd3050 msgid 1 all 1
        ber_get_next
        ldap_read: want=8, got=0
        ber_get_next failed.
        ldap_err2string
        ldap_start_tls: Can't contact LDAP server (-1)
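
    For reference, a minimal client-side sketch of the relevant settings; the server name, base DN and CA path are placeholders, not taken from this setup:

        # ~/.ldaprc (per-user) or the system-wide ldap.conf (location varies by distribution)
        URI ldap://server.name
        BASE dc=example,dc=com
        # CA certificate that signed the server's cert; required for StartTLS verification
        TLS_CACERT /etc/ssl/certs/ca.pem

    With that in place, ldapsearch -ZZ -x '(objectclass=*)' should negotiate StartTLS on port 389 against the named server instead of falling back to 127.0.0.1.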

    Read the article

  • Moving from single-site to multi-site Active Directory has broken OWA proxying

    - by messick
    Originally we had the following setup: OfficeExch01 has the Mailbox role and the CAS role and is in the office. CoLoExch01 has just the CAS role, is internet-facing, and sits in a colo. Three AD domain controllers are in the default site. Users could go to https://webmail.whatever.com/owa, get proxied to OfficeExch01, and everything was great. Well, we recently set up a separate AD site and put a domain controller and the CoLoExch01 server in the new site. I also made that remote DC a Global Catalog. Now, users get the following error: Outlook Web Access is not available. If the problem continues, contact technical support for your organization and tell them the following: There is no Microsoft Exchange Client Access server that has the necessary configuration in the Active Directory site where the mailbox is stored. I also see event 41 errors in the logs: The Client Access server "https://webmail.xxxxxxx.com/owa" attempted to proxy Outlook Web Access traffic for mailbox "/o=XXXXX/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=xxxxxxk". This failed because no Client Access server with an Outlook Web Access virtual directory configured for Kerberos authentication could be found in the Active Directory site of the mailbox. The simplest way to configure an Outlook Web Access virtual directory for Kerberos authentication is to set it to use Integrated Windows authentication by using the Set-OwaVirtualDirectory cmdlet in the Exchange Management Shell, or by using the Exchange Management Console. If you already have a Client Access server deployed in the target Active Directory site with an Outlook Web Access virtual directory configured for Kerberos authentication, the proxying Client Access server may not be finding that target Client Access server because it does not have an internalUrl parameter configured. You can configure the internalUrl parameter for the Outlook Web Access virtual directory on the Client Access server in the target Active Directory site by using the Set-OwaVirtualDirectory cmdlet. Looking this up I see a lot of talk about ExternalURL and InternalURL settings. However, everything worked great until we made the new AD site. I also made sure the internal CAS server's /owa virtual directory is set to use Integrated Authentication. Is there something I need to do to allow Exchange to see that I've made these AD changes?
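
    For what it's worth, the event text points at two concrete settings; a sketch in the Exchange Management Shell, where the identity string and internal URL are placeholders for the actual internal CAS:

        # Enable Integrated Windows (Kerberos) auth on the internal CAS's OWA virtual directory
        Set-OwaVirtualDirectory -Identity "OfficeExch01\owa (Default Web Site)" `
            -WindowsAuthentication $true -BasicAuthentication $false

        # Give the proxy target an InternalUrl so the CoLo CAS can locate it
        Set-OwaVirtualDirectory -Identity "OfficeExch01\owa (Default Web Site)" `
            -InternalUrl "https://officeexch01.yourdomain.local/owa"

    After changing either setting, an iisreset on the internal CAS may be needed before proxying starts working again.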

    Read the article

< Previous Page | 462 463 464 465 466 467 468 469 470 471 472 473  | Next Page >